Introduction_to_Political_Philosophy_with_Steven_B_Smith
3_Socratic_Citizenship_Platos_Crito.txt
Professor Steven Smith: Okay, I want to begin with a question today, I have a question for you; well, you've been reading the Apology, you've now read the Apology and the Crito; you've had a little chance to think about these works. I'd just like to do a piece of survey research, how many of you, just a show of hands is all I need; how many of you believe Socrates is innocent and should be acquitted? Okay, and how many of you believe he is guilty and more or less got what he deserved? Higher please, okay. Not exactly the same proportion; a somewhat greater number believe in his innocence, I think, than did the Athenian jury, obviously. But let me just ask you in the brown shirt, just curious, why do you think he is innocent and should be acquitted? Student: Well, I felt that he [inaudible] and it seemed to me that the [inaudible] more on personal views [inaudible] and not exactly by concrete charges. Professor Steven Smith: And I noticed you had your, yes, why do you believe he was guilty and got what he deserved? Student: [Inaudible] what is just isn't somehow [inaudible] what is just is what society agrees [inaudible] and I mean he was going against people who had the authority to define words like in [inaudible] what is just is what society says is just and society says [inaudible]. Professor Steven Smith: Okay, so as Lincoln once said, both of you can't be right; neither of you may be right, but both of you can't be right. So this is a question that I want to continue today, to consider what the trial of Socrates means, and I want to begin by going back to a problem or a paradox that I ended the class with last time. That is to say that Socrates proposes, right, a new conception of what it is to be a citizen. He opposes, we have seen, the traditional, you might say Homeric, conception of the citizen, certain notions of citizen loyalty and patriotism created and shaped by the poetic tradition going back to Homer. He wants to replace that with a new kind of, I want to call it rational citizenship, philosophical citizenship. A view of citizenship that, again, relies on one's own powers of independent reason and judgment and argument. And in the course of defending this point of view, Socrates says, in an interesting passage, that he has spent his entire life pursuing private matters rather than public ones and has deliberately avoided public issues, issues of politics, and that raises a question. How can a citizen, how can this new kind of citizenship that he is proposing, how can any kind of citizenship be devoted just to private matters and not public? Citizenship seems to require engagement in the public sphere, the public realm. What does Socrates mean when he says his way of life has been devoted almost exclusively to private rather than to public matters? Well, the first thing we might think about is whether that's entirely true, whether he's being entirely candid with his audience; after all, consider the kind of investigations, the kind of interrogations that he has been pursuing since going to the Delphic Oracle and then following, at least, his interpretation of its mandate, these investigations of the politicians, the poets, the craftsmen and the like. He says these have been carried out in public; he has gone around in the market and in the open and in the public forum questioning, interrogating and obviously making a variety of people look foolish.
So this is hardly simply a private question or a private way of life, but perhaps he means simply that by pursuing a private life, again, he's going to rely almost exclusively on his own individual powers of reason and judgment, not to defer to or rely on such public goods as custom, as authority, as tradition, things of this sort. But I think Socrates means more than that, more than simply that he wishes to rely on the powers of private individual judgment. When he says that his way of life has been private, he means that he has pursued a policy of, let's call it "the principled abstinence from public life." Socrates is a great abstainer; he has abstained from participation in the collective actions of the city, actions that he believes could only entail a complicity in acts of public injustice. His own motto, if you want to ascribe him a motto, seems to be a variety of the Hippocratic Oath, you know, that doctors are famous for: "do no harm." And to do no harm he has required of himself a kind of principled abstention from public life. If George Bush described himself not long ago as the decider, you might call Socrates the abstainer. But what does he mean by, or what do I mean by, referring to his policies of abstention from political life? Do you remember he gives a couple of examples of this sort? One of them, remember, concerned his refusal to join in the judgment to condemn and execute the ten Athenian generals who had failed to collect the corpses, the bodies, of the men lost in a particular battle during the Peloponnesian War. This was a mark of great shame and disgrace. This was an actual event. There was a kind of judgment of collective guilt and they were all executed, the leaders, the generals of this particular battle, and Socrates tells how he refused to join the court in the judgment of their collective guilt, a true incident. And the second story you remember from your reading of the book was his telling, reminding the jury how he refused to participate. He was ordered by the Thirty, the hated Tyranny of the Thirty, he was ordered to assist in the arrest of a man known as Leon of Salamis, an arrest that would have and did in fact lead to Leon's execution, and Socrates tells how he, at considerable risk to himself, refused to participate in the arrest of this man. In both of these cases, I take it, Socrates' point is that his own individual moral integrity stands as a kind of litmus test, you might say, for whether to engage or disengage from political life. "I was the sort of man," he tells the jury, "I was the sort of man who never conceded anything to anyone contrary to what is just," no doubt also reminding them of his, again, his refusal to bow to the Thirty Tyrants in the case of Leon of Salamis. But this raises, I think, the central or a central point about Socratic citizenship or Socrates' view of citizenship, this kind of principled disobedience to the law, something like Thoreau's model of civil disobedience. Does this policy of principled disobedience, you might say, vindicate or indict Socrates on the charges of corruption and impiety that have been brought against him? Can a citizen, I will ask, put his own conscience above the law, as Socrates seems to do? This is a problem that we will see considerably later in the term vexes a very important political thinker by the name of Hobbes: whether an individual can somehow put their own sense of conscience or moral integrity even above the law.
What would a community of Socratic citizens look like, each one picking and choosing, you might say, the laws or the rules to obey or to follow or not to follow? Socrates is so concerned, it seems, with his individual, his private moral integrity that he says, in a sense, to the city of Athens, to the Assembly, to the courts, that he will not dirty his hands with public life, and again this is a question that we will see later on that Machiavelli takes very seriously--the question of whether or not politics, political life, requires one to dirty one's hands in the world. What kind of citizen is it, is he or she, who abstains from, maybe even rejects, the harsh necessities, the requirements, of political life? Socrates seems to be in some respects an example of what Hegel in the nineteenth century described as a beautiful soul, you know, someone, and he used that term ironically I should say, someone who puts their own private moral incorruptibility above all else, and we all probably know or have read about people like this. How does Socrates answer these charges of being, in a way, not just an abstainer but someone who keeps putting his own private moral conscience or integrity over and above the law? He tries to defend his point of view by arguing in a famous passage that his policy of abstinence actually carries important benefits to the city. He brings with it important benefits and, in the passage that I'm referring to, he defines himself as a gadfly, everyone will remember that, the gadfly who improves the quality of life in the city. At section 30d, Socrates says, let me read the passage: "So I, men of Athens, am now far from making a defense speech on my own behalf; I do it rather," he says, "on your behalf. What I say, I say for you," he appears to say, "so that you do not do something wrong concerning the gift of the god," referring to himself, "the gift of the god by voting to condemn me. For if you kill me," he continues, "you will not easily discover another of my sort who, even if it is rather ridiculous to say so, has simply been set upon the city by the god as though upon a great and well-born horse who is rather sluggish because of his great size and needs to be awakened by some gadfly. Just so, in fact, the god seems to me to have set me upon the city as someone of this sort. I awaken and persuade and reproach each one of you and I do not stop settling down everywhere upon you the whole day." So here we have Socrates not only declaring himself to be the gift of the god but telling us that he is a great benefactor of the city, that his example of the man of individual moral conscience brings with it great, as it were, public benefits. It is not on his behalf, he tells the audience, but on yours, his fellow citizens', that he does what he does. "You may not like me," he says to the jury, "but I am good for you." And furthermore he claims, in what can only be described as a sort of quasi-religious language, that he has no choice in the matter. This is not something he has chosen to do. He is, as he says, a gift from the god; he has been commanded, he argues, to do this. "Men of Athens," he says, "I will obey the god rather than you and as long as I breathe and am able to do so, I will certainly not stop philosophizing."
He seems to envelop himself and his way of life with a kind of religious imagery, the Delphic Oracle, the gift of the god image; he envelops his conception of citizenship within this religious language, and this will or should lead any reader of the Apology and any reader of Plato to ask an important question about Socrates' use of this language. We will see it again in different ways in the Republic. Is he sincere in saying this, in making this point, or is he somehow being ironical in his use of the religious tone or the religious register? He is, after all, on trial for his life, for the charge of impiety. Would it not seem that in order to rebut the charge of impiety he would use or adopt a kind of religious language that would resonate with the jury and rebut the accusation, perhaps even suggesting that he is the truly religious and pious one and not the ones like Anytus and Meletus who are bringing charges against him? Socrates seems, or could be seen, to be speaking not just ironically but provocatively in describing himself as a gift of the god. In a sense, you might ask what could be more ludicrous than Socrates declaring himself, or anyone declaring themselves, to be a gift of the divine. But, right, who would make such a claim? But in another respect he seems to take the divine calling very seriously, right, I mean does he not? It was only when the Delphic Oracle replied to Chaerephon, he tells that story, that no one was wiser than Socrates, that Socrates undertook this second sailing, as it were, his turn away from the investigation of purely natural phenomena to the study of the world of moral virtue and justice. He repeatedly maintains that the path he has taken is not of his own choosing but the result of a divine command. He is under some kind of divine edict and it is precisely his devotion to this divine command, to this particular kind of calling, that has led him to neglect his worldly affairs. He reminds the audience, at various points, of his extreme poverty, his neglect of his family and his obligations to his wife and children, as well as of the disgrace and the abuse that is directed against him by various public figures, he tells us. All of this is the result of his devotion to the divine command. He presents himself, in other words, as a human being of unparalleled piety and devotion who will risk life itself rather than quit the post that has been given to him. It's a very tall order that he claims for himself. Do we believe him in this respect, I mean an important question, do we believe him; again, is he being sincere in this or is he using this, as it were, as a kind of rhetoric with which to envelop himself? What is this peculiar kind of piety that he claims to practice? In many ways, in replying to the jury's verdict and the request that he cease philosophizing, Socrates explains himself in the following terms. Let me just quote one other passage briefly from the second speech that he gives to the jury after his conviction. "It is hardest of all to persuade you, to persuade some of you about this," he says, about his way of life. "For if I say that this is to disobey the god and because of this it is impossible to keep quiet, you will not be persuaded by me on the grounds that I am being ironic. And on the other hand," he says, "if I say that this even happens to be a very great good for a human being, that is, to make speeches every day about virtue, and that the unexamined life is not worth living for a human being, you will still less be persuaded by me."
In other words, what he seems to be saying in that passage, at around 37c and d, is that he realizes he is on the horns of a dilemma. On the one hand, he says, his reference to a divine mission, he explicitly says there, will be taken by his audience as just another instance of Socratic irony and insincerity. But, he says, if he tries to persuade people of the goodness and the justice of his way of life on simply rational grounds alone, to persuade them that the examined life alone is worth living, he says he will not be believed. So, what, you might say, is a Socratic citizen to do? He will either be accused of being ironic and not be believed, or he will simply be disbelieved if he attempts to defend himself on rational or philosophical grounds. That raises the question, I think, that I began the class with today. Should Socrates be tolerated, would a good society tolerate Socrates? This is the question raised by this dialogue and by the Crito as well. How far should freedom of speech, that is to say speech that borders on, even verges into, civic impiety, how far should such speech be tolerated? It's been an assumption of readers of Plato over the years that the trial of Socrates, that the execution of Socrates, presents the case for the fullest liberty or freedom of thought and discussion, and shows the evils or the dangers to a society of trying to persecute or suppress freedom of speech. But is this right; in other words, is that really Plato's teaching? Among the things Socrates says he cares deeply about is his calling, as he puts it, to do nothing but persuade you, both younger and older, not to care for your bodies and money but for how your soul will be in the best possible condition. How are we to understand this case about toleration and freedom of speech? The Apology presents Socrates, right, as making the most intransigent case for the philosopher as a radical critic or questioner of society. Socrates demands that the Athenians change not simply this or that aspect of their policy; he demands nothing less than a drastic, I would even say revolutionary, change in Athenian civic life, in Athenian civic culture. He tells his fellow citizens, right, that their lives are not worth living: only the examined life is worth living, and you are not living examined lives; therefore your lives cannot possibly have any value. Even when presented with the option to cease philosophizing, he refuses to do so on the ground that, again, he is acting under a command, a divine command, and cannot do otherwise. Is Plato asking us to regard Socrates as a man of high principle, standing up for what he believes in the face of death, or as a kind of revolutionary agitator who cannot and should not be tolerated by a society whose basic laws and values he will not accept? To some degree, I am inclined to answer that both of those questions have something to them. Maybe the answer, or an answer, to this question is revealed in the Crito, the companion dialogue, the companion speech that goes along with the Apology, although it typically gets much less attention than the Apology. In part because, I think, the dialogue presents, as it were, the city's case, the case of the city against Socrates; I mean, consider some of the following. If the Apology presents the philosopher's case against the city, Socrates' case against the city, the Crito presents the city's case against the philosopher.
Here, Socrates makes the case against himself; you might say he makes the case against himself better than his accusers in the courtroom did. So in the Crito, the speech between Socrates and the laws that forms, as it were, the kind of central action of the dialogue presents the case that Meletus and Anytus should have made against him. While the Apology seems to denigrate the political life as requiring complicity in injustice, and Socrates says he will have no part of laws or policies that entail injustice, the Crito makes the case for the dignity of the laws, the dignity or majesty of the city and its laws. While the Apology defends, again, a politics of principled abstinence or disobedience to the political life, the Crito makes the most complete and far-reaching case for obligation and obedience to the law that has perhaps ever been made. So how do we reconcile, if we can, these two apparently contradictory points of view in these two dialogues? These two dialogues, it should be evident, I mean, differ not only in content but in their dramatic context. Just consider, again, some of the following. The Apology is a speech given before a large and largely anonymous audience of over 500 persons, the Assembly, the Court. We see Socrates addressing, the only time in any Platonic dialogue, an audience of this size. The Crito, on the other hand, is a conversation between Socrates and a single individual, only one person. The Apology takes place in the Court of Athens, the most public of settings, while the Crito occurs within the darkness and confinement of a prison cell. The Apology shows Socrates defending himself and his life as a gift of the god that most truly benefits the city, but in the Crito, we see him bow down to the authority of the laws that he seems to have previously rejected. And finally, if the Apology presents Socrates as the first martyr for philosophy, the first person to die for the cause of philosophy, the Crito shows Socrates' trial and sentence as a case of justice delivered. These huge contrasts, again, force us to ask a question: what is Plato doing in presenting these two very different points of view, what is his point in presenting these two works with two such sharply contrasting perspectives on the relation of Socrates to the city? Was Plato confused, was he contradicting himself, was he--what was he doing? Big question. I hope I have time to answer it. So let's look into the Crito just a little bit. The Crito is named for a friend and disciple of Socrates who at the outset of the dialogue is sitting as a watchful guardian over his mentor. He urges Socrates to allow him to help him escape. The jailers have been bribed and escape would be made easy, but rather than trying to convince Crito directly, Socrates creates a dialogue; actually, you might say, a dialogue within the larger dialogue, a dialogue between himself and the laws of Athens where he puts forward the case against escape, that is to say the case against disobedience to the law, and the argument could be summarized as follows. No state can exist without rules. The first rule of any state is the rule that citizens are not free to set aside the rules, to choose among them which ones to obey and which to disobey. To engage in civil disobedience of any kind is not only to call this or that rule into question; it is to call into question the very nature of law, the very authority of rules. To question or disobey the law is tantamount to destroying the authority of the law.
The breaking of so much as a single law constitutes the essence of anarchy, the essence of lawlessness. It is a far-reaching argument for obedience to the law. The breaking of even a single law calls into question the authority of law as such. It's a very powerful argument that, in a way, Socrates makes against himself, putting that speech in the mouth of the laws. But he goes even further than this. The citizen, he says, owes his very existence to the laws. We are what we are because of the power and authority of the laws, the customs, the traditions, the culture that has shaped us. The laws, he says, have "begat" us, and the use of that term in our translation is clearly intended to resonate with something biblical. The citizen is, in a word, created, begat, by the laws themselves; they exercise a kind of paternal authority over us, such that disobedience to any law constitutes an act of impiety or disrespect toward the oldest things around us. The laws are not only like our parents, they are like our ancestors, the founding fathers, as we might say, who are owed respect and piety. In many ways, the Crito is, in some respect, the Platonic dialogue about piety. Socrates seems to accept here entirely the authority of the law; he does not offer arguments for noncompliance as he does in the Apology. So what happened all of a sudden to Socrates the apostle of civil disobedience, Socrates the apostle of principled abstention? He accepts entirely, or the laws force him to accept entirely, the covenant that every citizen has with the laws that binds them to absolute obedience. The question is, why does Socrates exhibit such proud defiance and independence of the laws in the Apology, and such total, even kind of mouse-like, acquiescence to the laws in the Crito? What happened to him, I mean why does he all of a sudden become so humble and acquiescent? What happened to his language about being the gift of the god? Well, that's something I want you to think about and maybe, I'm sure, you'll want to talk about in your sections, but let me propose something like the following to answer, or at least to respond to, this paradox, this question. The Apology and the Crito represent a tension, they represent even a conflict, between two more or less permanent and irreconcilable moral codes. The one, represented by Socrates, regards reason, that is to say, the sovereign reason of the individual, as the highest possible authority. It is the philosopher's reliance on his own reason that frees him from the dangerous authority of the state and safeguards the individual from complicity in the injustice and evils that seem to be a necessary part of political life. Here is Socrates the principled abstainer. But the other moral code is represented by the speech of the laws, where it is the laws of the community, its oldest and deepest beliefs and institutions, its constitution, its regime as we would say, its politeia, that are fundamentally obligatory on the individual and even take priority over the individual. The one point of view takes the philosophic life, the examined life, to be the one most worth living; the other takes the political life, the life of the citizen engaged in the business of deliberating, legislating, making war and peace, as the highest calling for a human being.
These constitute two irreconcilable alternatives, two different callings, so to speak, and any attempt, I think, to reconcile or to synthesize these two can only lead to a deep injustice to each. Plato seems to believe that each of us must choose somehow, must choose between one or the other of these two contenders for the most serious and worthwhile way of life. Which do we take, which is the matter of ultimate concern or care for us? Which? We cannot have both, and I think that distinction, between a philosophical and a political point of view, to some degree captures the differences set out when I asked at the beginning of the class who believes Socrates is innocent and should be acquitted and who believes he is guilty and should be condemned. And, in a sense, one could say maybe this is not Plato's last word; I mean, why does Socrates choose to stay and drink the hemlock? After all, if he is committed fundamentally to the principles of his own reason, why should he still care that much about the laws of the city, why not let Crito help him escape and go to Crete, where he can drink the good wine of Crete and enjoy his old age? And in fact, Plato wrote another dialogue, his largest dialogue, a book called The Laws, where you see a man simply designated as the Athenian stranger living in Crete and carrying on a conversation with representatives of that society, and although he is not identified as Socrates, it is sometimes thought that this is the kind of speech or discussion Socrates would have been having, had he escaped. But it gets back to the question: are the reasons Socrates gives Crito for refusing to escape, the reasons he puts in the mouth of the laws of the city of Athens, Socrates' true reasons? Does Socrates believe that speech that he constructs between himself and the laws, or is it simply a fiction that he creates for the sake of relieving his friend of the guilt he evidently feels for being unable to help Socrates? Crito is, of course, very concerned with what people will think of him if it becomes known that he has somehow not helped Socrates to escape. Is that speech with the laws really intended for the benefit of Crito, rather than an expression of Socrates' deepest opinions about the questions of obligation and obedience? Is he, in that speech, bestowing, as it were, a kind of justice on Crito, reconciling him to the laws of the city and giving him reasons, you might say rational considerations, for continued obedience to the law? In many ways, that would seem to make a certain sense of the apparent discrepancy between these two dialogues. It demonstrates, among other things, Socrates' sense of his superiority to the laws of Athens. In the first speech of the Apology, he defies the city to put him to death by expressing indifference to death, and then in the Crito, he very much expresses that indifference to death by refusing to allow Crito to help him escape. Socrates seems to remain, even until the end, very much a kind of law unto himself while at the same time, again, providing Crito and others like him an example of rational and dignified obedience to the law. When we look at the death of Socrates, do we think of it as a tragedy, as a moral tragedy, a just man sentenced to death by an unjust law? I don't think so. Far from it. Socrates' death at the age of 70 was intended by him as an act of philosophical martyrdom that would allow future philosophy to be favorably recognized as a source of courage and justice.
In one of his later letters, Plato refers to his depiction of Socrates as, he says, his attempt to render Socrates young and beautiful; that is, he consciously set out to beautify Socrates, presenting a man fearless before death, refusing to participate in any active injustice while dispensing wisdom and justice to those who will listen. We don't know the real Socrates; all we know of Socrates is what we read in Plato and Aristophanes and a small number of others who have sketched various different pictures of him. But Plato's Socrates is necessarily poles apart from Aristophanes' depiction of him as a sort of sophist who makes the weaker argument the stronger. Plato's dialogues, the Apology as well as the Republic and the Crito, are, in the broadest sense of the term, an attempt not only to answer the charge made by Aristophanes but also to defend the cause of philosophy as something of value and merit. Where does that leave us today? What are we to make of all this? We, who live in a very different kind of world from that of, you know, fourth-century Athens, what can we learn from the example of Socrates? Most of us, like most of you earlier, find ourselves instinctively taking the side of Socrates against the city of Athens. Those who might defend the city of Athens against Socrates, those who believe in the value of civic piety, are very few among us. Perhaps only those of you who might come from a small town in the south or from certain areas of Brooklyn would understand something about the supreme value of piety as a way of life. We, by and large, tend to accept the picture of Socrates as a victim of injustice. We overlook, we conveniently overlook, a number of facts about him: his hostility to democracy, we'll see that in the Republic but we've seen it already to some degree in the Apology; his claim that the lives of his fellow citizens are not worth living; and his claim that his way of life has been commanded by a god that no one else has ever heard or seen. None of these seem to make any difference to us, and yet I think they should. Given Socrates' claims, ask yourself what would a responsible body of citizens have done, how should they have acted? One answer might be to extend greater toleration to civil dissidents like Socrates, individuals of heterodox belief whose views may stimulate others to question and think for themselves, all to the good; Milton, John Locke, people like Voltaire argued something like this. But is that to do justice to Socrates? The one thing that Plato does not argue is that Socrates should simply be tolerated. To tolerate his teaching would seem to trivialize it in some sense, to render it harmless. The Athenians at least pay Socrates the tribute of taking him seriously, which is exactly why he is on trial. The Athenians refuse to tolerate Socrates because they know he is not harmless, that he poses a challenge, a fundamental challenge, to their way of life and all that they hold to be noble and worthwhile. Socrates is not harmless because of his own professed ability to attract followers, a few today, a few more tomorrow. Who knows? To tolerate Socrates would be to say to him that we care little for our way of life and that we are willing to let you challenge it and impugn it every day. Is that good, is that right? The trial of Socrates asks us to think about the limits of toleration: what views, if any, do we find simply intolerable?
Is a healthy society one that is literally open to every point of view? Freedom of speech is naturally a cherished good, but is it the supreme good? Should it trump all other goods, or does toleration reach a point when it ceases to be toleration and becomes in fact a kind of soft nihilism that can extend liberty to everything precisely because it takes nothing very seriously? And by nihilism, I mean the view that every preference, however squalid, base or sordid, must be regarded as the legitimate equal of every other. Is this really tolerance or is it rather a form of moral decay that has simply decided to abandon the search for truth and standards of judgment? There's a danger, I think, that endless tolerance leads to intellectual passivity and a kind of uncritical acceptance of all points of view. Well, so much for that. What I want to do, I see we're running out of time, is ask that you think about it, maybe hold that thought in your mind once in a while between now and Wednesday, and on Wednesday, we will begin reading what is arguably, some people believe, the most important book ever written, Plato's Republic. See you on Wednesday.
Introduction_to_Political_Philosophy_with_Steven_B_Smith
16_Constitutional_Government_Lockes_Second_Treatise_712.txt
Professor Steven Smith: Today we want to begin with Mr. Locke, Part II. And as I said at the end of class last time, I want to speak a little bit about Locke and, let's just call it, the spirit of capitalism. And then I want to move into the issue of government by consent, which, along with the idea of natural law, is perhaps Locke's most central, most significant contribution to political philosophy, the Doctrine of Consent. And there are various problems I want to examine with you today associated with consent and what it means to consent to government. But the first five chapters of the Second Treatise, if you take them as a unit, and I think they should be taken that way, tell us a story. Locke presents us, so to speak, with a kind of philosophical anthropology that takes us through the state of nature, the state of war, the creation of private property. And in the fifth chapter particularly, Locke takes us, as I mentioned last time, from, you might say, the original condition of nature, which forms a kind of primitive communism, to the creation of property through the labor of one's body and the work of one's hands. And by the end of the fifth chapter, we have the creation of really a kind of full-scale, sophisticated market economy replete with various inequalities, perhaps even some large-scale inequalities of wealth and property, all within the state of nature. How did this occur and, most importantly for Locke, what makes this legitimate? What legitimizes this transition, so to speak, from the original state of nature governed by nothing more than the law of nature to the emergence of property and, in a way, a kind of market economy? In many ways, what Locke is doing in the first five chapters of the Second Treatise is re-telling or, maybe better, re-writing the account of human beginnings that had originally belonged to scripture. He tells the story of how human beings, finding themselves in a condition of nature with no one, no authority, to adjudicate their disputes and governed only by a natural law, are nevertheless able to create and enjoy the use of property created and acquired through their labor and work. Man, he tells us in these opening chapters, is a property-acquiring animal, the acquisitive animal, even in the state of nature where there is, again, nothing but the natural law to govern human associations and relations with one another. But the problem with the state of nature for Locke, as to some degree it was for Hobbes as well, is its instability: with no civil authority to umpire disputes, especially disputes over property, the peaceful enjoyment and the further acquisition of property, the fruits of one's labor, are continually threatened by war and by conflict. How can we ever be secure in our person or property with no enforcement agency to resolve breaches of the peace, where everybody is, so to speak, again, the judge and jury and executioner of the natural law? The need for government arises out of the real need to resolve conflicts or disputes over property rights. In many respects, this sounds like a very familiar idea, that government exists for the sake of the protection of property rights. It's sort of a cardinal doctrine of what I suppose we would call today libertarianism, the philosophy of libertarianism, so important in a lot of American thought. Locke is, in many ways, the first writer of my familiarity who claims that--these are his words--"the great and chief end of men's uniting into commonwealths is the preservation of their property."
No one prior to Locke, at least to my knowledge, I think, had ever said in quite such a bold and straightforward way that the purpose of politics was the protection of property rights. And by property, Locke doesn't mean simply objects around us that we have turned into property; property is rooted, he tells us, first and foremost in our persons, in our bodies. We all begin life with a certain rudimentary property, if only in ourselves. Property, for him, implies more than simply real estate; it encompasses our lives, liberties, and possessions. These are all property in the original, and in many ways most revealing, sense of the term "property," that is to say, things proper to us. But Locke continually emphasizes to us the uncertainty of the state of nature because "life there," he says, "is full of fears and continual dangers" that lead us to civil association. But think, in a way, how different Locke's account of the transition from the state of nature to the civil state is from Hobbes'. In many ways, again, as I said, Locke tries to modify, domesticate, ameliorate Hobbes' harsh teachings. Hobbes had emphasized the absolute fearfulness of the state of nature. The state of nature was, for Hobbes, a kind of state of existential dread, absolute fearfulness. For Locke, however, it is a condition continually beset by unease and anxiety; to use the word he often uses, inconveniences. The state of nature is one that consists of continual inconveniences. It is our unease, our restlessness, that is not only a spur to our labor but also the cause of the insecurities that we have in the state of nature. What is it about Locke, what is it about his account? I don't mean what is it about him in some psychological, personal, or biographical sense, but what is it in his writing that leads him to emphasize the restless, uneasy, and you might say perpetually anxious character of human beings in the state of nature? Do we ever hear Plato or Aristotle discussing the fearful or anxious or restless character of human psychology? I think not. Was this simply a function of Locke's nervous disposition? Was it the fact that he was simply prone to reticence and a kind of fearfulness, in the same way that Hobbes himself was? Or does Locke's emphasis on the uneasiness of our condition in the natural state really represent the qualities of a new class, the new commercial classes, as it were, seeking to establish their legitimacy? Locke's Second Treatise in many respects is a work of middle-class or, as the Marxists would like to say, perhaps bourgeois ascendancy. When Locke writes, as he does, that the world is intended for the use of the industrious and the rational, who was he talking about there? Who are the industrious and the rational? He is speaking about a new middle-class ethos whose title to rule rests not on heredity or on tradition; he is not referring to a customary ruling class, a class whose title to rule comes from its claims to nobility. He is referring to people whose title to rule, or potential title to rule, rests on their capacities for hard work, thrift, and opportunity. As a former student of mine who took this class once said, Locke's Second Treatise could well be called the Capitalist Manifesto, or maybe, one could put it, the Anti-Communist Manifesto. But is Lockeanism simply Machiavelli with a human face? Put it that way.
Isn't the rule of The Prince in Machiavelli the rule of a new leader, a new authority in some sense, who operates outside the parameters of traditional authority? Isn't Lockeanism, like Machiavellianism in some way, the ethic of the self-made man with all of the insecurities, anxieties, and restlessness that being self-made represents? Does Lockeanism represent in some ways the tranquilization of Machiavelli, turning Machiavelli's fierce warlike ethic, the ethic of conquest and domination, into the ethic of work and, as it were, the conquest and domination of nature through labor and our hard work? Is this, again, simply Machiavellianism with a human face? But in any case, what I want to suggest is that Locke's political philosophy gives expression to what the great German sociologist Max Weber described. You know him, yes, Weber? You read him in Intro Sociology, no? Okay, well, you will. Weber's famous book, called The Protestant Ethic and the Spirit of Capitalism, 1904, is a great work, a classic work. In that work, Weber argued that the capitalist ethic turned the limitless accumulation of capital into a high duty, a moral duty, a moral calling, a religious calling, and that this was, in Weber's terms, the outgrowth of the Puritan and Calvinist movements of the sixteenth and seventeenth centuries. For Weber, it was in the countries of Northern Europe, where the Protestant Reformation had taken root, that the roots of this capitalistic ethos first developed and that a wholly new moral attitude was adopted toward such things as property, property acquisition, and moneymaking. Previously, these things had been deemed morally dubious, shunted aside as something shameful. You can certainly see this in the classical writings of political philosophy that we did. For these early moderns, capital accumulation became a kind of high calling and moral duty. God gave the world to the rational and the industrious; not, he says, to the quarrelsome and contentious; not, that is, to those prideful aristocrats who seek to struggle for domination and power over one another. What Locke brings into being is, again, a wholly new and revolutionary moral attitude towards property and property acquisition that finds its expression, great expression, a century later with Adam Smith. And, of course, from Adam Smith we have the whole world of modern economics. So, how many of you are potential economics majors in here? I bet more than one. Without John Locke, there would be no modern discipline called economics, because he was the one, again, a century before Smith and the rise of the school of political economy, who made the first and decisive move, which was, in many ways, to make respectable, and even more than respectable, to turn into a high moral calling and dignity, the acquisition of property, and to turn government, to turn politics, into a tool for the protection of property and property rights. That is the significance, in many ways, of what Locke has done. I want to talk, probably not today but next Monday, about some of the pros and cons, in many ways, of this immense moral transformation regarding property and economics that Locke has helped to bring into being.
But for the rest of today, what I want to focus on is Locke's idea of consent, the idea that the origin of all government, or at least all legitimate government, is said to derive from consent, the consent of the governed, an idea that was implicit in some respects in Hobbes' theory of the covenant that creates the sovereign, and to which Locke gives, in many ways, much greater pride of place. In chapter 8 of the Second Treatise, Locke provides us with a kind of hypothetical reconstruction of the origin of society, of all societies. He writes in section 95, "The only way whereby anyone divests himself of his natural liberty and puts on the bonds of civil society, is by agreeing with all others to join and unite in a community for their comfortable, safe, and peaceful living." Locke tells us something there about the legitimate ends of society, the ends that civil society serves: comfortable, safe, and peaceful living. And he goes on to affirm that whenever a sufficient number of people have consented to make a single community, and I quote him again, "they are thereby presently incorporated and make one body politic," they make one body politic, "wherein the majority have a right to act and conclude for the rest." That short statement, section 95 of chapter 8, seems to make maybe the first and most powerful case for democracy. On the basis of that statement, a famous Yale professor of at least a couple of generations ago wrote an extremely important book that made John Locke into a majority-rule democrat. He said in that book that Locke's philosophy provides the faith of the majority-rule democrat, largely focusing on sections 95 and 96 as sort of the key to Locke's political teaching in the Second Treatise. Does anybody know the name of the man who wrote that book, by any chance? Famous Yale political scientist, back a while ago. Nobody remembers Willmoore Kendall's book on Locke? You know it, yes, you were shaking your head. No, you don't? Okay, whatever. It's not important. Not important. I just mention it in passing. But consider the following sentence, again, that seems to add to this claim: "For when any number of men by the consent of every individual make a community, they have thereby made that community one body with a power to act as one body which is only by the will and determination of the majority." What are we to make of this assertion, and in many ways continued assertion, that in any community we are ruled by the majority? To be sure, that idea would have come as an immense surprise, no doubt, to the King of England, to learn that his rule was justified by the consent of the governed. Or if you had done something like cross the English Channel and go to the France of Louis XIV of this period, Louis XIV, who famously said L'État c'est moi, "I am the state," no doubt would have been very surprised and probably found laughable the idea that his legitimacy came from the consent of his subjects. Who had ever thought such a thing, that government derives from the consent of the majority? Is Locke, in saying that, denying the legitimacy of all government, all governments, that do not derive from the consent of the majority? Is he, on the basis of this, truly a kind of majority-rule democrat? Does he undercut, for example, something like the argument of Aristotle, who had seen any number of forms of government as equally legitimate in many ways, so long as they are moderate and ruled by law?
Or is Locke saying, again, that there is only one form of government, one, again, legitimate or just form of government, government by the majority? That's what he appears here to be saying. I stress the word "appears"; Locke is a slippery fish. He's a slippery writer. He has a tendency to take back with one hand what he gives with the other, doesn't he? The agreement to make one community, as he calls it, however, is not exactly the same thing as establishing a form of government. In many ways, choosing to have a government, which is what Locke is talking about here in these relevant sections--95 and 96 and so on--choosing to have a government, to be one people, so to speak, is in many ways an act prior to electing any particular form of government to rule you. The Second Treatise, in some way, specifies only that governments derive their just power from the consent of the governed. It does not say very much explicitly about what form of government people might wish to consent to. In many ways, the Second Treatise, one wants to say, is even rather neutral with respect to forms of government. The only form of government that seems to be absolutely ruled out on Locke's account is some kind of absolute monarchy. We cannot cede our rights entirely to another individual. But he seems to be relatively open to whatever it is people may wish to consent to. The act of consent alone does not create a government. It is merely an act to form a society. In many ways, he accepts Pope's famous dictum, Alexander Pope's famous dictum: "...for forms of government let fools contest, whate'er is best administered is best." In other words, the best government is whichever one is best administered and protects your rights to property, and what form it is, whether it's monarchic, aristocratic, republican or whatever, is not so important. What is important, and for Locke about the only thing that is important, is that that form of government receive the consent of the governed. And, of course, people don't necessarily have to consent to democracy. If Locke is democratic in that way, it is only because he's democratic in the sense that government derives its authority from consent. It does not necessarily have to be democratic in form in that respect. But it is this idea of consent--and you will no doubt talk about this in your sections--it is this idea of government as being by consent that has so much insinuated itself--I'm not sure that's the right word--but has so much formed in many ways the cornerstone of the American regime and American political thought, in many ways even more, I would suggest, than his doctrine of property and property rights. Locke's Doctrine of Consent is what captured the imagination of the American founders. When Jefferson wrote about the ends of government, he said the ends of government are to protect life, liberty, and the pursuit of happiness. He seems to have modified Locke's statement about life, liberty, and estate. Why did he do that? We could talk about that, and of course, what is meant by the pursuit of happiness certainly is intended to entail, among other things, the acquisition of property. But Jefferson in some ways sort of elevates Locke's language, Lockean language; it is not simply focused on property but on the pursuit of happiness, however construed, consistent with the rights of others. But it is Locke's language of consent, that the just powers of governments derive from consent, that seems to have most inspired Jefferson and the founders.
And through that doctrine it, of course, had a huge effect on America's great second founder, Abraham Lincoln. Consider the following passage from Lincoln. This is Lincoln in 1854 in his first major speech, his first most important speech, sometimes called the Peoria Speech, where he was already debating Stephen A. Douglas. It was for the Senate campaign in Illinois, appropriate to our time of the year, where they were then arguing over slavery, as they would argue again in the presidential campaign, and here is what Lincoln writes: "When the white man governs himself, that is self-government; but when he governs himself, and also governs another man, that is more than self-government, that is despotism," Lincoln says. "My ancient faith," no doubt thinking about the Declaration and Jefferson's ideals, "My ancient faith teaches me that there can be no moral right with one man making a slave of another." "What I do say," Lincoln continues here, "is that no man is good enough to govern another without that other's consent." "This," he concludes, "is the leading principle, the sheet anchor of American republicanism." So there is Abraham Lincoln referring to the Doctrine of Consent, by which he says that no man is good enough to govern another without that other's consent, calling this the leading principle or the sheet anchor of American republicanism. That statement, as I was suggesting a moment ago, is part of his debate with Douglas over the issue of slavery, and it, in many ways, cuts to the core of the meaning of consent. Douglas also, in some respects, tried to derive his views from an idea of consent. What Douglas said was that, regarding slavery, he didn't care; it was a matter of indifference to him whether the people of a particular state or territory wanted slavery or didn't want it. Whatever they wanted, that is to say whatever the majority consented to, was all right with him. He might prefer it not to be, but again, it was what people consented to, what the majority wanted, that would decide the matter. Lincoln, however, had said that the Doctrine of Consent is not simply a kind of blank check, that the Doctrine of Consent still implied a set of moral limits or restraints on what a people might consent to. Consent was inconsistent with slavery, he said, because, again, no one can rule another without that other person's consent. And in many ways that crucial debate, so fundamental to American history and politics, grows out of a kind of internal problem within Locke's Doctrine of Consent, namely, what form of government does it make sense for a majority of people to consent to? Does government by consent, in other words, mean government by whatever the majority wants, which could be a kind of tyranny of the majority, whatever they want? Or does government by consent entail, again, certain limits and restraints on what majorities can do? What guarantees does Locke provide, you might ask, that government by consent will be informed consent or rational consent? Can people simply consent to anything, to be ruled by any means? This is obviously not an idle or a purely theoretical question, since popular majorities in the world today, we know, can choose on the basis of whim, will, or some other kind of arbitrary passion. Unless there is some set of moral restraints on what individuals or majorities can consent to, what is to prevent a majority from acting just as despotically or just as arbitrarily as a king or any absolute power?
That was the question that Lincoln was raising in his argument against Douglas and his claims about consent. But this question of restraints or limits on what a people can consent to leads to another question about the Doctrine of Consent. How is consent conferred? We are citizens of the oldest democracy in the world, most of us, I guess; maybe not everybody, but most people in this room are citizens of the oldest democracy in the world. Did anyone ask you for your consent, or me, considerably older? Did anyone ask for my consent? The idea of giving consent to a form of government implies something active, an emphatic voice, but has anyone since the first generation of founders who ratified the Constitution ever been asked or required to give their consent to it? You might ask what is Locke's answer to this problem, and it is a problem that he is aware of and struggles with in that important chapter. His answer turns out to be something quite different from our views about citizenship and who is a citizen and how the consent of the citizen is conferred on government. In section 118 he writes, "A child is born a subject of no country or government." In other words, he's saying citizenship is not conferred by birth; just being born in a place does not make you a citizen of it, as the doctrine that we hold would have it. "Every person," Locke continues, "is born free and equal in many ways in a kind of state of nature under the authority only of their parents. What government that person may choose to obey is not a matter of birth, but of choice." And again, Locke seems to be making some kind of active principle of choice or decision a principle of citizenship and the conferring of consent. And it is only, he says, when a child reaches what he calls the age of discretion--18 or 21 or something like that--that one is obligated to choose, to give some sign or mark of agreement to accept the authority of government. Locke is not altogether clear about how such a sign or a mark is to be given. One suspects from what he is saying that he may be referring to some kind of oath or some kind of pledge, or some kind of civil ceremony where one vows or pledges with one's word the acceptance of the form of the state. "Nothing can make any man so," Locke writes, that is to say an actual citizen of a state. "Nothing can make any man so, but his actually entering into it by positive agreement, an express promise and compact," he says at section 122. By express promise and compact, "such an express promise or agreement leaves one," he says, "perpetually and indispensably obliged to be and remain unalterably a subject of it," that is, of the state. So once you give your word or agreement, Locke says, you are perpetually and indispensably obligated to that state. That's how seriously Locke takes this idea of consent. It's something that can only be entered into at the age of discretion. It must be given consciously, fully, rationally, presumably in some kind of ceremony, and once given, your consent to the form of government remains, as he says, perpetual. You are bound unalterably, as he puts it. There's no such thing as taking it back. It shows you how much importance Locke puts on the word, the oath, or some kind of civil agreement. One's word is one's bond. To give voice or consent to government is not an act to be entered into lightly, he says or implies, but is a kind of lifetime commitment, and it also shows us how different Locke's view of the citizen is from ours.
In other words, it would seem, for Locke, the only people who are full citizens in our country would be people who have given their active consent, and the only people who have given their active consent are people who have undergone what we interestingly call a naturalization process. Is anyone here a naturalized citizen, as we call it? Has anyone ever been to a naturalization process? No, nobody? It's administered by a judge, and you swear allegiance to the new country. You presumably shed your obligations to your previous country. You swear your allegiance to this one. That seems to be the kind of thing Locke appears to be talking about, and it's interesting that, again, the only people in our society, in our country, who would be full citizens would be naturalized citizens. Again, birth alone does not confer on you citizenship of any particular country. But what does that mean for the rest of us, those who have not given their active consent? Locke is aware that not everyone gives their active consent. That's why he introduces another idea for how consent may be given. He talks about what he calls tacit consent. There are those, maybe, who have not sworn allegiance or given a civil oath but who nevertheless can be said to have given their tacit consent to the form of government and its laws. But how do we give tacit consent? Tacit consent is a strange term, because consent implies something active and open, whereas tacit, think of Locke's own taciturnity, implies something closed or concealed. How is tacit consent given? That's a problem you can see Locke working on. To some degree, he says, anyone who simply enjoys the protection of the law, the security of property and person under the law, can be said to have given their tacit consent. They give it, so to speak, ex silentio. Even their silence confers consent. But how do we really know? How do we know, you could say, that silence confers tacit consent and that silence is not simply silence? An example I think of: if you go to a wedding ceremony, or I guess some wedding ceremonies, the justice or the minister, whoever, says--what is the phrase--"If anybody has any question about this ceremony, speak now or forever hold your peace," and of course everybody--except in the movies, of course--everybody's always silent. Nobody says anything, so their consent is given; from their silence in response to that question, their tacit consent is given. That would be one way, but again, how do we know when silence confers tacit consent or when silence is simply that, silence? It's an issue that Locke struggles with and, to be sure, never fully or, I think, satisfactorily resolves. Maybe you will resolve it. Maybe you will resolve it in your next paper, if you have the opportunity to write about consent and the difference between the express and tacit forms of consent for citizenship. There is also the question--which Locke alludes to but does not fully or does not quite answer--whether there is any difference in privileges, in civil privileges, between citizens who have given their express consent and those who have only been said to tacitly consent to the form of government. Does he suggest that one class of citizens has greater rights or greater responsibilities than the other? You might look into that question too and see if you think Locke suggests any differences on that.
To go back and just kind of begin to wrap this up for the day, Locke does not appear to endorse any particular form of government in the Second Treatise. The task of forming the government will fall to the decision of the majority, but again, what form the majority will decide upon remains, to some degree, an open question. What gives Locke or Lockeanism its distinctive tone, its distinctive voice, is the claim that whatever kind of government a majority decides upon, it must be one that limits the power of the sovereign. In this respect, I think Locke is far closer to Lincoln than he was to Stephen A. Douglas: consent does not simply mean consent to arbitrary rule; it does not mean consent for the power of the sovereign to do anything. Locke's theory of constitutional government is a theory of restrained government, of constitutional restraints, of rule by law. Locke gives the importance of law and constitutional restraint--what we would today call, I suppose, limited government--far greater, far more powerful expression than any of his predecessors; certainly greater than Hobbes, who had attributed absolute power to government, or even than Aristotle who, in many ways, shares some resemblances with Locke, but who had severe doubts about the rule of law. Locke is absolutely confident that limited government, restraints on power--whether that power be held by the one, the few or the many--is the only kind of government that can be trusted to protect rights. And in one of the few jokes that appears in the Second Treatise--you might have missed it, because Locke is a subtle jokester; he's not like Machiavelli or Plato. Locke is a very understated jokester; he was an Englishman after all. He writes in section 93, referring to Hobbes, and you'll also see his wonderful animal references: "If men quitting the state of nature entered into society, they agreed that all of them but one should be under the restraint of laws," thinking about Hobbes' Leviathan, "but that he should still retain all the liberty of the state of nature, increased with power and made licentious by impunity." He goes on to say, "This is to think that men are so foolish that they take care to avoid what mischiefs may be done to them by polecats and foxes but are content, nay think it safety, to be devoured by lions," the lion being the Leviathan sovereign; whereas in the Lockean state of nature, human beings are like polecats and foxes. They're noxious creatures, he says, but they're not truly dangerous to you, and when one leaves the state of nature to enter civil society, one is certainly not doing so to empower a sovereign with lion-like powers over you, as he says. Who would do this? It's better to have some kind of theory of restrained government, a limited government, to do this for you. I'm going to end on this note. What I want to do on Monday, when I wrap up Locke, is to continue with his doctrine of limited government, because it turns out there is a very important exception to it. There is a kind of escape clause, and I would encourage you as you read to pay particular attention to chapter 14 of the Second Treatise, his chapter on what he calls prerogative power, a doctrine that has very, very important and grave implications for our politics today. It's a very important chapter, and I want us to continue with this and then talk a little bit about the pros and cons of Lockean political philosophy.
18. Democracy and Participation: Rousseau's Discourse
Professor Steven Smith: I hope that everybody followed Rousseau's advice and yesterday exercised your rights as citizens of a free state. We hope so. I begin today with an apology, and that is for the particular edition that you're using for this section of the class on Rousseau. This is the only edition that I've assigned in the course that I don't like. Why did I assign it? Because I want us to read the Second Discourse, the Discourse on Inequality, and the Social Contract, and this is the only edition that I could find where they are both in the same volume and I don't have to assign two separate books. So, in order to keep your costs down, I bit the bullet and assigned a translation in an edition I don't particularly care for. A far superior edition is found in this, one of the two volumes of the Cambridge blue book series, as it's called, edited and translated by Gourevitch. If anybody wants to do more advanced work in Rousseau, you will no doubt get this edition, with better translations, better notes and so on, of the Second Discourse and The Social Contract. And for anybody who's interested, I've decided, because Rousseau has become so important to me, that next year I will be offering an undergraduate seminar entirely devoted to Rousseau. He's one of the handful of writers, of political philosophers, to whom one could, in all justice, devote an entire semester, and that's what I want to do next year. So if any of you should get the bug, the Rousseau bug, next year we'll do Rousseau in many more texts, in detail. With that having been said, I'm going to talk today about a remarkable, remarkable human being and writer. It's a very common way of entering the thought of Rousseau to see him as a critic of liberalism, of the kind of property-owning, rights-based society given expression by John Locke, and I will talk about that a little bit later. But to see Rousseau only as a critic of Lockean liberalism would be, I think, very shortsighted and very unfair. Rousseau was a product not of liberal society but rather of the ancien régime, the old regime in France. Rousseau was born in 1712, three years before the death of the famous Sun King, Louis XIV, a man who symbolized the age of absolutism, and he died in 1778, approximately a decade before the outbreak of the French Revolution. His life, in other words, was lived entirely within the waning decades, the waning years, of the age of absolutism in France and in continental Europe. Rousseau was deeply aware that he lived in an age of transition, but what precisely would come after was by no means clear to him. He wrote, as you will see, with the passion and the intensity of someone who fully expected to be instrumental in the coming of a new historical and political epoch, and indeed he was. Rousseau was Swiss, not French. He signed many of his most important works simply citoyen de Genève, a citizen of Geneva, after the city where he was born. He was the son of an artisan who abandoned his family after a falling out with the local authorities, and the young Rousseau was apprenticed to an engraver, but he fled Geneva for good at the age of 16. For the following 16 years, Rousseau lived a kind of varied, vagabond life, doing many different things, working as a music instructor and a transcriber. He was the secretary to the French Ambassador in the city of Venice, and he was also the lover of a wealthy woman many years his senior.
After moving to Paris in 1744, Rousseau spent several years eking out a living on the margins of the Parisian literary scene until 1750, when he published his first major essay, although quite brief, a work called The Discourse on the Arts and the Sciences, which catapulted him to literary fame. That work made his name, and it came to be called the First Discourse. It was followed five years later, in 1755, by the work we will be reading starting today, The Discourse on the Origins of Inequality, often simply called the Second Discourse, and that work was followed in 1762 by the Social Contract, the same year that Rousseau published his massive work on education called Émile, or On Education--both in 1762. During this period, Rousseau fathered five children. He abandoned all of them to an orphanage. He did so with a common-law wife with whom he lived, and during this time the writings I've mentioned are only a small, although very important, portion of the writings he produced. He was also the author of a very large novel, The New Heloise, La Nouvelle Héloïse, which was a bestseller in his time, a kind of philosophical novel that explored many of his ideas. He was the composer of an opera, Le Devin du Village, that was performed at the court of Louis XV. He also wrote several very important and interesting volumes of autobiography, the best known of which is simply called the Confessions, after Saint Augustine's book of the same name, and he also wrote another volume of autobiography in dialog form called Rousseau Juge de Jean Jacques, in which he divides himself up into two different people, Rousseau and Jean Jacques, in a kind of internal interrogation of himself. Rousseau wrote in many and varied genres, and his work spans the entire gamut of philosophical, literary and political themes. He was also the writer of different constitutional projects. He was consulted by heads of state in his period and wrote constitutions for Poland and for the small island country of Corsica, which he said in the Social Contract was the only place in Europe where one might expect great things, and of course he was right, as anybody knows; a generation or two later the famous Corsican, yes, who put an end…Who am I talking about? Napoleon, of course. You might say he was right. He helped to substitute the general's will for the general will, but that was Rousseau. People have been very baffled by exactly the nature of Rousseau's contributions. What did he believe? What did he stand for? What do his writings represent? Was he a revolutionary whose work helped to inspire the radical phases of the French Revolution? Remember, for example, the famous opening sentence of the Social Contract, which you've probably all heard, "man is born free but is everywhere in chains," his appeal to the severe political ethics of ancient Sparta and Rome, as well as his belief that the people, in their collective capacity, are the only legitimate source of sovereignty. All of these seemed to pave the way for the revolutionary politics of the late eighteenth century and up into our own time. So is Rousseau a kind of incendiary and revolutionary, or did his writings seek to release us altogether from the bonds of society, as he appears to do in the Second Discourse, in the Discourse on Inequality?
In this work, Rousseau seems to lay the basis for the kind of romantic individualism that would be associated with people like Wordsworth in England or Henry David Thoreau in America. Rousseau's direct appeals to Nature, as well as his celebration of the simplicity of peasant life and rural life, seemed to open the door later on to writers like Tolstoy, as well as to various kinds of social experiments in rural communal utopianism such as, for example, the Israeli kibbutz movement, which is in its own way a direct descendant of Rousseau. So my suggestion is that Rousseau's writings are varied and his influence has been manifold, to say the least. He both helps to bring to fruition and completion the political and intellectual movement that we know as the Enlightenment--he brings this to its highest phase of perfection, in many ways--and at the same time he was a severe critic of the Enlightenment. He was a close friend and associate of men like Diderot, who was the general editor of the Encyclopedia, the great French contribution to the age of the Enlightenment, and yet he excoriated the progress of the arts and the sciences and worried about their effect on the moral life of communities. He was a writer who wore different hats. He defended what he called the savage, the sauvage, against the civilized man. He took the side of the poor and the dispossessed against the elites, and he adopted the posture of the loyal son and citizen of Geneva against the sophisticated Parisian intellectuals of his time. So who was Rousseau and what did he stand for? That's what I want us to begin to try to find out a little about today. The Second Discourse, the Discourse on Inequality, is in the eyes of many readers Rousseau's greatest work. I'm not sure if that's true, but many people believe it is. It is what writers in the eighteenth century called a conjectural history. It is, that is to say, a kind of philosophical history, a philosophical reconstruction of history, but not of what actually happened in the past. It's not a history of facts and dates, but it is a history, Rousseau believes, of what had to have happened for human beings to evolve to their current condition. Rousseau begins the work by comparing the effects of history on us to the statue of Glaucus, which he says the winds and storms had so disfigured that it scarcely looked like a human being at all. This is what history and time have done to us. They have so affected and transformed human nature that if we want to understand what human nature really is, he argues, it is necessary to reconstruct it through a kind of thought experiment. Rousseau compares the Second--Oh, God. Oh, dear. All right. Bad to laugh at your own jokes. He compares the Second Discourse to an experiment like those undertaken by physicists and cosmologists in his own day, who speculate about the origins of the universe in the same way that he is speculating about the original condition of human nature. That is to say, there is no empirical or physical evidence to draw on to understand how the world was actually framed. We can only make, he says, intelligent guesses, certain inferences or conjectures based on the evidence that we find around us. And so Rousseau remarks, in one of the most arresting passages from his book--and Rousseau was a man known for writing arresting, paradoxical and ingenious sentences--"let us therefore begin by putting aside all the facts, let us put all the facts aside, for they have no bearing on the case.
The investigation that may be undertaken concerning this subject should not be taken for historical truths," he says, "but only for hypothetical and conditional reasonings." In other words, what he's saying is that the history he intends to unfold is an experiment much like, again, that undertaken by geologists who try to infer the development of plant or animal life from the existence of certain fossil remains or skeletal remains. And yet, at the same time, while Rousseau speaks of his work as tentative, experimental, conjectural--he has only hazarded some guesses, he writes--you cannot help but be struck by the certain tone of confidence with which he presents his findings. In particular, he discusses and rejects quite emphatically the investigations of his predecessors both ancient and modern. "The philosophers," he writes, "who have examined the foundations of society have all felt the necessity of returning to the state of nature but none of them has reached it." Rousseau believes that he alone has finally, as it were, struck gold. "Oh, man," he exclaims, "whatever country you may be from, whatever your opinions may be, listen. Here is your history as I have thought to read it, not in the books of your fellow men, who are liars, but in nature, who never lies." That's a remarkable sentence. For the first time, Rousseau says, human nature will be revealed and the history of civil society explained to us. Listen. Here is your history, as I have read it, not through other books but through nature, he says, which seems to speak directly to me, or into which I have an insight. So what is the state of nature, a term that we've been looking at in Hobbes and Locke? What is it that Rousseau thinks he has found that eluded his predecessors? In many ways, as I've already suggested, Rousseau follows in the footsteps of his great predecessors, particularly Hobbes and Locke, in attempting to understand the original condition by referring to this hypothetical or conjectural state of nature. In many ways, he praises and follows Hobbes and Locke in doing this but suggests that they never really took the problem of nature seriously enough. What does it mean, Rousseau seems to ask us, to take nature--human nature, the state of nature--seriously? To understand human nature, what it originally is, is to conduct a sort of thought experiment where we peel away, almost like the layers of an onion, everything that we have acquired through the influence of time, of history, of custom and tradition, in order to discover what is naturally there. So when Hobbes attributes to natural man certain warlike propensities, Rousseau figures that this cannot be right. War and the passions that give rise to war can only come into being once we are in society. The state of war is really simply the state of society. This cannot be true of natural man, because the natural condition had no social relationships of this sort, and you might say ditto for John Locke. When Locke attributes to us in the state of nature certain qualities of rationality, of industry, of acquisitiveness, these too, Rousseau complains, are qualities that we can only acquire in the life of society. Property entails social relations between persons, relations of justice and injustice, and man in the state of nature is not a social animal.
So it is clear for Rousseau that human nature is something infinitely more remote and strange than any of his predecessors had ever imagined. What was the condition of natural man? Rousseau captivated readers, in his time and since, by showing that the original condition of human nature was far more like that of an animal than anything identifiably or recognizably human. Rousseau takes great delight in animalizing human nature, animalizing us. When Aristotle said that man is the rational being because we possess speech or logos, Rousseau says "wrong again." Language is dependent upon society and could only have developed over literally thousands of generations; it cannot be a property of natural man. Human nature is little different from animal nature, and, in many ways, Rousseau delights--you can see this in his footnotes in particular--in investigating stories about orangutans and other species that he believes, in many ways, are our distant ancestors. You might say that, a century before Darwin, Rousseau could just as easily have entitled his Second Discourse On the Origin of Species. In many ways, the whole science of evolutionary biology is already implicit here. And yet for all of our common features with other species, Rousseau specifies two qualities that set us apart. The first is the quality of freedom, or what he calls free agency, although he understands this in a very specific way. Let me read a relevant or crucial passage. "In any animal," Rousseau writes, "I see nothing but an ingenious machine to which nature has given senses in order for it to renew its strength and to protect itself to a certain point from all that tends to disturb it. I am aware of precisely the same thing in the human machine." In other words, he says animals are simply little machines that operate by mechanical or physical impulses and needs and desires, and the same is true, he says, of the human machine, with this difference, "that nature alone does everything in the operations of an animal whereas man contributes, as a free agent, to his own operations." What does he mean there in saying that man is a free agent? This idea of free agency in many ways sounds similar, and indeed it is similar, to Hobbes and Locke, both of whom said that freedom of the will, some kind of freedom, is a characteristic of natural, pre-social man. Rousseau also believes that, but in many ways he adds something else to it. He connects freedom, in this same passage, to what he calls the phenomenon or the quality, the faculty, of perfectibility, perfectibilité. What does he mean in connecting freedom with what he calls perfectibility? Perfectibility, for Rousseau, suggests an openness, a virtually unlimited openness, to change. We are the species who not only have the freedom to do this or that, but we are the species who have the freedom, as it were, to become this or that. And it is our very openness to change that accounts for our mutability over time. As a species, in other words, we are, you might say, uniquely undetermined, meaning that our nature is not confined in advance to what it may become; rather, our nature, for Rousseau, is uniquely suited to alter and transform itself as circumstances change and as we adapt to new and unforeseen situations.
Perfectibility for Rousseau is not so much a feature of the individual as it is of the species. And again, whereas Hobbes or Locke assumed that human nature itself remained more or less constant in the transition from what they called the state of nature to the civil state, Rousseau believed that human nature has undergone manifold revolutions, as he called them, over the course of time. What we are at any one phase of human history or human evolution will be very, very different from what we are at any other particular phase. And it is this, what he calls a distinctive and unlimited faculty, that he says is also the source of all of our misfortunes. So when he says that we are characterized by freedom and associates freedom with perfectibility, he doesn't necessarily mean by "perfectibility" that which perfects us. He also says it is that which is at the source of our miseries and our discontents. If you wanted to give this book another title, I've already suggested one for it, The Origin of Species. It could just as well have been called, more than a century before Freud, Civilization and its Discontents, which is in many ways Freud's attempt to rewrite Rousseau's account of the evolution of the human species. But Rousseau notes, in this same part, that freedom or perfectibility is not our sole natural characteristic, although it is responsible, in some way, for almost everything that we have become. Everything that we have become is due to this openness to change. In addition to perfectibility and freedom is the quality that Rousseau calls pitié, or pity, compassion, and here is, in a sense, Rousseau at his most characteristic. You could say, here is Rousseau, the founder of Romanticism. Man is not the rational animal, the thinking being, the being with logos; we are the sensitive creature. Rousseau finds all kinds of evidence for assuming that compassion is part of our original nature. He notes, in other species, a reluctance to witness the pain or suffering of another of its own kind, how an animal will not wish to walk near a dead member of its own species. That seems to indicate to Rousseau, even in the other species, a kind of natural core of compassion or pity. The fact that we cry at the misfortunes of others who have nothing to do with us is evidence of our original sensitivity. Do we not enjoy crying in movies? Has someone in here ever cried in a movie? Yeah, I thought we all have. Even at people or objects that don't exist. Did we not feel pity for King Kong when we saw that movie? Did we not feel pity for a fictional creature that could not exist but whose fate somehow affected us in some way? And Rousseau completely understands this. In giving man tears, Rousseau writes, nature bears witness that she gave the human race the softest of hearts. Man is a sensitive creature, so much so that Rousseau finds evidence in this for what he believes is our natural goodness. The natural goodness of man in the state of nature is to some degree borne out by this quality of pity or compassion that we even share with other species. Why does Rousseau emphasize this quality? Because it is deeply important to him. You might say that long before Dr. Phil and thousands of other self-help gurus and self-help manuals, Rousseau taught us to get in touch with our feelings. While natural man may be compassionate and kind, however, that sentiment, he tells us, is easily overpowered by more powerful passions once we enter society, once we become civilized or socialized.
We cease, once we are in society, to care about others, and we become calculating and mercenary in our motives. Selfishness and egoism are in fact reinforced, for him, by the development of reason. Reason, he writes, is what engenders egoism, and reflection strengthens it. The development of rationality, he thinks, simply hastens our corruption by assisting in the development of different vices, and the task of the Second Discourse, at least its rhetorical task, is in many ways to recover our natural selves--compassionate, gentle, kind--from the artificial, corrupt and calculating selves we have all become in civil society. And who can read that in Rousseau without realizing that there is a significant germ of truth in what he says? Did Rousseau believe it possible, then, or desirable to return to the state of nature, to return to some kind of prelapsarian condition before the beginnings of civil society? He is frequently read as saying this. When Voltaire read the Second Discourse, he said, "never has so much intelligence been expended in the attempt to turn us back into brutes," and that is clever, but it's not really right. Voltaire surely knew that 150 years before Rousseau there was a French writer by the name of Montaigne, Michel de Montaigne, who had written an important essay called Of Cannibals, in which he described Indian tribes off the coast of Brazil whom he praised against the true savagery and barbarism of their European conquerors. When Montaigne calls that famous essay Of Cannibals, it is an open question who the cannibals are. Are they the natives of the Brazilian coast or are they, again, the European conquerors? And Montaigne, like Rousseau but a century or more before, praised, in many ways, the qualities and the capacities of these sauvages that he discovered and contrasted them to the bloodthirsty cruelty of the Europeans of his own day. Rousseau was deeply influenced by this particular essay; it's a short essay, and I would suggest at some point, when you have a chance, you read it. But in any case, Rousseau makes it plain that a return to the state of nature or some kind of pre-social or pre-civil state is no longer an option for civilized beings. In one of the footnotes--and I encourage you to read the footnotes, very important footnotes in his book--Rousseau writes, "What then? Must we destroy societies, annihilate thine and mine and return to live in the forests with the bears? A conclusion," he writes, "in the style of my adversaries, which I prefer to anticipate rather than to leave to them the shame of drawing." In other words, he says, no, we can't do that. A return to the state of nature is impossible for us, for the same reason it would be impossible to return a domestic animal back to the wild. They and we have simply lost our instinct for self-preservation. It has been dulled by continual association with and dependence on others. We would not last a single day. So if a return to nature is impossible, the only alternative in some way is to remain in society. But before we can learn how to live in society, Rousseau wants to tell the story of how it is man became civilized, so to speak, how the transition from nature to culture, or from nature to society, in fact occurred. In one sense, Rousseau's account of this story can be given in a single word: property.
The first sentence of Part II of the Discourse reads as follows: "The first person who, having enclosed a piece of land, took it into his head to say this is mine and found people simple enough to believe him was the true founder of civil society." Locke would certainly agree, in some respects, but Rousseau continues as follows: "What crimes, wars, murders, what miseries and horrors would the human race have been spared had someone pulled up the stakes or filled in the ditch and cried out to his fellow men: do not listen to this impostor. You are lost if you forget that the fruits of the earth belong to all and the earth to no one." There you see, in a germ, in many ways, his repudiation of Locke. Rousseau was not a communist, although this sounds very much, in some respects, like Karl Marx. He was not a communist. He did not feel it was either possible or desirable to do away with private property or to collectivize property in the manner of a Plato or a Marx, but there is no one of whom I am aware who is a more acute observer of the ills of class and the effects of private property than Rousseau. He believed that there was something deeply wrong with the conception of government as the protector of private property that intervenes as little as necessary in the affairs of individuals, leaving them simply free to pursue life, liberty and estate as they see fit. Rousseau, in many ways, points back to an older, you might say classical, conception of government, of the ancient polis and ancient republic, one for which politics had the task, among other things, of supervising the pursuit and acquisition of property, mitigating the harshest effects of economic inequalities. A single sentence from Rousseau's First Discourse, The Discourse on the Arts and Sciences, in many ways says it all. "Ancient politicians," he wrote, "spoke only about morals and virtues. Ours speak only of commerce and money." That was Rousseau's complaint: no talk any longer of civic virtue and citizenship. Locke's view, recall from just a couple of days ago, is that the emancipation of acquisition makes everyone better off. In Locke's famous formula, a day laborer in England is housed, clothed and fed better than a king of the Americas. And Rousseau believed that from a strictly economic point of view there is certainly a lot of truth to this. But he also realized that the economic point of view barely began to scratch the surface of things. Rousseau is far more impressed, you might say, by the proud dignity and independence of the native American king than with all the luxuries and conveniences that have made European kings and even some European day laborers soft and dependent, in his words. Rousseau was deeply impressed--and again you see this in his footnotes--with the kind of inassimilable character of native peoples, Icelanders, Greenlanders, Hottentots, he writes, with their refusal to assimilate in many ways to European religion and custom. They prefer their "personal independence," he writes, to the comforts and luxuries of modern civilization. Consider the following passage, which is one of the passages I love in this book, and which comes from his footnotes. He says, "savages have often been brought to Paris, London and other cities. People have been eager to display our luxury, our wealth and all our most useful and curious arts to them. None of this has ever excited in them anything but a stupid admiration without the least stirring of covetousness.
I recall, among others, the story of a chief of some North Americans who was brought to the court of England about 30 years ago," he writes. "A thousand things were made to pass before his eyes in an attempt to give him some present that could please him, but nothing was found about which he seemed to care. Our weapons seemed heavy and cumbersome to him. Our shoes hurt his feet. Our clothes restricted him. He rejected everything. Finally, it was noticed that, having taken a wool blanket, he seemed to take some pleasure in wrapping it around his shoulders. ‘You will agree at least,' someone immediately said to him, ‘on the usefulness of this furnishing.' ‘Yes,' he replies, ‘this seems to be nearly as good as an animal skin.' However," Rousseau says, "he would not have said that had he worn both of them in the rain." And there is Rousseau's sense of the virtue, the proud independence, of the native, of the sauvage, as he calls him, contrasted to the decadence, the corruptness, the softness of the modern European. Rousseau accepted the assertion that market economies, and the governments that protect them, do in a sense make all people better off, and yet despite this fact he realized that market economies also introduce vast inequalities between human beings. This is a tradeoff that Rousseau seemed unwilling to make, even though, I would have to say, most Americans seem fairly happy with this tradeoff, perhaps because we either are or have become kind of natural Lockeans. But that, again, is a bargain or a tradeoff that Rousseau was at least unwilling to accept. He was not only impressed with what was gained by the progress of civilization but more impressed, so to speak, by what was lost. As its inequalities increase, we are forced to become greedy, calculating, acquisitive; again, our natural pity or compassion is easily overcome by these more powerful passions. What becomes of our original goodness and natural decency? Natural man, for Rousseau, thought of himself, and only of himself, whereas civilized man is forced to think of others, but he does so only in a calculating and mercenary way, thinking of them as means to his own ends. Even the social bond itself, even the social contract, is simply an agreement among business partners, so to speak--the contract, the most bourgeois of institutions. The fact is, Rousseau believes, under modern conditions natural man is transformed into a bourgeois. Rousseau is one of the first to use that term in quite that way. Locke's rational and industrious man was, for Rousseau, simply the calculating bourgeois, and unlike the natural man, who thinks only of himself, or the classical citizen of Rome or Sparta, who thinks only of the common good and his public duties, the bourgeois inhabits a kind of halfway house, capable neither of original or spontaneous goodness nor of political heroism or self-sacrifice. In short, we modern people have become kind of nullities, nowhere men--Nowhere Man, you might say, in the title of the Beatles' song. How did that happen? How did we come to be put in this situation? I have suggested one answer, or Rousseau suggests one answer: the development of property. But that's only part of the story, and what I want to do for next week is tell another very important, in some respects even more important, part of that story of how we have become the way we are. So anyway, have a good weekend. Go to the football game, and we're going to win against Princeton.
12. The Sovereign State: Hobbes' Leviathan
Professor Steven Smith: O.K., today, what a joy. What a joy! We start Hobbes. And he is one of the great treats. Thomas Hobbes was the author of the first and, I believe, undoubtedly the greatest work of political theory written in the English language. He was a master of English style and prose, and his work ranks among the very greatest in this or any other language. Leviathan is to prose what Milton's Paradise Lost is to epic poetry. Think about that. Hobbes was in many ways a perfect foil for Machiavelli. He played the part of Doctor Watson to Machiavelli's Sherlock Holmes. Hobbes, in other words, carried out what Machiavelli had helped to make possible. Machiavelli, you remember, claimed to have discovered a new continent, new modes and orders. It was Hobbes who helped to make this new continent habitable. Machiavelli, you might say, cleared the brush. He was the Lewis and Clark, or the Columbus. Hobbes built the houses and institutions. Hobbes provided us with the definitive language in which even today we continue to speak about the modern state. However, and this is what I want to emphasize throughout our reading of Hobbes, he has always been something of a paradox to his readers. On the one hand, you will find Hobbes the most articulate defender of political absolutism. Hobbes intends, in the Hobbesian doctrine of sovereignty, for the sovereign to have a complete monopoly of power within his given territory. In fact, the famous frontispiece to the original 1651 edition of Leviathan, which is reproduced in your edition, although it is not a very good reproduction and not altogether clear, depicts the Leviathan, the state, the sovereign, holding a sword in one hand and the scepter in the other, with the various institutions of civilian and ecclesiastical authority on each side. The sovereign holds total power over all the institutions of civilian and ecclesiastical life, holding sway over a kind of peaceable kingdom. Add to this, to the doctrine of indivisible sovereign power, Hobbes' insistence that the sovereign exercise complete control over the churches, over the university curricula, and over what books and opinions can be read and taught. He seems to be the perfect model of absolutism and of absolute government. But you have to consider also the following. Hobbes insists on the fundamental equality of human beings, who, he says, are endowed with certain natural and inalienable rights. He maintains that the state is a product of a covenant or a compact, a contract of a sort, between individuals, that the sovereign owes his authority to the will or the consent of those he governs, and finally that the sovereign is authorized only to protect the interests of the governed by maintaining civil peace and security. From this point of view, it would seem that Hobbes helps to establish the language of what we might think of as the liberal opposition to absolutism. And this paradox was noted even in Hobbes' own time. Was he a defender of royalism and the power of the king, or was he an opponent of royalism? In many ways, to be sure, Hobbes was a product of his time, and what else could he be? But Hobbes lived at a time when the modern system of European states, as we understand them even today, was just beginning to emerge.
Three years before the publication of Leviathan in 1651, the signing of the Treaty of Westphalia, the famous peace treaty, brought an end to more than a century of religious war that had been ignited by the Protestant Reformation. The Treaty of Westphalia officially put an end to the Thirty Years War, but more than that, it ratified two decisive features that would be given powerful expression by Hobbes. First, the Treaty declared that the individual sovereign state would henceforth become the highest level of authority, you might say putting an end once and for all to the universalist claims of the Holy Roman Empire. Each state was to be sovereign and to have its own authority. And second, that the head of each state would have the right to determine the religion of the state, thus putting an end to the claims of a single universalist church. This is what the Treaty of Westphalia put into practice and, among other things, what Hobbes attempted to express in theory in his book: the autonomy and authority of the sovereign and the sovereign's power to establish what religious doctrine or, even more broadly, what opinions are to be taught and held within a community, within a state. Who was Hobbes? Let me say a word about him. Hobbes was born in 1588, the year that the English naval forces drove back the invasion of the famous Spanish Armada. He grew up in the waning years, the last years, of the Elizabethan era, and he was a boy when Shakespeare's most famous plays were first performed. Hobbes, like many of you, was a gifted student, and he went to college. His father, who was a local pastor from the southwest of England, sent him to Oxford at the age of 14. And after he graduated, he entered the service of an aristocratic family, the Cavendish family, where he became a private tutor to their son. His first book was a translation of Thucydides' History of the Peloponnesian War, which he completed in 1629; Thucydides, the great historian of the Peloponnesian War, whom we mentioned before when we talked about Plato. Hobbes was a gifted classical scholar. He spent a considerable amount of time on the European continent with his young tutee, Mr. Cavendish. And while he spent time in Europe, he met Galileo and Rene Descartes. It was during the 1640s, the period that initiated the great civil wars in England and the execution of the king, Charles I, that Hobbes left England to live in France while the fighting went on. He left England with many of the royalist, aristocratic families who were threatened by the republican armies organized by Cromwell that had executed the King. In fact, the three justices, the three judges, who were in charge of the judicial trial of Charles I, King Charles, the one who lost his head, those three judges later found a home where? In New Haven. They came to New Haven, the three judges: Whalley, Goffe, and Dixwell. Does that sound familiar? Yes. New Haven was in part founded by members of, you might say, the republican opposition to royalty and to the English king. Anyway, Hobbes was deeply distressed by the outbreak of war, and he spent a great deal of time reflecting on the causes of war and political disorder. His first treatise, a book called De Cive, or De Cive, depending on how you pronounce it, On the Citizen, was published in 1642, and it was a kind of draft version of the Leviathan that was published almost a decade later, again in 1651.
Hobbes returned to England the same year as the book's publication. Leviathan was written when he was well into his 60s; he was 63 when it was published. He spent most of the rest of his long life working on scientific and political problems. He wrote a history of the English Civil Wars, called Behemoth, which remains a classic analysis of the causes of social conflict. And as if this were not enough, near the very end of his life, he returned to his classical studies, translating all of Homer's Iliad and Odyssey. He died in 1679 at the age of 91. From the various portraits and descriptions of Hobbes, we can tell he was a man of considerable charm, and I wish that in the book we had had his picture, a reproduction of his portrait. But I just want to read one brief passage from his biographer, a man named John Aubrey, who knew him; it was written during Hobbes' lifetime. Aubrey wrote about Hobbes: "He had a good eye and that of hazel color, which was full of life and spirit, even to the last. When he was earnest in discourse, there shone, as it were, a bright live coal within it. He had two kinds of looks. When he laughed, was witty, in a merry humor, one could scarce see his eyes, and by and by, when he was serious and positive, he opened his eyes round. He was six foot high and something better," so that was very tall in the seventeenth century. "He had read much, if one considers his long life, but his contemplation was much more than his reading. He was wont to say that if he had read as much as other men, he should have known no more than other men." So his point was that he had read a lot, but what was most important was his thinking. If he had read as much as other men, he would have known as little. That gives you a little sense of Hobbes' spirit, his humor, the wry wit that becomes apparent on almost every page of this book, though you have to be a careful reader. Hobbes was deeply controversial, as you might suspect, during his lifetime. Leviathan was excoriated by almost every reader of the text. To the churchmen, he was a godless atheist. To the republicans, he was tainted with monarchy, or monarchism. And to the monarchists, he was a dangerous skeptic and free thinker. Hobbes, again, along with Machiavelli, was one of the great architects of the modern state. And to some degree he seems even more characteristically modern than Machiavelli. Consider just some of the following. Machiavelli speaks of the prince, while Hobbes speaks of the sovereign, that is, a kind of impersonal or, in Hobbes' language, artificial power created out of a contract. Hobbes' method seems scientific. It seems formal and analytical, in contrast to Machiavelli's combination of historical commentary and reflection drawn from personal experience. While Machiavelli often spoke of the sublime cruelty of men like Scipio and Hannibal, Hobbes speaks the more pedestrian language of power-politics, where the goal is not glory and honor but self-preservation. And Machiavelli's emphasis upon arms is considerably attenuated by Hobbes' emphasis on laws. Hobbes, in other words, tried to render acceptable, tried to render palatable, what Machiavelli had done by providing a more precise and more legal and institutional framework for the modern state. So let's think a little bit about what it was that Hobbes was attempting to accomplish.
Hobbes, like Machiavelli, was an innovator, and he was self-consciously aware of his innovations. And like Machiavelli, who said in the fifteenth chapter of The Prince that he would be the first to examine the effectual truth of things, as opposed to the imaginings of them, Hobbes wrote that civil science--that is what he called political science, civil science--was "no older than my book De Cive." Modern political science, he said, began with this book of 1642. What did he think of as his novelty? What was new, what was revolutionary or innovative, about Hobbes' political science? Hobbes clearly saw himself, in many respects, as founding a political science modeled on that of the early founders of the scientific revolution: Galileo, whom I have already indicated Hobbes had met, William Harvey, Rene Descartes, and a handful of others who were part of what we think of as the modern scientific revolution. And like these other revolutionaries, who had overthrown, you might say, the Aristotelian paradigm in natural science, Hobbes set out to undermine the authority of Aristotle in civil science, in political and moral science. Hobbes set himself up as the great anti-Aristotle, the great opposition to Aristotle. Consider just the following passage from Leviathan, with one of my favorite titles in the book, a chapter called "Of Darkness from Vain Philosophy and Fabulous Traditions." In that chapter, chapter 46, Hobbes writes: "There is nothing so absurd that the old philosophers have not some of them maintained. And I believe that scarce anything could be more absurdly said in natural philosophy than that which is now called Aristotle's Metaphysics, nor more repugnant to government than much that he had said in his Politics, nor more ignorantly than a great part of his Ethics." So there you see Hobbes laying down a challenge. What was it that he claimed to find so absurd, repugnant and ignorant in Aristotle? What was he trying to dethrone in Aristotle? Hobbes is typically concerned with the foundations of this new science, getting the building blocks right from the beginning. The opening chapters of Leviathan, of which I have assigned only a few, present a kind of political physics where human beings are reduced to body, and the body is further reduced to so much matter and motion. Human beings can be reduced to their movable parts, much like a machine. "What is life?" he asks rhetorically in the introduction. "What is life but a motion of the limbs? What is the heart but a spring, or reason but a means of calculating pleasures and pains?" He sets out to give a deliberately and thoroughly materialistic and non-teleological physics of human nature. In fact, a French disciple of Hobbes in the next century, a man named La Mettrie, wrote a treatise very much following the lines of Hobbes called L'Homme Machine, literally, Man a Machine. This is the way Hobbes' new science of politics appears to begin, and that new beginning is intended to offer in many ways a comprehensive alternative to Aristotle's physics and Aristotle's politics. Aristotle, remember, argues that all action is goal-directed, goal-oriented. All actions aim at preservation or change, at making something better or preventing it from becoming worse. Hobbes believed, on the other hand, that the overriding human fact, the overriding motivation of human behavior, is largely negative: not the desire to do good, but the desire to avoid some evil.
Aristotle, for Hobbes, had simply seen the world through the wrong end of the telescope. For Aristotle, human beings have a goal or a telos, which is to live a life in community with others for the sake of human flourishing. But for Hobbes, we enter into society not in order to fulfill or perfect our rational nature, but rather to avoid the greatest evil, namely death, or fear of death, at the hands of others. Politics, for him, is less a matter of prudential decisions of better and worse than it is, you might say, an existential decision of choosing life or death. For Hobbes, in many ways, as for Machiavelli, it is the extreme situation of life and death, of chaos and war, that comes to serve as the norm for politics and political decision-making--a fundamental alternative or challenge to Aristotle. And furthermore, Hobbes not only criticized, you might say, the motivational and psychological foundations of Aristotle's theory of politics and human nature, he blamed the influence of Aristotle for much of the civil conflict of his age. The civic republicans in England of his time, by whom Aristotle was increasingly being embraced, had been brought up, according to Hobbes, on Aristotle's teaching that man is by nature a political animal. This was the thesis of the classical republicans, according to which we are only fully human, or we only become fully human, when we are engaged in political life, in ruling ourselves by laws of our own making. This was a doctrine that Hobbes attributes to much of the teaching at the universities of his age. And it is precisely this desire to be self-governing, you might say to rule directly, to have a direct part in political rule, that Hobbes saw as one of the great root causes of civil war. His answer to Aristotle and to the classical republicans of his age was his famous doctrine of what we might call "indirect government," or what would perhaps be more familiar to us by the term "representative government." The sovereign is not, for Hobbes, the people or some faction of the people ruling directly in their collective capacity. The sovereign is, for Hobbes, the artificially reconstructed will of the people in the person of their representative. The sovereign representative acts, you might say, like a filter for the wills and passions of the people. The sovereign is not the direct expression of my will or your will, but rather an abstraction from my natural desire to rule myself. In other words, instead of seeking to participate directly in political rule, Hobbes wants us to abstain from politics by agreeing to be ruled by this artificial man, as he calls it, this artificial person or representative to whom he gives the name "the sovereign." "For by art," he says in the introduction, "is created that great Leviathan called a commonwealth or a state, which is but an artificial man, though of greater stature and strength than the natural, for whose protection and defense it was intended." The sovereign, or Leviathan, this great artificial man, is something more like what we would call today an office rather than a person, as when we speak of the executive as an office distinct from the person who happens to inhabit it, although that might be somewhat questionable in some of our recent executive decisions. But for Hobbes, there is this political office called the sovereign.
Now, his language in that sentence that I just read from the introduction--"For by art is created that great Leviathan called a commonwealth or a state"--when Hobbes uses the term "art" there, that term is deeply revealing of his purpose. For Aristotle, by contrast, art presupposes nature. In other words, nature precedes art. Nature supplies the standards, the materials, the models, for all the later arts--the city being by nature, man being by nature, nature provides the standard. Nature precedes art and human artifice, human making. But for Hobbes, by contrast, art does not so much imitate nature; rather, art can create a new kind of nature, an artificial nature, an artificial person, as it were. Through art, again, is created the great Leviathan. Through art properly understood--and by "art," of course, I mean something like human making, human ingenuity, human artfulness--we can begin not just to imitate but to transform nature, make it into something of our own choosing. "Art" here is not to be understood as the antithesis of science, as when we speak of the arts and the sciences. Rather, science is the highest form of art. Science is the highest kind of human making. Science, or what Hobbes simply calls by the name "reason," is simply the fullest expression of human artfulness. Reason, he says in chapter 5, is not, as sense and memory, born with us, "nor gotten by experience only," but is "attained by industry, first in apt imposing of names and secondly by getting a good and orderly method." Think of those terms. Reason--and again, he uses this synonymously with other terms, like science or art--is not simply born with us. It is not simply a genetic endowment, nor is it simply the product of experience, which Hobbes calls by the name "prudence." Rather, reason, he says, is attained by industry, by work, and it is developed first, he says, by the imposing of names on things, the correct names on things, and second by getting a good and orderly method of study. Reason consists in the imposition of a method for the conquest of nature. By science, Hobbes tells us, he means the knowledge of consequences, and especially, he goes on to say, "when we see how anything comes about, upon what causes and by what manner, when like causes come into our power, we see how to make it produce like effects." Reason, science, art is the capacity to transform nature by imposing on it a method that will make like causes produce like effects. There is, in other words, a kind of radically transformative view of reason and knowledge and science--political science, civil science--running throughout Hobbes' work. Reason is not about simple observation but rather about making, production, or, as he says, making like causes produce the desired effects. We can have a science of politics, Hobbes believes. We can have a civil science, because politics is a matter of human making, of human doing, of human goings-on. We can know the political world. We can create a science of politics because we make it. It is something constructed by us.
Hobbes' goal here, as it were, is to liberate knowledge, to liberate science, from subservience or dependence upon nature or chance, fortuna, by turning science into a tool for remaking nature to fit our needs, to satisfy our needs through our science. Art, and especially the political art, is a matter of reordering nature, even human nature: first, according to Hobbes, by resolving it into its most elementary units, and then by reconstructing it so that it will produce the desired results, much as a physicist in a laboratory might. This is Hobbes' answer to Machiavelli's famous call in chapter 25 to master fortuna, to master chance or luck, fortune. But you might say Hobbes goes further than Machiavelli. Machiavelli said in that famous chapter 25 that the prince, if he is lucky, will master fortuna about half the time, only about 50% of the time. The rest of human action, the rest of statecraft, will really be left to chance, luck, contingency, circumstances. Hobbes believes that armed with the proper method, with the proper art or scientific doctrine, we might eventually become the masters and possessors of nature. And I use that phrase "masters and possessors of nature," a phrase not of Hobbes' making but of Descartes, from the sixth part of the Discourse on Method, because I think it perfectly expresses Hobbes' aspirations: not only to create a science of politics, but to create a kind of immortal commonwealth, one based on the proper civil science and therefore impervious to the fluctuation, decay, war and conflict that all previous societies have experienced. You can begin to see, in other words, in Hobbes' brief introduction to his book, as well as in the opening chapters, the immensely transformative and really revolutionary spirit underlying this amazing, amazing book. So where do we go from here? We turn from methodology and science to politics. What is Hobbes' great question? It is always important, when starting out with a new book, to ask yourself: what question is the author trying to answer? And that is not always easy to answer, because authors do not always make their deepest or most fundamental questions altogether clear. In the case of Leviathan, I would suggest to you, Hobbes' central question is: what makes authority possible? What is the source of authority, and, you might say, what renders it legitimate? Maybe the question is: what makes legitimate authority possible? This is still a huge question for us when we think about nation building and building new states, how to create a legitimate authority. Obviously, there is a tremendous issue with this in Iraq today. People there and here struggle with what would constitute a legitimate authority. Perhaps we should airlift copies of Leviathan to them, because that is the issue that Hobbes is fundamentally concerned with. His question goes further. How can individuals who are biologically autonomous, who judge and see matters very differently from one another, who can never be sure whether they can trust one another--how can such individuals accept a common authority? And again, the question is not just what constitutes authority, but what makes authority legitimate. That remains the fundamental question not only for Hobbes but for the entire social contract tradition that he helped to establish.
You might say, of course, that the question of what renders authority legitimate is only raised when authority is in question, that is to say, when the rules governing authority have broken down in times of crisis. And that was certainly true in Hobbes' time, a time of civil war and crisis. What renders authority legitimate or respectable? To answer that question, Hobbes tells a story. He tells a story about something he calls "the state of nature," a term he did not invent, but with which his name will always and forever be associated, the idea of the state of nature. The state of nature is not a gift of grace or a state of grace from which we have fallen, as in the biblical account of Eden, nor is the state of nature a political condition, as maintained in some sense by Aristotle when he says the polis is by nature. The state of nature for Hobbes is a condition of conflict and war. And by a state of nature, or rather a state of war, he means a condition where there is no recognized authority, in his language, to keep us in awe, no authority to awe us. Such a condition, a state of war, may mean a condition of open warfare, but not necessarily. It can signify battle, but Hobbes says it can also signify the will to contend; simply the desire or the will to engage in conflict renders something like a state of war. A state of war can include, in other words, what we might call a "cold war": two hostile sides looking at each other across a barrier of some type, not clear or not certain what the other will do. So the state of nature is not necessarily a condition of actual fighting, but what he calls a "known disposition to fight." If you are known or believed to be willing to fight, you are in a state of war. It is a condition for Hobbes of maximum insecurity, where in his famous formula "life is solitary, poor, nasty, brutish, and short." Perhaps he should have said fortunately short. This is the natural condition, the state of nature, the state of war, that Hobbes attributes to, again, the fundamental facts of human nature. Now, his claim that the state of nature--the state of war, rather--is the condition that we are naturally in is to say, among other things, that nature does not unite us in peace, in harmony, in friendship, or in solidarity. If nature is a norm, it does not, again, mandate or incline us to peace, friendship, and solidarity with others. Only human art or science, human contrivance, can bring about peace. Conflict and war are primary. Peace is derivative. In other words, for Hobbes, authority and relations of authority do not arise naturally among us but are rather, again, like civil science itself, the product of contrivance or art. So the question remains for us, as it deeply challenged readers in Hobbes' own time: what makes Hobbes' story, as I am calling it, his story about the state of nature being a condition of war, plausible? What makes it believable as an account of, again, the condition we are naturally in? Why should we believe Hobbes' story and not some other story? I just want to say a word about that before closing. From one point of view, Hobbes' account of the state of nature seems to derive from his physics of motion and rest in the opening chapters of Leviathan. He begins the work, you remember, with an account of human nature, an account of human psychology, as a product of sense and experience.
We are bodies in motion, bodies that cannot help but obey the laws or the physics of attraction and repulsion. We are bodies in constant motion. He seems, in other words, to have a kind of materialistic psychology in which human behavior exhibits the same, as it were, mechanical tendencies as billiard balls, and can be understood as obeying, again, geometric processes of cause and effect. Right? The state of nature is not seen by him as an actual historical condition, although he occasionally will refer to what we might think of as anthropological evidence to support his views on the state of nature. The state of nature, for him, is rather a kind of thought experiment after the manner of experimental science. It is a kind of thought experiment. It consists of taking human beings who are members of families, of estates, of kingdoms, and so on, dissolving these social relations into their fundamental units, namely abstract individuals, and then imagining, again, in the manner of a chemist or a physicist, how these basic units would hypothetically interact with one another, almost like the properties of chemical substances in some ways. How would we behave in this kind of thought experiment? That would be one way of reading it: Hobbes seems to want us to think about the state of nature as akin to a scientific experiment. Hobbes is, again, among others, the great founder of what we might call the experimental method in social and political science. And there is perhaps a reason for this, too. I will end just on this note. When Hobbes was a young man, he worked for a short time as a private secretary to another very famous Englishman by the name of Francis Bacon, the great founder of what we think of as the experimental method, the method of trial and error, of experience and experiment, and arguably Hobbes was influenced in many ways by Bacon's own philosophy of experience and experiment. Hobbes took Bacon's method, in some ways, and applied it to politics. He tried to imagine, again, the natural condition of human beings, what we are by nature, by a process of abstraction: abstracting all of the relations and properties that we have acquired over history, through custom, through experience, stripping those away like the layers of an onion, and putting us almost, as it were, in an experimental test tube or under a microscope, to see how we would react and behave with one another under those conditions. I will leave it at that, although I will start next week by showing how that view of Hobbes is at best only partially correct. So anyway, have a wonderful weekend with your parents here, and I will see you next week.
Lecture 4: Philosophers and Kings: Plato's Republic, III
Professor Steven Smith: There is one person in here, I don't know who it is, and you will not know who it is yet, but there is one person in here for whom the reading of Plato's Republic will be the most important intellectual experience you have at Yale. It is a book that one of you will go back to time and time again and it will stick with you forever. What I would like you to do is to remember this and four years from now, when most of you are ready to graduate, if that one person in here would email me and let me know who it is, okay? Maybe it will be you. Maybe? Possibly. Or you. Okay. This is the book that started it all. The Apology, the Crito, these are warm-ups to the big theme, to the big book, the Republic. Every other book of political science that has since been written, beginning with Aristotle's Politics and moving on to the present day, is, in one way or another, an answer, a response, to Plato's Republic. It started the whole thing. The first and most obvious thing to say about the Republic is that it is a long book. Not the longest book you will ever read, but long enough. In part because of this, we are only reading approximately half the book, the first five books, to be more specific, the first five books that deal with and culminate in the best city, Plato's ideal city, what he calls Kallipolis, the just city, the beautiful city, ruled by philosopher-kings. The second half of the book turns in somewhat different, certainly equally important, directions, but would take much more time than we have allotted. So you will read that on your own. You can take another course, what have you. The Republic is a very perplexing book, you will find out. Its meaning will not be evident to you on a first reading. It may not be clear to you on a tenth reading, unless you approach it with the proper questions and the proper frame of mind. So let's start by asking a simple question. What is the Republic about? What does this book deal with? This is a question that has perplexed and divided readers of Plato almost from the beginning. Is it a book about justice, as the subtitle of the book suggests? Is it a book about what we today might call moral psychology and the right ordering of the human soul, which is a prominent theme addressed in this work? Is it a book about the power of poetry and myth, what we would call the whole domain of culture, to shape souls and to shape our societies? Or is it a book about metaphysics and the ultimate structure of being, as certainly many of the later books of the Republic suggest--the theory of the forms, the image of the divided line, and so on? Of course, it is about all of these things and several others as well. But at least at the beginning, when we approach the book, we should stay on its surface, not dig too deeply, at least initially. As one of the great readers of Plato of the last century once said, "Only the surface of things reveals the essence of things." The surface of the Republic reveals that it is a dialogue. It is a conversation. We should approach the book, in other words, not as we might a treatise, but as we might approach a work of literature or drama. It is a work comparable in scope to other literary masterworks--Hamlet, Don Quixote, War and Peace, others you might think of. As a conversation, as a dialogue, it is something the author wants us to join, to take part in.
We are invited to be not merely passive onlookers of this conversation, but active participants in the dialogue that takes place in this book over the course of a single evening. Perhaps the best way to read this book is to read it aloud, as you might with a play, to yourself or with your friends. Let's go a little further. The Republic is also a utopia, a word that Plato does not use; it was not coined until many centuries later by Sir Thomas More. But Plato's book is a utopia. It is a kind of extreme. He presents an extreme vision of politics, an extreme vision of the polis. The guiding thread of the book--and we will look at this at some length, and you will discuss it in your sections, no doubt--is the correspondence, the symmetry, between the parts of the city and the parts of the soul. Discord within the city, just as discord within the soul, is regarded as the greatest evil. The aim of the Republic is to establish a harmonious city, based on a conception of justice that, so to speak, harmonizes the individual and society, and to show how to achieve that. The best city would necessarily be one that seeks to produce the best or highest type of individual. Plato's famous answer is that this city--any city--will never be free of conflict, will never be free of factional strife, until, in his famous formula, kings become philosophers and philosophers become kings. The Republic asks us to consider seriously: what would a city look like ruled by philosophers? In this respect, it would seem to be the perfect bookend to the Apology. Remember, the Apology viewed the dangers posed to philosophy, the philosopher, and the philosophical life from the city. The Republic asks us: what would a city be like if it were ruled by Socrates or someone like him? What would it be like for philosophers to rule? Such a city would require, so Socrates tells us throughout the opening books, the severe censorship of poetry and theology, the abolition of private property and the family, at least among the guardian class of the city, and the use of selected lies and myths, what would today probably be called ideology or propaganda, as tools of political rule. It would seem that, far from a utopia, the Republic represents a radical dystopia, a satire, in some sense, of the best polity. In fact, much of modern political science is directed against Plato's legacy. The modern state, as we have come to understand it, is based upon the separation of civil society from governing authority, the entire domain of what we call private life separated from the state. But Plato's Republic recognizes no such separation, no such independence for a private sphere. For this reason, Plato has often been cast as a kind of harbinger of the modern totalitarian state. A famous professor at a distant university was said to have begun his lectures on the Republic by saying, "Now we will consider Plato, the fascist." This was, in fact, the view popularized by one of the most influential books about Plato written in the last century, a book written by a Viennese émigré by the name of Karl Popper, who in the very early 1950s, right at the height of the Cold War and of course in the aftermath of the Second World War, wrote a book called The Open Society and Its Enemies. He wanted to know what the causes were, or who was responsible for, the experiences of totalitarianism, both in Stalin's Russia and in Hitler's Germany.
In the course of this inquiry, he concluded that not only were Hegel and Marx important in that particular genealogy, but that it went back to Plato as well, Plato principally. Popper accuses Plato, in a passionate, albeit not very well written, book of being the first to establish a kind of totalitarian dictatorship. Is that true? Plato's Republic is, we will discover as you read, a republic of a very special kind. It is not a regime like ours, devoted to maximizing individual liberties, but one that takes the education of its citizens, the education of its members, as its highest duty. The Republic, like the Greek polis, was a kind of tutelary association. Its principal good, its principal goal, was the education of citizens for positions of public leadership and high political responsibilities. It is always worthwhile to remember that Plato was, above all, a teacher. He was the founder of the first university, the Academy, the Platonic Academy, where, we will find out later, Aristotle came to study, among many others--Aristotle being but the most famous. Plato was the founder of this school. This, in turn, spawned other philosophical schools throughout the Greek world and later the Roman world. With the demise of Rome, in the early Christian centuries, these philosophical academies, these philosophical schools, were absorbed into the medieval monasteries. These, in turn, became the basis of the first European universities in places like Bologna, Paris, Oxford. These were, in turn, later transplanted to the New World and established in towns like Cambridge and, of course, New Haven. We can say today that this university is a direct descendant of Plato's Academy. We are all here the heirs of Plato. Think of that. Without Plato, no Yale. We would not be here today. I think that is a fact. Just ponder that for a moment. In fact, let me even say a little more about this. The institutional and educational requirements of Plato's Republic share many features with a place like Yale. For example, in both the Platonic Kallipolis, the just city, as well as this place, men and women--men and women--are selected at a relatively early age because of their capacities for leadership, for courage, for self-discipline, and responsibility. They spend several years living together, eating together in common mess halls, exercising together, and studying together, of course, far from the oversight of their parents. The best of them are winnowed out to pursue further study and eventually assume positions of public leadership and responsibility. Throughout all of this, they are subjected to a course of rigorous study and physical training that will lead them to adopt prominent positions in the military and other branches of public service. Does this sound at all familiar to you? It should. Let me put it another way. If Plato is a fascist, what does that make you? Plato, of course, is an extremist. He pushes his ideas to their most radical conclusions. That's what it is to be a philosopher. But he is also defining a kind of school. He regards the Politeia--the Republic, because that is the original Greek title of the book, Politeia, or regime--as a school whose chief goal is preparation for guidance and leadership of a community. If you don't believe me about this, maybe you will consider the words of Jean-Jacques Rousseau, one of the great readers of Plato's Republic.
Rousseau wrote in his Emile, "To get a good idea of public education," he says, "read Plato's Republic. It is not a political treatise, as those who merely judge books by their title think, but it is the finest, most beautiful work on education ever written." Rousseau. So, there we go. Let's now peek into the book itself. Just peek. We won't go too far. Let's start with the first line. Who remembers what the first line is? Oh, come on. You should know this. You're looking at the book. You're cheating. "I went down to the Piraeus." I went down to the Piraeus. Why does Plato begin with this line? There's a story that I heard. I'm not sure if it's altogether true, but it's a good story, at least, about the famous German philosopher Martin Heidegger, who said that on his first teaching of the Republic, he went through the whole book, taught the whole book in one seminar, one semester. The last time he taught it, the final time he taught it, he never got beyond the first sentence, "I went down to the Piraeus." What does it mean? Why does he begin with this? "I went down," a going down. The Greek word for this is catabasis. "I had made a descent." There is a book by a famous contemporary of Plato, a man named Xenophon, who wrote a book called the Anabasis. Anabasis means a going up, an ascent. But Plato begins this dialogue with a descent: "I went down." The descent to the Piraeus. It is clearly modeled on Odysseus' descent to Hades in the Odyssey. In fact, the work is a kind of philosophical odyssey that both imitates Homer and anticipates other great odysseys of the human mind, works by those like Cervantes or Joyce. The book is full, you will see, of a number of descents and ascents. The most famous climb upward, although we will not actually read these parts for this class, concerns the climb to the divided line, the famous image of the divided line in Book VI, and the ascent to the world of the imperishable forms. Then, in the last book of the Republic, Book X, there is, once again, a descent to the underworld, to the world of Hades. The work is not, in a sense, written simply as a sort of timeless philosophical treatise, but as a dramatic dialogue with a setting, a cast of characters, and a firm location in time and place. Let's say a little more about that time and place already indicated in the sentence, "I went down to the Piraeus." Plato was born in 427, which is four years after the commencement of the Peloponnesian War. He was a young man of 23 when the democracy in Athens was defeated. He was only 28 when the restored democracy executed his friend and teacher, Socrates, in 399. Almost immediately after the trial of Socrates, Plato left Athens and traveled extensively throughout the Greek world. Upon his return, he established this school at Athens he called the Academy, for the training of philosophers, statesmen, and legislators. Plato lived a long time. He lived until the age of 80. Except for two expeditions to Sicily, where he went at the request of Dionysius to help try to establish a philosophical kingship in Syracuse, he remained in Athens teaching and writing. The Republic belongs to that period of Plato's work after his return to Athens, after the execution of Socrates. The dominant feature of Plato's political theory, David Grene, a great reader of Plato, has said, is "the root and branch character of the change it advocates in existing institutions."
Plato's desire for a kind of radical makeover of Athenian and Greek political institutions and culture grew out of his experience of political defeat and despair. The utopianism of the book is, in many ways, the reverse side of the sense of profound disillusionment that he felt at the actual experience of the Athenian polis. This was not only true of his experience at home, but of his failed efforts to turn Dionysius' kingship in Sicily into a successful example of philosophical rule. In fact, we have--and I want to read to you in just a moment--a lengthy account in Plato's own words of why he came to write the Republic. One thing, of course, you note in the Republic is that Plato is nowhere present. He is not a participant in his own dialogue. He is the author, but not a participant. We don't know precisely what Plato thought, but we are helped, at least, by a kind of intellectual autobiography that he wrote and that we still have, in what is conventionally referred to as The Seventh Letter. Plato wrote a series of letters that we have. People have argued over the authenticity of them, although I think by now it is established that they are his. In the most famous of these letters, the lengthy seventh one, he gives us, again, something of an autobiography and tells us a little bit about why he came to write this book. Isn't it amazing that, some 2,500 years later, we still have the letters of the man who wrote this book? Let me read to you what Plato says about how he came to write this book. "When I was a young man," he said--and this is written when he is very old. "When I was a young man, I felt as many young men do. I felt that at the very moment I attained my majority I should engage in public affairs. And there came my way an opportunity that I want to tell you about. The democratic constitution, then loudly decried by many people, was abolished. And the leaders of the revolution set themselves up as a government of 30 men with supreme authority." He's referring to the Tyranny of the Thirty that existed after the Athenian defeat. "Some of these men, you must understand, were relatives of mine and well known to me. And what is more, they actually invited me at once to join them, as though politics and I were a fit match. I was very young then and it is not surprising that I felt as I did. I thought that the city was then living a kind of life which was unjust and that they would bend it to a just one and so administer it more justly. So I eagerly watched to see what they would do. And you must know, as I looked on, I saw those men in a short time make the former democracy look like a golden age." He is referring to his relatives, men like Critias and Charmides, who turned Athenian politics into a tyranny and who, he says, made the "democracy look like a golden age." Let me continue in Plato's words. "I looked at this, you see, and at the men who were in politics, at the laws and customs. And the more I looked and the older I grew, the more difficult it seemed to me to administer political affairs justly. For you cannot do so without friends and comrades you can trust. And such men it was not easy to find. For the city, you see, no longer lived in the fashion and ways of our fathers. Eager as I had once been to go into politics, as I looked at these things and saw everything taking any course at all with no direction or management, I ended up feeling dizzy.
I did not abandon my interest in politics or in discovering how it might be bettered, and I was perpetually awaiting my opportunity. But at last I saw that as far as all states now existing are concerned, they are all badly governed. For the condition of their laws is bad almost past cure, except for some miraculous accident. So I was compelled to say, in praising true philosophy, that it was from it alone that one was able to discern any justice. And so I said that the nations of the world will never cease from trouble until either the true breed of philosophers shall come to political office or until that of the rulers shall, by some divine law, take up the pursuit of philosophy." There you see, in that wonderful and probing self-examination of his early motives and expectations, the disillusionment of the older Plato looking on what the Tyranny had done, but also looking at the states, the nations, of his time, seeing their management, seeing their decay and conflict, and suggesting that no justice is ever to be expected until, as he says at the end, kings become philosophers and philosophers kings, a direct reference to the Republic. This little autobiography goes on at considerably greater length, I should say. But this provides a kind of introduction, as it were, to the Republic. We have here, in Plato's own words, the way he viewed politics and the reasons for his political philosophy. Yet, in many respects, if the Republic was the result of comprehensive despair and disillusionment with the prospects of reform, the dialogue itself points back to an earlier moment in Plato's life and the life of the city of Athens. This remarkable letter was written when Plato was very old, approximately 50 years after the trial and execution of Socrates. But the action of the Republic takes place long before the defeat of Athens, before the rise of the Thirty and the execution of Socrates. It refers to that period that Plato says in the letter looked like "a golden age," when many things seemed possible. That brings us back to the opening, the descent to the Piraeus. The action of the dialogue begins at the Piraeus, the port city of Athens, somewhere around the year 411, during what was called the Peace of Nicias, that is to say, the peace, a kind of respite or truce, that was established during the fighting between Sparta and Athens. At the very beginning of the dialogue, we see Socrates and his friend Glaucon. What are they doing? What are they doing? Do you remember? What are they doing at the very beginning? Student: [inaudible] Professor Steven Smith: Where? Student: [inaudible] Professor Steven Smith: Right. Let me put it a slightly--yes, they are walking back to Athens from the Piraeus. But maybe to put it a slightly different way, they're trolling the waterfront. What is the Piraeus? It is the harbor of Athens. What do you expect from harbors? What are harbor cities like? What do you find down at harbors? Student: [inaudible] Professor Steven Smith: Water, yeah. They're seedy, aren't they? You find various kinds of disreputable and maybe unseemly things going on there. We are forced to ask ourselves: What are Socrates and Glaucon doing there? Why are they there together? What do they expect to find? These seem to be questions that immediately come to mind. We learn shortly afterwards that they have taken this descent to the Piraeus to view a festival, a kind of carnival. It sounds like something one might expect to see in a Fellini film.
A kind of carnival, a carnivale, a Mardi Gras, where a festival is going on. What's more, a new goddess is being introduced into the pantheon of deities. This seems to suggest--referring back to the Apology--that it is not Socrates but the Athenians who innovate, who create and introduce new deities. Socrates remarks that the Thracians, the display of the Thracians, put on a good show, showing that his own perspective is not simply bound by that of the city. It suggests, from the beginning, a kind of loftiness and impartiality of perspective characteristic of the philosopher, but not necessarily the citizen. On their way back from this festival, from this carnival, they're accosted, you remember. They're accosted by a slave who's been sent ahead by Polemarchus and his friends and who orders Socrates and Glaucon to wait. "Polemarchus orders you to wait," the slave says. He orders you to wait. He is coming up behind you. Just wait. "Of course we'll wait," Glaucon replies. When Polemarchus and his friends arrive, we find that his friends include Adeimantus, who is Glaucon's brother, and Niceratus, who is the son of Nicias, the general who has just brokered the peace that they are now enjoying. That's the famous Peace of Nicias. They challenge Socrates. "Stay with us or prove stronger." Stay with us or prove stronger. "Could we not persuade you?" Socrates asks. "Could we not persuade you to let us go?" "Not if we won't listen," Polemarchus says. Instead, they reach a compromise. Socrates and Glaucon come with Polemarchus and the others to the home of Polemarchus' father, where dinner will be provided for them, and later, a return to the festival where there will be a horseback race. "It seems," Glaucon says, "we must stay." And Socrates concurs. Why does the book begin with this, let's say, opening gambit? Is it simply a ruse to get the reader's attention in some sense, or to rope you in with some promise of what's to follow? Already from the very opening lines we see in this a clue to the theme that is going to follow. Who has the title to rule? Is it Polemarchus and his friends, who claim to rule by strength of numbers? "Can we persuade you?" "Not if we won't listen," he says. Or Socrates and Glaucon, who hope to rule by the powers of reason, speech, and argument? Can we convince you? Can we persuade you? Can democracy, which expresses the will of the majority, the will of the greater number, be rendered compatible with the needs of philosophy and the claim to respect only reason and the better argument? That seems to be the question already posed in this opening scene. Can a compromise be reached between the two? Can the strength of numbers, as well as respect for reason and the better argument, be, in some sense, harmonized? Can they be brought together? Is the just city that Socrates will later consider, perhaps, a combination of these two, of both force and persuasion? That will be something left to see. But I think you can see the big themes of the book already very present in the opening scene of the dialogue. The first book is really a kind of preamble to everything that follows. Okay? Are you with me so far on this? Let's talk a little bit about the participants in this dialogue. It is a dialogue. It has a fairly large number of characters, although only relatively few of them speak in the book. Yet this is something very important, as we would want to know in any play or novel or movie.
We want to note something about the particular people who inhabit this dinner party that Socrates and Glaucon have been promised. Who are they and what do they represent? There is Cephalus, whom we will meet very quickly, the father of Polemarchus, whose home they are attending: the venerable paterfamilias, the venerable father of the family. Polemarchus, his son, a solid patriot who defends not only his father's honor but that of his friends and fellow citizens. We will also see Thrasymachus, a cynical intellectual who rivals Socrates as an educator of future leaders and statesmen. Of course, it is the exchange between Socrates and Thrasymachus that is one of the most famous moments of the book. There is, in this first set of dialogues, a distinct hierarchy of characters, you might say, who express those distinctive features of the soul and the city that we see later on. Cephalus, we learn, has spent his life in the acquisitive arts. That is to say, he's a businessman. He's been concerned with satisfying the needs of his body and making money. He represents what will later be called in the Republic the appetitive part of the soul, the appetites. Polemarchus, whose name actually means "warlord"--think of that--the warlord, is preoccupied with questions of honor and loyalty. He tells us, to get a little bit ahead of ourselves, that justice is helping your friends and harming your enemies. He seems to represent what Plato or Socrates will later call the spirited part of the soul, something that we want to return to. Thrasymachus, a visiting sophist, seeks to teach and educate, anticipating what the Republic will call the rational part of the soul. Each of these figures, in many ways, prefigures the relatively superior natures of those who come later in the dialogue: the two brothers, Glaucon and Adeimantus, whose exchange with Socrates occupies, for the most part, the rest of the dialogue from Book Two onward, the two brothers who, incidentally, are the brothers of Plato. I should say, to my knowledge, we know nothing more about Glaucon and Adeimantus from history, but Plato put them into his dialogue. They will always be remembered as the two brothers in the dialogue. Again, they seem to represent something quite different. Bear this in mind as you are reading the book, because it is easy to forget who's talking and what they represent. Adeimantus is, we will find, the kind of hedonistic and pleasure-seeking brother. Glaucon, whose name means something like "gleaming," "shining," is the fiercer and more warlike of the two brothers. Of course, there is the philosophically minded Socrates. Again, each of them seems to represent, in a superior way, the key components of the human soul: the appetitive, the warlike or spirited, and the rational. Together, these figures form a kind of microcosm of humanity. Each of the participants in the dialogue represents one of the specific classes or groups that will eventually occupy the just city to which Plato or Socrates gives the name Kallipolis, the beautiful city. Alright? In the five minutes or so that remain, let's just talk for a moment about the first conversation, with the head of the family, Cephalus. We don't need to look at this at great length. I'm sure in your sections you will want to talk a little more specifically about the arguments that are used in these first three sets of conversations with Cephalus, Polemarchus, and Thrasymachus.
The question--more importantly, the question, I don't know that it's more important, but the question that I want us to examine a little bit here in the time remaining--is what, again, these characters represent. Cephalus, as his name implies--Cephalus. What does that mean? Do you know? Head, yes. Cephalus, head, the head of the household, but also clearly the claims of age, of tradition, of family. At the beginning of the dialogue, when Polemarchus brought his friends back to the house, we see the aged father, Cephalus. He is just returning from prayer. He has just returned from performing certain acts of ritual sacrifice. He greets Socrates, in many ways, as a long-lost friend. Perhaps you have had this experience yourself, always a slightly uncomfortable one. When you bring a group of your friends back to your house, you're expecting to have a good time, and your grandparent is there and says, "Oh, it's so good to see a bunch of young people. I want to talk with you." It's always a slightly uncomfortable moment, you might say. We all have experienced this kind of thing. Everybody knows it from either end. I'm not a grandparent, thank god. But I feel the same thing often when my son brings his friends, maybe ones that I've known for a long time. "Oh, how are you doing?" and they want to get away. Socrates does something rather abrupt. "Tell me, Cephalus, what's it like to be so old?" "What is it like to be like you?" "Do you still feel the need for sex?" Can you imagine saying that to someone's grandfather? It gives you a little idea of the character of Socrates. Cephalus is so happy. "Oh, thank god I'm past that," he says. "Thank god I no longer feel this erotic desire. At my old age, I can spend my time--" "When I was a young man, that's all I did. I was thinking about sex all the time and when I wasn't thinking about that, I was making money. But now I've had my fill of both and I can spend my later years, the twilight of my life, turning to the things about the gods, performing sacrifices commanded by the gods." Why does Plato begin this way? Well, Cephalus is, as should be clear, the very embodiment of the conventional, in both senses of that term. He's not a bad man, by any means. But he is a thoroughly unreflective one. In attacking Cephalus as he does, Socrates attacks the embodiment of conventional opinion, the Nomos supporting the city. Note the way Socrates manipulates the dialogue, the conversation. Cephalus says that the pious man, the just man, practices justice by sacrificing to the gods. Socrates turns that into the statement that justice means paying your debts and returning what is owed. Cephalus, in an easygoing manner, agrees, and then Socrates says, "What would you think about returning a weapon that you had borrowed from a friend, someone who was in a very depressed"--we might say a depressed--"frame of mind? Would that be just? How do you explain that? Would you do that, if justice means paying your debts and giving back to each what is owed?" At that moment, Cephalus excuses himself from the dialogue and says, rather abruptly, "I have to go out and continue my sacrifices in the garden." Socrates, in other words, has broken the bond of tradition and traditional authority that holds the ancient city and the ancient family together. Cephalus is banished from the dialogue. Tradition is banished and we never hear another word about it for the next 400 or so pages. That's the way Socrates begins this dialogue, or that's the way Plato has Socrates begin it.
We'll look a little more at some of these themes in our next class and then move into the characters of Adeimantus and Glaucon. Anyway, start your reading. Continue your reading. Your sections are going on this week, so enjoy yourselves.
Lecture 21: Democratic Statecraft: Tocqueville's Democracy in America
Professor Steven Smith: I want to begin by talking a little about the question, the problem, to which this immense book--of which you're reading, I don't know, a couple of hundred pages maximum--is addressed. What is the problem with which this huge book is concerned? It's always an important question to ask when you begin a new book. What question is the author trying to ask, or what problem is he trying to deal with? Let me try to set up Tocqueville's problem in the following way. In the seventeenth and eighteenth centuries, the ideas of freedom and equality seemed to walk confidently hand in hand. Hobbes, Locke, Rousseau, whom we've been reading, all believed that in the state of nature, remember, we were all born free and equal. As long as the enemy appeared to be the entrenched hierarchies of power and privilege of the old regime, of the old monarchical societies, freedom and equality were taken to be mutually reinforcing aspects of the emerging democratic order. But it was not until the beginning of the nineteenth century, with the emergence of the democracies or proto-democracies of Europe and the New World, that political philosophers, political thinkers, began to wonder whether freedom and equality did not in fact pull in different directions. Tocqueville in particular--although you could add names like Benjamin Constant or John Stuart Mill--saw the new democratic societies as creating new forms of social power, new types of rule, that represented, in some ways, organized threats to human liberty. What were these new forms of social power? These were, for Tocqueville, the new middle class or what we might call bourgeois democracies, the new middle class democracies emerging in countries like France, England, and of course the United States. And the problem for Tocqueville, his question, as it was for Locke and others before him, was how to mitigate the effects of political power. How does one control or mitigate political power? Yes? Right? You can see that. Locke's answer to this problem, you recall, was to divide and separate the powers, separated powers, a theme clearly taken up and endorsed by the American constitutional framers. But Tocqueville was less certain that this kind of institutional device, so to speak, of checks and balances or separated powers could be a truly effective check in a democratic age where, you might say, the people as a whole had become king. He was less certain that institutional remedies alone could work. Seventy-five years before Tocqueville wrote Democracy in America, Rousseau had taken the doctrine of popular sovereignty to be an ideal to be worked for, taking men as they are and laws as they might be. He looked to the doctrine of popular sovereignty as something that could yet be realized. But Tocqueville, again writing approximately 75 years after Rousseau, saw that this doctrine of popular sovereignty, which for a mid eighteenth-century Frenchman had looked like a far-flung utopian ideal, had become an altogether actual political reality that had taken shape in the backwoods of Jacksonian America. Consider just the following passage from the Democracy. "In the United States," Tocqueville writes, "the dogma," he says, he calls it the dogma, "of the sovereignty of the people is not an isolated doctrine that is joined neither to habits nor to the sum of dominant ideas.
On the contrary," he says, "one can view it as the last link in a chain of opinions that envelops the Anglo American world as a whole." And he goes on to say that extended to the entirety of the nation it becomes, it, this opinion, becomes the dogma of the sovereignty of the people. So here you have Tocqueville's view that this Rousseauian concept of popular sovereignty has become an existent reality. And for Tocqueville, there was no reason to believe that the new democratic states emerging, again, in America and in Europe, these new democratic states, ruled by the people--there was no reason to believe that they will be more just or less arbitrary than any other previous kind of regime. For Tocqueville no one, no person or body of persons, can be safely entrusted with political power and the united power of the people, the united sovereignty of the people, is no more for him a reliable guarantor of freedom than any other kind of regime. So the problem of politics, you might say, in age of democracy, is the problem of how to control the sovereignty of the people. For Rousseau, you remember, that was never really seriously a problem. The general will, he says, cannot err, the people when they are ruling in their collective capacities cannot be wrong but Tocqueville was less certain about this, whether or not the people, in their collective capacity as sovereign, are an infallible guide. The question is, what can be done about that? In aristocratic ages, you might say, the answer was simple. Tocqueville believed that in aristocratic times there were always countervailing centers of power. Kings, no matter how powerful, always had to contend with fractious and warlike nobilities but, again, who or what can exercise that countervailing power in a world where the people in their collective capacity have, to repeat, become the king? Who or what has the power to check the popular will or the general will? This is the problem, how to check democratic power. This is the problem that Tocqueville's new political science, what he boasts in the introduction to the book, is a new political science for a world itself quite new. This is the problem that he sets out to answer. And to this extent I would also say we, are all Tocqueville's children. We are all disciples of Tocqueville insofar as our political science continues to deal with the problem of the guidance and control of democratic government, how to, you might say, combine popular government with political wisdom. How to do that remains a problem, you might say, akin to squaring the circle but it remains the fundamental problem for democracies, how to combine popular rule with political wisdom. That was really what Tocqueville was concerned about. But before going on, let me ask, who was Alexis de Tocqueville? Let me tell you something about him. Tocqueville was born in 1805 into a Norman family from the north of France, from Normandy, with an ancient lineage. The Tocqueville estate still stands today and is owned still in the hands of members of the Tocqueville family. I know because I visited it a couple of summers ago and met his heirs although they are not actually the heirs of him directly. Tocqueville and his wife had no children. They belonged to one of the brothers' side of the family but absolutely charming people, exactly what you'd think French aristocrats would be like, all charm and grace, wonderful hosts. And Tocqueville was deeply attached to his ancestral home. 
In a letter from 1828, one of my favorite Tocqueville letters, he wrote to a friend after returning from a trip away: "Here I am, finally at Tocqueville," referring to the home simply by the family name. "I am finally at Tocqueville in my family's old, broken down ruin. At a league's distance I can see the harbor where William the Conqueror set sail for England. I am surrounded by Normans whose names figure among the conquerors. All of this, I have to confess, tickles my heart." So he comes from a line of people who trace their ancestry back to the Norman conquest and have been in that part of France for centuries. In fact, the Tocqueville home is a short drive away from the Normandy beaches where the D-Day invasion took place during World War II. It's a miracle that the home still survives. Tocqueville's parents had been arrested during the French Revolution and were held in prison for almost a year; only the fall of Robespierre in 1794 saved them from execution. The young Tocqueville was born under the Napoleonic dynasty and spent his formative years, his adolescence and his school years, in the most conservative, if not to say reactionary, circles of post-revolutionary France. Tocqueville studied law in Paris and during this time he made the acquaintance and friendship of another young aristocrat by the name of Gustave de Beaumont. And in 1830, for reasons that are not altogether clear, when he was 25 or so, the two men received a commission from the new government of King Louis Philippe to go to the United States to study the prison system there. The trip to the U.S. was occasioned by a grant, you might say a fellowship, to study the American prison system. Tocqueville's journey to America, which has been extensively documented, lasted for just under a year, from May 1831 to February 1832, and during that time he traveled as far north as New England, south to New Orleans--yes, he was in New Orleans--and west to the outer banks of Lake Michigan. The result of this visit was, of course, the two large volumes that he called Democracy in America, Démocratie en Amérique. The first volume appeared four years or so after his trip, in 1835, when its author was only 30 years old. The second volume appeared five years later, in 1840, and both of those volumes are contained within the single volume that you have. Tocqueville's trip has been much studied and much admired. Even in very recent times, a French philosopher by the name of Bernard Henri Levy came over and, though he didn't exactly follow Tocqueville's journey, traveled throughout America--a kind of Frenchman's guide, a sort of Borat's America almost, going to Las Vegas and evangelical churches and all of this stuff--and wrote a very interesting book called American Vertigo. The most charitable thing I can remark is that he was no Tocqueville but, leaving that aside, it was an admirable effort. Democracy in America, to put it simply, is the most important work about democracy that you will ever read. To compound the irony, the most famous book on American democracy was written by a French aristocrat who might have been expected to be deeply foreign, if not hostile, to the manners, customs, and habits of a democratic society. And from the time of its first publication in 1835, the book was hailed as a masterpiece. John Stuart Mill called the book a masterpiece that has, he says, at once taken its rank among the most remarkable productions of our time.
Tocqueville has come to take his place alongside Washington, Jefferson, and Madison, almost as if he were an honorary American. And, as if this were not enough, a new translation of the book was recently inducted into the prestigious Library of America series, which seems to put the stamp of naturalization on a book written in French for Frenchmen; and yet it is part of the prestigious Library of America. As Tocqueville might have said, go figure. I don't know how to say that in French actually. But there is a textbook image of Tocqueville according to which he came to America as a kind of blank slate and the experience of American democracy had a profoundly transformative influence over the young aristocrat. Nothing, I would suggest to you, could be further from the truth. In a letter to his best friend, a man named Louis de Kergolay, whose home, whose estate, is actually directly next door to the Tocqueville estate--in a letter to Kergolay written just before the publication of the first volume of the Democracy in 1835, Tocqueville describes his purpose in writing his book in these terms. Let me read from his letter. Tocqueville writes, "It is not without having carefully reflected that I decided to write the book I am just now publishing. I do not hide from myself what is annoying in my position. It is bound to attract active sympathy from no one. Some will find that at bottom I do not like democracy and am severe toward it, but others will think I favor its development immoderately. It would be most fortunate for me if the book were not read, and that is a piece of good fortune that may perhaps soon come to pass. I know all that, but here is my response. Nearly 10 years ago, I was already thinking about part of the things that I have just now set forth. I was in America only to become clear on this point. The penitentiary system was a pretext." Two points, I think, bear comment about this remarkable statement of his purpose to his friend Kergolay. First, Tocqueville indicates that his idea for the book had already, as he says, begun to germinate years before his trip to America. He says the penitentiary system, the penitentiary project for which he was sent over, was only a pretext. He had already begun to speculate on these things, he says, nearly 10 years before the book appeared. Now, if you do the math, when you consider that he was 30 years old in 1835 when the book's first volume was published and he said he was speculating on these things already 10 years before, it would seem that the germ of the idea for the book, the germ cell for the book, had occurred to Tocqueville when he was only 20 years old, that is to say, about the age that most of you are here. And he went to America only to confirm what he had begun to suspect when he was at the age of a contemporary undergraduate. Think of that. Get your idea now. Get it quickly. Then maybe you can write a famous book by the time you are 30. I have to tell you it is way, way beyond that stage for me. Hobbes, however, did not write his masterpiece until he was 63, so there's still hope for some of us. The second point I would make about that letter is that it is also clear that Tocqueville was writing his book not for the benefit of Americans, who you will discover he thought had little taste for philosophy, but for Frenchmen.
In particular, he was hoping to persuade his fellow countrymen, who were still devoted to the restoration of the monarchy, that the democratic social revolution that he had witnessed in America represented also the future of France. If John Locke had said in his Second Treatise that "in the beginning all the world was America," Tocqueville's point appears to be that in the future all the world will be America. His attitude towards what he saw, or what we would perhaps today call Americanization, democratization, was one of skepticism mixed with fear. "I confess," he writes, "that in America I saw more than America. I sought there an image of democracy itself, of its penchants, its character, its prejudices, its passions. I wanted to become acquainted with it if only to know at least what we ought to hope or fear from it." And that sentence is so typical of Tocqueville, the way he piles on the descriptive labels: "its penchants, its character, its prejudices, its passions." So there are embedded in that two questions, two, you might say, subordinate questions that Tocqueville set out to answer. The first concerns the gradual replacement of the ancien régime--that is to say, the French term for the old aristocratic order based on privileges, hierarchy, deference, and inequality--with a new democratic society based on equality. How did this happen and what brought it about, this immense social and political transformation from the old regime, from an age of inequality to an increasing age of equality, a huge example of what we might call today regime change or regime transformation? How did that happen? And the second question--perhaps not explicitly asked, but nevertheless a question on virtually every page of Tocqueville's book--concerns the difference between the form democracy has taken in America and the form it took in France during the revolutionary period. Why, Tocqueville asks, has American democracy been relatively gentle or mild--those are two characteristic Tocquevillian terms--why has American democracy been what we might call today a liberal democracy, and why did democracy in France veer dangerously close towards terror and despotism during its revolution? That was the second question Tocqueville set out to answer. Tocqueville believed it to be virtually a providential fact of history that societies were becoming increasingly democratic, increasingly egalitarian, we might say. What is not certain, you could say, is what form democracy will take. Whether democracy will be compatible with liberty or whether it will issue in a new kind of despotism remains a question that only the statesmen of the future will be able to answer. And from these two questions, how did this transformation occur and what form will democracy take in the future, we can see that Tocqueville wrote his book as a political educator, that Tocqueville takes his place along with people like Plato, Aristotle, Machiavelli, and the others as a great political educator. He was more than a mere chronicler of American manners and customs; rather, he was an educator of future European statesmen, hoping to steer their countries between the shoals of revolution and reaction. How did Tocqueville attempt to accomplish this?
Let me try to talk for a moment about what he hoped to teach, because you have to admire the book and understand it as an immense handbook, almost, as it were, a handbook for the education of a democratic statesman, slightly larger than Machiavelli's Prince perhaps, but a handbook for statecraft nonetheless. What did he hope to teach? Near the end of the introduction--and I pay special attention to that fascinating introduction to the book; I don't mean the translator's introduction, I mean Tocqueville's own introduction--he writes the following sentence: "I think those who want to regard it," namely his book, "closely will find in the entire work a mother thought that, so to speak, links all the parts." A mother thought. His term is an idée mère, a mother idea. What is this mother idea or mother thought to which he refers there? The most likely candidate for this central idea is the idea of equality. The opening sentence of the book reads: "Among the new objects that attracted my attention during my stay in the United States, none struck my eye more vividly than the equality of conditions." The equality of conditions. What does he mean by that phrase? What is meant by "equality" here? Note, in the first instance, that Tocqueville speaks of equality as a social condition, an equality of conditions, not a form of government per se. This is in part an expression of what you might think of as Tocqueville's sociological imagination. Equality of conditions precedes democratic government. Equality of conditions, you might say, is the cause from which democratic governments arise. Equality of conditions was planted both in Europe and in America long before democratic governments arose in either place. Democratic governments, at least in France and America, are only as old as the American or French revolutions, but equality of conditions had been prepared by deep-rooted historical processes that began long before the dawn of the modern age. So equality of conditions refers to a social fact, not a form of government, one which precedes democratic government by long periods of time. And in the introduction to his book, again, Tocqueville gives a brief, I would say very brief, history of equality, taking it as far back as the heart of the medieval world, some 700 years, he says. Unlike Hobbes or Rousseau, he does not invoke a state of nature as a way of grounding equality. In fact, where Hobbes and Rousseau believed that we are by nature free and equal and that only over time were social hierarchies and inequalities introduced, Tocqueville argues exactly the opposite point of view. The historical process, so to speak, has been moving away from inequality and towards greater and greater equality of social conditions. The historical process, at least as Tocqueville traces it out here, has been a process of gradual equalization of social conditions. His equality is something like an historical force, something that has been working itself out in history over a vast stretch of time, and he often writes as if equality is not just one fact among others but is what he calls a generative fact from which everything else derives.
"As I studied America more and more," he writes, "I saw in the equality of conditions the generative fact from which each particular fact seems to issue," he says in the second paragraph of his introduction, "the generative fact from which each particular fact seemed to issue." Tocqueville writes here about equality as an historical fact that has come to acquire an almost providential force over time. And he uses this term "providence" here several times throughout the introduction. He uses that term not so much to describe God, as one might think, but rather to describe a sort of universal historical process that is working itself out, so to speak, even against the intentions of individual social and political actors. The gradual spread of the conditions of equality, he believes, has two characters or two characteristics of providence. It is universal, he says, and it always escapes the power of the individual to control. If Machiavelli believed we can control fortuna, we can control providence or chance half the time, Tocqueville seems to believe that the process of equality always escapes the powers of human control. It is the very power of equality that makes it seem to be an irresistible force. Rather than the product of the modern age alone, again, Tocqueville shows how the steady emergence of equality of conditions has been the central dynamic of European history over several hundreds of years. He frames the book within you might say a very large-scale sort of philosophy of history in which democracy, equality, and the gradual equalization of social conditions are the sort of central motifs. So it is in order to understand that process that Tocqueville turns to America of the 1830s. "There is only one country in the world," he says, "where the great social revolution I am speaking of seems nearly to have attained its natural limits," one country where this social revolution, the democratic transformations from the old aristocratic order to the new democratic age, seems to have reached its natural limits, and that country is of course the United States. In this context, I think, it is very revealing that he chose to call his book Democracy in America and not simply American democracy. Think about that choice of title for a moment. His point, I take it, is that it's not that democracy is a peculiarly American phenomenon, far from it. His point is, when he's describing democracy in America, here is the form that the democratic revolution has taken in America. What form it will take elsewhere or it may take elsewhere is by no means predetermined. Democracy is not a settled or fixed condition. It is something more like a process and when we think of the way we speak today when we talk about democratization, the process of democratization, you can see that democracy seems to be less a settled or fixed or determinate kind of regime than a kind of process. It has the quality that Rousseau referred to in the Second Discourse as perfectibilité, perfectibility, that is to say an almost infinite elasticity and openness to change. Again, it is less a determinate political or social order than a continual work in progress and that is the way Tocqueville looks at America, in some ways, or looks at the future of democracy when he says, "I look at the place where it seems to have attained its natural limit" but what form it may take elsewhere is by no means to say that the form it takes in America is the form it will take anywhere else." 
Democracy is the regime that seems to be almost infinitely elastic in terms of its possibilities, and this I think is a profound and astute observation about the nature of democratic government. We do not know where the process of democratization will end any more today than Tocqueville did in his own time. It is a matter for statecraft and leadership and political thought. Again, will future democracies be liberal and freedom loving or will they be harsh and rebarbative? That is a question that we are now seeing very upfront and close in various parts of the world today that are undergoing their own very tempestuous transitions to democracy, and it remains very much an open question what form those democracies will take. That question is at least as important for us, if not more so, than it was for Tocqueville. What Tocqueville is sure about, however, is that the fate of America is in some way the fate of Europe and maybe for that matter the fate of the rest of the world. "It appears to me," he says, "beyond doubt that sooner or later we shall arrive like the Americans at an almost complete equality of conditions." He says this in the introduction, speaking to his French audience, a shocking statement again to members of his class and of his family background: sooner or later we too will arrive at this complete equality. He seems to ask the reader, "Do you like what you see, what I describe?" What form democracy will take elsewhere will be very much dependent upon circumstance and statesmanship. Again, his is an attempt to educate statesmen for the future. Let me say a few words, and I will not finish this today but we'll continue it a bit on Wednesday, about the characteristics of American democracy, what constitutes, as it were, democracy American style as Tocqueville understood it. Given that, again, democracy has no single determinate form but is characterized by a considerable degree of elasticity and openness, what are the features that are constitutive of American democracy? Condensing a vast amount of material, especially from volume one of Democracy, there are three features that I want to emphasize about the unique characteristics of American democracy that lead, again, to making it mild, gentle, or what we might call a liberal democracy. These are: local government, civil associations, and what Tocqueville calls the spirit of religion, and I want to talk about each of these three in turn. I'll probably only talk about the first one here, local government, one of the parts of Tocqueville's book for which he is most famous. The first and, in many respects, most fundamental feature of American democracy is the importance that Tocqueville attributes to local government and local institutions, the importance of localism, of local democracies, and, you might say, the spirit that emanates from them, which is the key to the whole. The cradle of democracy is to be found in what Tocqueville calls the commune or what in our translation is called the township, the township democracy. "It is nonetheless in the township," he writes, "that the force of free peoples resides. The institutions of a township are to freedom what primary schools are to science. They put it within the reach of the people." Does that sound at all familiar? I think it should, in some respects.
Tocqueville's description of the New England township, put within the reach of all the people, clearly demonstrates the influence on Tocqueville of Rousseau's account of the general will in the Social Contract. Right? It is the people organizing, legislating, and deliberating over their common interests that is the core of liberty. Tocqueville very much views the American experience of local democracy through lenses shaped or crafted by Rousseau, and this is hardly fortuitous. In a famous letter to his friend Kergorlay, Tocqueville admits that Rousseau was one of three writers with whom he spent some time every day. He read Rousseau every day, the other two being Montesquieu and Pascal, but it was Rousseau more than any other figure who, again, helped him understand the democratic experience and particularly this experience of the township. Yet, in some ways, Tocqueville combines his Rousseauian reading of the township with a kind of Aristotelian twist. The township, he continues in that same passage, is the sole association that is so much in nature that wherever men are gathered, a township forms of itself. That term "in nature," it is the sole association so much in nature, should alert you to a kind of Aristotelianism in what Tocqueville is saying. The township is here said to be a product of nature. It eludes, he writes, the effort of man. The township exists by nature but its existence is far from being guaranteed. It is fragile and it is uncertain. It is continually threatened by invasions, not necessarily by foreign powers but by larger forms of government, state and federal government. The township is continually threatened by federal and national authority. And Tocqueville adds, with a definite hint of Rousseau, that the more enlightened the people are, the more difficult it is for them to retain the spirit of the township. Think of that. The more enlightened they are. The township relies on a certain spirit of sturdy and steady local habits, not necessarily enlightened opinion. That spirit of local freedom, again, goes hand in hand with a kind of rustic, even primitive manners and customs that clearly Rousseau would have admired, and for this reason he laments that the spirit of the township no longer exists in Europe, where the process of political centralization and the progress of enlightenment have virtually destroyed the conditions for local self-government. I'm going to end on that note, and Wednesday we're going to show a little movie again, just a very, very short clip which will illustrate the theme of civil associations in democracy, and we'll go on to talk about religion and then some other parts of Tocqueville.
8. The Mixed Regime and the Rule of Law: Aristotle's Politics, IV
Professor Steven Smith: Okay, where are we? Today we're going to study what you might call Aristotle's comparative politics, focusing on the idea of the regime. This is the theme that, you remember, on the opening day I said was really the central concept or the leading thread of this course, and it's in books III through VI of Aristotle's Politics that he develops his idea of the regime and regime politics. Book I, that we spoke about last time, really in a way tells us something about, you might say, almost the metaphysics of Aristotle's politics. Today Aristotle speaks more empirically, more politically, about what a regime is. His idea of the regime, politeia, again the same word that was used for the title of Plato's Republic, is the centerpiece of Aristotle's Politics, literally. It occupies the middle three books, books III through VI. These books are difficult in many ways; they're complicated. They're not everybody's favorite part of the book, but they are my favorite part because they tell us more precisely than anywhere else how Aristotle understands the nature of politics, and that after all is what we are most interested in. A regime refers to the formal enumeration of rights and duties within a community, but it also addresses something closer to what we would call the way of life or the culture of a people: their distinctive customs, manners, laws, habits, moral dispositions and sentiments. Aristotle's constitutional theorizing begins by asking a simple question. What is the identity of a city? What gives it its identity and enduring existence over time? His answer is the regime; the regime is what gives a people and a city its identity. Aristotle distinguishes between what he calls the matter and the form of the regime. Let me examine both of these in turn. The matter, the substance, the material basis of a regime concerns its citizen body, that is to say, the character of those who constitute a city, and here he rejects a number of alternatives for what constitutes a citizen body. He rejects the idea that the city is defined simply by a group of people who inhabit a common territory, the same space as it were. The identity of a polis, he writes, is not constituted by its walls. That is to say, it is not constituted by geography alone, and similarly, he rejects the idea that a regime can be understood as a defensive alliance against invasion by others. In our terms, for example, NATO, a purely military or defensive alliance, would not be a regime. Finally, he denies that a regime exists whenever a number of people come together to establish commercial relations with one another; organizations like NAFTA or the WTO, the World Trade Organization, do not a regime make. A regime cannot be understood simply as a commercial alliance. What is a regime then? "It is evident," Aristotle says, "that a city is not a partnership in a location, or for the sake of not committing injustice against one another, or for transacting business." So what is a citizen body? The citizens who constitute a regime, he tells us, do more than occupy a common space; they are held together, according to Aristotle, by bonds of common affection. It is affection, loyalty and friendship that make up a regime. This sort of thing, he says, this political partnership, is the work of affection, philia is his word. "Affection is the intentional choice of living together." 1280a, if anyone's interested.
"It is the intentional choice of living together." Friendship, he writes, "is the greatest of good things for cities, for when people feel affection for each other they are less likely to fall into conflict." But what kind of friendship is he talking about? Is it the kind of friendship that you feel for your best friend, or for your parents or siblings? What kind of a friendship are these bonds of affection, that he says hold the city together and that make it a regime? Political friendships, he tells us, are not the kind of thing that require us to forego our own individual identities in a way that one might find in passionate relations of love, right? Rather, they presuppose relations, that is to say political relations, not between lovers or even best friends of some kind, but between civic partners who may in fact be intensely rivalrous and competitive with one another for positions of political office and honor. Civic friendship, civic philia is in other words not without a strong element of what might be thought of as sibling rivalry in which each citizen strives to outdo the others for the sake of the civic good. Many of you have siblings and know a little bit about what sibling rivalry is like. Siblings, as everyone knows, may be the best of friends, but this does not exclude strong elements of competition, rivalry, and even conflict for the attention of the parents, and fellow citizens, for Aristotle, are like siblings, each competing with one another for the esteem, the affection, and the recognition of the city that serves for them as a kind of surrogate parent. That is the way that Aristotle understands a civic body, a citizen body. So that when he says that citizens are held together by ties of common affection he means something very specific. The civic bond is more than an aggregate of mere self-interest or rational calculation as was going to be defended by someone like Thomas Hobbes or by most of today's modern economists who believe that society can be understood simply as a series of rational transactions between buyers and sellers of different goods and that can be modeled along some kind of game theoretic lines. Aristotle denies this, explicitly denies this. He seems to have known something about the modern economic theory of society long before modern economics was even developed. But again, when Aristotle speaks of the kinds of affection that hold a citizen body together, he does not mean anything like the bonds of personal intimacy that characterize private friendships. What he means, when speaking about civic affection, is more like the bonds of loyalty, camaraderie that hold together members of a team or a club. These are more than, again, ties of mutual convenience. They require loyalty, trust, what social scientists today sometimes call social capital, that successful societies require social capital. A distinguished political scientist at another university, I will not mention its name here, at another university, has spoken about the importance of social capital or trust as a sort of basic relation, the basic component of a healthy democracy. Aristotle knew that, he didn't use a kind of ugly social scientific word like social capital; rather he spoke about civic friendship and philia. The political partnership he says must therefore be regarded as being for the sake of noble actions and not just for the sake of living together. 
The city, as he likes to say, or the regime, exists not merely for the sake of life but for what he understands to be the good life, the life of friendship, the life, again, of competitive relations for positions of honor and office. So we can say that a regime is in the first instance constituted by its citizen body. Citizens are those who share a common way of life. The citizen in an unqualified sense, Aristotle writes, is defined by no other thing so much as sharing in decision and office. Or, as he puts it a little bit later, whoever is entitled to participate in an office involving deliberation or decision-making is a citizen of the city. Listen to the words he uses there in describing a citizen. A citizen is one who takes a share in decision and office, who participates in deliberation and decision-making. A citizen is one, therefore, who not only enjoys the protection of the law, who is not merely, you might say, a passive beneficiary of the protection of society and its laws, but is one who takes a share in shaping the laws and who participates in political rule and deliberation. Aristotle even notes, you probably observed, that his definition of the citizen, he says, is most appropriate to citizens of a democracy, where in his famous formulation everyone knows how to rule and be ruled in turn. It is this reflection on the character of the citizen that leads him to wonder whether the good citizen and the good human being are one and the same. Can a person be both, as it were, a good man, a good person, and a good citizen? A famous discussion in Aristotle's book; Aristotle's answer to this is perhaps deliberately obscure. The good citizen, he tells us, is still relative to the regime. That is to say, the good citizen of a democracy would not necessarily be the same person, or the same kind of person, as the good citizen of a monarchy or an aristocracy. Citizen virtue is relative, or we might say, regime relative. Only in the best regime, he says, will the good citizen and the good human being be the same. But what is the best regime? At least at this point he has not told us. The point he's trying to make is that there are several kinds of regimes and therefore several kinds of citizenship appropriate to them. Each regime is constituted by its matter, that is to say, by its citizen body as we've been talking about, but also now by its form, by its formal structures. That is to say, every regime will also be a set of institutions and formal structures that give shape to its citizens. Regimes or constitutions, you might say, are forms or formalities that determine how power is shared and distributed among citizens. Every regime is an answer, consciously or not, to the oldest political question of all: who governs? Who should govern? Every regime is an answer to that question because every regime sets forward a way of formally distributing powers and offices among its citizen body. So we move now from the matter of the regime, that is, what constitutes its citizens and its citizen body, to the question of the form of the regime, its forms, its formalities, its structures and institutions, you might say. Entirely too much of modern political science is focused simply on the forms and formalities of political life, not enough, in my opinion, on questions of the citizen body and what constitutes the character, or the virtue in Aristotle's terms, of its citizens.
But nevertheless, Aristotle gives extraordinary importance and attention to the forms or formalities that make up a regime. What does he mean by that? Aristotle defines the strictly formal criteria of a politeia twice in his Politics, and I'm sure you noted both times where they appeared. Yes. Book III, chapter 6, famous definition: "The regime," he says, "is an arrangement of a city with respect to its offices, particularly the one who has the authority over all matters. For what has authority in the city is everywhere that governing body, and the governing body is the regime." The regime is an arrangement of a city, he says, with respect to its offices, and every city will have a governing body, that governing body being the regime. The second definition appears at the beginning of Book IV, chapter 1. "For a regime," he writes, "is an arrangement in cities connected with offices, establishing the manner in which they have been distributed, what the authoritative element of the regime is, and what the end of the partnership is in each case," a similar but slightly different definition of what constitutes the formal structure of regime politics. But from these two definitions, appearing in Book III, chapter 6 and Book IV, chapter 1, we learn a number of important things. First, to repeat, a regime concerns the manner in which power is divided or distributed in a community. This is what Aristotle means when he uses the phrase "an arrangement of a city with respect to its offices." In other words, every regime will be based on some kind of judgment of how power should be distributed to the one, to the few, or to the many, to use the Aristotelian categories of political rule, or to some mixture of those three classes that constitute every city. In every regime one of these groups, he says, will be the dominant class, the ruling body, as he says in that definition, and that ruling body will in turn define the nature of the regime. But Aristotle tells us something more than this. His regime typology, that is to say, his division of regimes into the rule of the one, the few, and the many, is based not only on how powers are distributed in a purely factual way; he also distinguishes between regimes that are well ordered, well governed, and those that are corrupt. What does he mean by this distinction? Aristotle's distinction seems to be not only empirical, again, based on the factual distribution of powers. It seems to have what we might call today a normative component to it; it makes a distinction or a judgment between the well-ordered and the deviant regimes, the corrupt regimes. On the one side, he tells us, the well-ordered regimes are monarchy, aristocracy, and what he calls polity, rule of the one, the few, and the many, and on the corrupt side he describes tyranny, oligarchy, and democracy, also rule by the one, the few, and the many. But what criteria, we want to know, does he use to distinguish between these regimes in this, as it were, six-fold classification? How does he distinguish the well-ordered regimes from the corrupt regimes? Here is where Aristotle's analysis gets, in some ways, maddeningly tricky, because of his general reluctance to condemn any regime out of hand.
If you were to read more than I assigned for you in class, if you were to read through all of Book VI for example, you would find Aristotle not only giving advice to democrats and democracies and other regimes on how to preserve themselves; you would find a lengthy description of how tyrants should moderate themselves, of how tyrants learn to preserve and defend their own regime. It seems almost as if, living before the incarnation of pure evil in the twentieth century with the rise of modern totalitarianisms, Aristotle thought that no regime was so bad, no regime so devoid of goodness, that its preservation was not worth at least some effort. Think of that. Rather, in many ways, he provides reasoned arguments for the strengths and weaknesses of several different regime types. Let's consider the one that's closest to our own, democracy; let's consider what Aristotle has to tell us about that regime. In fact, it would be an interesting question for people to consider: how would Aristotle confront, what would his analysis be of, a regime like Hitler's Germany, Stalin's Russia, the Iran of Khomeini, regimes that are clearly tyrannies but that seem in some way to go beyond the tyrannies that Aristotle spoke about? What kind of advice, what would he have to say about them? Anyway, let's think about democracy. Interestingly, we find Aristotle defending democracy on the grounds that it may contain collectively greater wisdom than a regime ruled by the one or the few. In Book III, chapter 11, for example, he writes, "For because they are many," that is to say the citizen body, the ruling body of the democracy, "each can have a part of virtue and prudence, and on their joining together, the multitude, with its many feet and hands and having many senses, becomes like a single human being, and so also with respect to character and mind." Think of that: the people in a democracy, he says, coming together, uniting together, become like a single human being with many hands and feet, and with greater character and mind even than any single individual. And then, in the same text, we also hear Aristotle praising the practice of ostracism, that is to say exiling, banishing those individuals deemed to be pre-eminent in any particular virtue or quality. He makes a similar point in Book III, chapter 15, in describing the process of democratic deliberation as a superior means of arriving at decisions. He compares it to a potluck dinner; any one of them, he says, that is to say any one of the citizens, taken singly is perhaps inferior in comparison to the best. But the city is made up of many persons, just as a feast to which many contribute is finer, is better, than a single and simple one, and on this account a crowd also judges many matters better than a single person. Furthermore, what is many, he says, is more incorruptible; like a greater amount of water, the many is more incorruptible than the few. So he gives there a powerful argument in defense of democracy, like a potluck dinner; each individual cook may not be as good as the best chef, but many taken together will provide many more dishes and much more variety, for a variety of tastes, than does a single chef. He says, furthermore, a crowd, the many, is more incorruptible than the few.
"Incorruptible," here, I take it in a kind of ordinary sense of the term: less susceptible to bribery; you can't bribe a lot of people in the way that you can a single individual. Are Aristotle's views on democracy correct here in his analysis? Do in fact many chefs make for a better dinner than a single chef? Well, I don't know, would you rather have dinner at the Union League with one chef, a master chef, or would you rather have dinner with a bunch of your friends, each providing some piece of the dinner? Well, it's an interesting argument; it's open to debate anyway. Yet at the same time as Aristotle is seen defending democracy, providing reasons and many sensible arguments for democratic regimes, you find him, in the same section of the book, providing a defense of kingship and the rule of the one. In Book III, chapter 16, he considers the case of the king who acts in all things according to his own will. Sounds like an absolute monarch of some kind; this is the part of Aristotle's Politics that seems closest in a way to the idea of a Platonic philosopher-king, a king who rules without law and rules for the good of all, simply on the basis of his own superiority. Aristotle coins a term for this kind of king over all; he calls it the pambasileia, basileia being the Greek word for king, like the name Basil, and pan meaning universal: pambasileia, the universal king, the king of all. Aristotle does not rule out the possibility of such a person emerging, a person of what he calls excessive virtue, almost hyperbolic excellence, he says, who stands so far above the rest as to deserve to be the natural ruler over all. But how, we want to know, does Aristotle reconcile his account of the pambasileia, the king over all, with his earlier emphasis upon democratic deliberation and shared rule? The citizen, recall, is one who takes turns ruling and being ruled. When readers look at Aristotle's account of kingship, and particularly this notion of the pambasileia, the king over all, the suggestion must at least occur that there is a hidden Alexandrian or Macedonian streak to Aristotle's political thinking that owes more to his native Macedon than to his adopted Athens, the idea of universal kingship. Think of Alexander the Great later on. And in fact, in one of my favorite passages in the book, which you will read for next time, I cannot resist quoting already a passage from Book VII, near the end of the book, Book VII, chapter 7, where Aristotle writes as follows. He writes, "The nations in cold locations, particularly in Europe, are filled with spiritedness," there is that Platonic word again, thumos, are filled with thumos, "but lacking in discursive thought," lacking in the deliberative element in other words. Hence, they remain free because they're thumotic, but they lack political governance. "Those in Asia, on the other hand," he writes, thinking probably here of places like Egypt and Persia, "have souls endowed with discursive thought but lack spiritedness," lack thumos; "hence they remain ruled and enslaved." But then he goes on to say, "The stock of Greeks shares in both, just as it holds the middle in terms of location. For it," that is to say the Greeks, "is both spirited and endowed with deliberative thought, and hence remains free and governs itself in the best manner and at the same time is capable of ruling all should it obtain a single regime."
These Greeks are capable of ruling all, he says. All? Who is all? What does he mean by the all here? The rest of the world? They are capable, it seems, of attaining a single hegemony, a single regime, if in fact circumstances developed. So here is a passage in which Aristotle clearly seems to be pointing to the possibility of a kind of universal monarchy under Greek rule, at least as a possibility. This passage, which I read at length, is important for a number of reasons; let me just try to explain. In the first place, it provides us with crucial information about Aristotle's thinking about the relations of impulse and reason, of thumos and reason, as, you might say, the determinants of human behavior. The crucial term in that passage is, again, this Platonic term spiritedness, which is both a cause of the human desire to rule and at the same time a cause of our desire to resist the domination of others. It is the unique source of human assertiveness and aggressiveness, as well as the source of resistance to the aggression of others. It's a very important psychological concept in understanding politics. And second, the passage tells us something about certain additional, in many ways extra-political, factors such as climate and geography as components in the development of political society. Apparently, qualities such as thumos and reason, thumos and deliberation, are not distributed equally and universally. He distinguishes between the peoples of the north, he calls them the Europeans, spirited and warlike but lacking discursive thought, and those of Persia and Egypt, possessing highly developed forms of intellectual knowledge, no doubt thinking about the development of things like science and mathematics in Egypt, but lacking this quality of thumos, which is so important for self-government, for self-rule. These things, he says, are at least in part determined by certain kinds of natural or geographic and climatic qualities. A modern reader of this passage who comes to mind is Montesquieu, in his famous book, The Spirit of the Laws, with its emphasis upon the way in which geography and climate and environment become in part determinants of the kind of political culture and political behavior exhibited by different peoples. Finally, this passage tells us that under the right circumstances, at least Aristotle suggests, the Greeks could exercise a kind of universal rule, if they chose. He does not rule out this possibility. Perhaps it testifies to his view that there are different kinds of regimes that may be appropriate to different situations. There is no one-size-fits-all model of political life, but good regimes may come in a variety of forms. There seems to be built into Aristotle's account of politics a certain flexibility, a certain latitude of discretion, that in some passages even seems to border on a kind of relativism. But nevertheless, Aristotle understands that such a person, this pambasileia, this person of superlative virtue, is not really to be expected. Politics is really a matter of dealing with less than best circumstances, which is perhaps one reason why Aristotle gives relatively little attention to the structure of the best regime. Such a regime, which I do want to talk about Wednesday, is something to be wished for, but it is not, for practical purposes, something to which he devotes a great deal of time.
Most regimes, for the most part, will be very imperfect mixtures of the few and the many, the rich and the poor. Most regimes, most politics for the most part, will be struggles between what he calls oligarchies and democracies: rule by the rich, oligarchies; rule by the poor, democracies. In that respect, Aristotle seems to add an economic or sociological category to the fundamentally political categories of few and many. The few are not simply defined quantitatively; they are defined, as it were, also sociologically, as the rich, and the many are defined by him as the poor. It was not, you have to see when you read these passages, it was not Karl Marx but rather Aristotle who first identified the importance of what we would call class struggle in politics. Every regime is in many ways a competition between classes. But where he differs from Marx is that he believes the fundamental form of competition between classes is not just for resources; it is not a struggle over who controls what Marx calls the means of production, it is a struggle over positions of honor, of status and position, positions of rule. The struggle is, in short, political struggle, not economic struggle. Every regime, he believes, will be in some ways a site of contestation with competing claims to justice, with competing claims to political rule, over who ought to rule. There is, in other words, not only a partisanship between regimes, but partisanship within regimes, where different groups of citizens, different classes of citizens, are activated by rivalrous and competing understandings of justice and the good. The democratic faction, he tells us, believes that because all are equal in some respects, they should be equal in all respects. The oligarchs, he tells us, believe that because people are unequal in some respects, they should be unequal in all respects. For Aristotle the point and purpose of political science is to mediate the causes of faction, those causes of faction that lead to revolution and civil war. Aristotle's statesman, Aristotle's statecraft, his political science, is a form of political mediation: how to bring peace to conflict-ridden situations. It is always surprising to me that many people think that Aristotle ignored or has no real theory of political conflict, when it seems to me conflict is built into the very structure of his understanding of a regime. And again, not just conflict between rival regimes but conflict built into the nature of what we would call domestic politics, different classes contending with different conceptions of justice. How can the political scientist bring peace, bring moderation, to these deeply conflict-ridden situations? How does Aristotle propose to do this? He proposes a couple of remedies to offset the potentially warlike struggle between various factions. And the most important of these remedies is the rule of law. "Law ensures," he says, "the equal treatment of all citizens and prevents arbitrary rule at the hands of the one, the few, or the many." Law establishes what he says is a kind of impartiality, for law, he says, is impartiality. "One who asks the law to rule," he says in Book III, chapter 16, "is held to be asking God and intellect alone to rule, while one who asks man adds the beast. Desire is a thing of this sort, and spiritedness," thumos, "perverts rulers and the best men; hence law is intellect without appetite."
Even the best men," he says, "can be perverted by spiritedness. Law is the best hedge we have against the domination of partiality and desire." But this is not the end of the story. In fact, it is only the beginning. Aristotle raises the question, a very important question, whether the rule of law is to be referred to the rule of the best, the best individual. Typically again, he seems to answer the question from two different points of view, giving each perspective its due, its justice. He begins in many ways by appearing to defend Plato's view about the rule of the best individual. "The best regime," he says, "is not one based on written law." Law, and his reason seems to be something like this, law is at best a clumsy instrument, a clumsy tool because laws only deal with general matters and cannot deal with particular concrete situations. Furthermore, law seems to bind the hands of the statesmen and legislators who always have to be responding to new and unforeseen circumstances and yet at the same time Aristotle makes the case for law. The judgment of an individual, no matter how wise, is more corruptible whether due to passion or interest, or simply the fallibility of human reason than is law. He notes, as a practical matter, no one individual can oversee all things. Only a third party, in this case law, is capable of judging adequately. Again, he seems to give reasons and good reasons for both cases. So he, but he moves to question, should law be changed? Is law changeable? If so, how? And once again, he puts forward different arguments; in Book II, chapter 8, he compares law to other arts and sciences and suggests why sciences such as medicine and has exhibited progress, this should be true for law. The antiquity of a law alone is no justification for its usage. Aristotle seems to reject, you might say, Burkean conservatism long before the time. Antiquity or tradition alone is no justification, yet at the same time he seems to recognize that changes in law, even when the result is improvement, are dangerous. He writes, "It is a bad thing to habituate people to reckless dissolution of laws. The city will not be benefited as much from changing law as it will be harmed through being habituated to disobey the rulers." In other words he's saying, lawfulness, like every other virtue, is a habit, it is a habit of behavior, and the habit of destroying, disobeying even an unjust law will make people altogether lawless. This emphasis upon law is a constraint on human behavior. In many ways seems to introduce a strong element of conventionalism in Aristotle's thought. This is the view that justice is determined by laws, by customs, by traditions, that it is conventions, nomos in the broadest sense of the term that constitutes justice. As I indicated, there's also seems to be a certain degree of relativism associated with this since conventions vary from society to society. The standards of justice will seem to, again, be regime dependent and this seems to be entirely consistent with parts of Aristotle's anthropology. After all, if we are political animals by nature, then the standards of justice must derive from politics, a right that transcends society cannot be a right natural to man. Yet Aristotle's conception of our political nature seems to require standards of justice that are natural or right for us. Rule of law presupposes that there is a form of justice or right natural to us. But what is the Aristotelian standard of natural right or natural justice? 
Aristotle makes a surprising assertion; unfortunately, it's an assertion in a book you're not reading, the Nicomachean Ethics. In Book V, chapter 7, he says there that "all natural right is mutable or changeable, all standards of natural justice are changeable." And by this he means that natural right is revealed not in general propositions or universal maxims, as for example Immanuel Kant would argue later on, but in the concrete decisions of a community or its leaders about what is right or wrong. Natural right is mutable because different circumstances will require different kinds of decisions. So does this mean, then, that for Aristotle there are no universally valid standards of justice or right? That everything depends on circumstances, that justice, like the good citizen, is, as it were, regime-dependent? Is this not to fall into the boundless field of Machiavellianism that declares right and wrong to be entirely relative to circumstance, context-dependent? Is that what Aristotle is saying? Not at all. Aristotle emphasizes the mutable character of natural right in part to preserve the latitude, the freedom of action, required by the statesman. Every statesman must confront new and sometimes extreme situations that require inventiveness and creative action. And in such situations, where the very survival of the community may be at stake, we might call these emergency situations, the conscientious statesman must be able to respond appropriately. Nine-eleven, for example. A moral law that refused to allow the statesman to protect the community in times of crisis would not be a principle of natural right; it would be a suicide note. To a considerable degree, Aristotelian standards of natural right reside in the specific decisions, the concrete decisions, of the ablest statesmen; these cannot be determined in advance but must be allowed to emerge in response to new, and again, different and unforeseen situations. What is naturally right, what is right by nature in peacetime, will not be the same as what is naturally right or right by nature in times of war. What is right in normal situations will not be the same as what is right in states of emergency. The statesman in the Aristotelian sense is the one who seeks to return as quickly and efficiently as possible to the normal situation. This is what distinguishes Aristotle from Machiavelli and all those later thinkers who take their bearings from Machiavelli. I'm thinking of thinkers like Hobbes, like Carl Schmitt and Max Weber in the twentieth century. All of these thinkers take their bearings from the extreme situation, situations of civil war, of social collapse, of national crisis. The Aristotelian statesman will not be unduly affected by the occasional need to depart from the norm, whether this means, to take an American case, the suspension of habeas corpus, as Abraham Lincoln did during the Civil War, or the regrettable need to engage in domestic espionage. But in any case, the Aristotelian statesman's goal will be the restoration of the conditions of constitutional government and the rule of law as quickly and, again, as efficiently as possible. On that grim note, I think I'll let you go and we will conclude Aristotle next time.
10. New Modes and Orders: Machiavelli's The Prince, chaps. 1-12
Professor Steven Smith: Because it's a makeup class, we'll do something a little special. We're going to show a clip from a movie that I think is particularly appropriate. As we get into it, I'll tell you why. It's about a five-minute segment from the film called The Third Man. Has anyone ever seen it? Okay, maybe more than I thought. Just to set up the scene, for those of you who have seen it and more for those of you who haven't, this is a film that was made in 1948, called The Third Man, from a Graham Greene short story. It takes place in post-World War II Vienna. And the clip we're going to see is the most famous part of the movie. It is a conversation between two old friends. One of them, played by Orson Welles, is in a black market racket in Vienna and is making a living doing something very bad in the black market. In this scene, he's trying to convince an old school friend of his, played by Joseph Cotten, to join this thing. I should say the Orson Welles character has also faked his death, so no one except his immediate conspirators knows he's still alive. I should have said that. He's faked his death, and here he has a scene with his old school friend, played by Joseph Cotten, and he's trying to convince him to come into his black market racket. So here comes The Third Man. Good scene. It's good because I think it conveys something of the flavor of Machiavelli's thought. In Italy, for 30 years under the Borgias--this is Machiavelli's time--bloodshed and murder, and Leonardo da Vinci and the Renaissance. In Switzerland, 500 years of peace and democracy. What did it produce? The cuckoo clock. I'm going to talk about that in a moment. I want to begin by talking about who was Machiavelli. How do we read The Prince? Machiavelli was a Florentine. To know that is to know virtually everything you need to know about him. I'm exaggerating, but I do so to make a point. Florence was a republic. It was a city-state. And Machiavelli spent a good deal of his adult life in the service of the republic. Living in Florence, the center of the Renaissance at its height, Machiavelli wished to do for politics what his contemporaries, like Leonardo da Vinci and Michelangelo, had done for art and sculpture. In other words, he hoped to revive in some way the spirit of the ancients, of antiquity, but to modify it in the light of his own experience. As he puts it in the dedication of his most famous book, he writes that this book, The Prince, "is a product of long experience with modern things and a continuous reading of the ancient ones." In Machiavelli, we have what we have come to call "modernity" given its first and most powerful expression. But Machiavelli was not an ordinary Florentine. He grew up under the rule of the Medici, that is to say, the first family of Florence, and lived to see them deposed by a Dominican friar by the name of Savonarola. Savonarola attempted to impose a kind of theocracy in Florence, a sort of Christian republic of virtue. But the Florentines, being what they were, rejected this idea and the rule of Savonarola was short-lived. In its place, a republic was re-established where Machiavelli occupied the office of secretary to the second chancery, a kind of diplomatic post which he held for 14 years, from 1498 to 1512. After the fall of the republic and the return of the Medici to princely rule there, Machiavelli was exiled from the city, from politics, to a small estate that he owned on the outskirts of the city. You can visit it today.
It was here, from a place of political exile, that he wrote his major works--The Prince, the Discourses on Livy, and The Art of War. It was from here, also, that he wrote voluminous letters to friends seeking knowledge about politics. Machiavelli was a kind of political junkie, you could say, interested in things happening in Italy and elsewhere. In one of these letters, a famous letter to his friend, a man named Francesco Vettori, he describes how he came to write his most famous book. I want to read a passage from that letter. It is also, I should say, on the basis of this letter that I ask people from time to time to remove their caps in the classroom, in the house of study. This is the way Machiavelli approached study. "When evening comes," he writes, "I return to my house and go to my study. At the door, I take off my clothes of the day covered with muck and dirt and I put on my regal and courtly garments. And decently reclothed, I enter the ancient courts of ancient men, where received by them lovingly, I feed on the food that alone is mine and that I was born for. There I am not ashamed to speak with them and to ask them the reasons for their actions and they, in their humanity, reply to me. And for the space of four hours every night, I feel no boredom. I forget every pain. I do not fear poverty and death does not frighten me. I deliver myself entirely unto them. And because Dante says that to have understood without retention does not make knowledge, I have noted what capital I have made from their conversations and have composed a little work on principalities, where I delve as deeply as I can into reflections on this subject, debating what a principality is, of what kinds they are, how they are acquired, how they are maintained, and why they are lost." So there Machiavelli gives us a sense of the seriousness with which he approached his subject, how he studied, and what it was he came to write. Let me just say from the beginning, The Prince is a deceptive book. What else would we expect from a man whose name has become synonymous with deception? It is a work, The Prince, that everybody has heard of, perhaps has some preconception about. I was checking the web yesterday and I found a new book about Machiavelli; these never fail to surprise me. This one is called The Suit: A Machiavellian Guide to Men's Fashion. Check it out. Who knows? Machiavelli's name is everywhere. It is applied to everything, from corporate executives now to men's fashion. Everybody knows or thinks they know what his work is about. His name, again, is synonymous with deception, treachery, cunning, deceit. Just look at the cover of your book. Look at his face. Look at his smile, really more of a smirk. He seems to be saying, "I know something you don't know." The difficulty with reading Machiavelli today is that we all think we already know what he knows, and that is false. Machiavelli was a revolutionary. In the preface to his largest book, the Discourses on Livy, he compares himself to Christopher Columbus for his discovery of what he calls "new modes and orders." What Columbus had done for geography, Machiavelli claims he will do for politics. That is to say, discover an entirely new continent, a new world, so to speak, the new world of Machiavelli. Machiavelli's new world, his new modes and orders, will require, clearly, a displacement of the earlier one, of the previous one.
When Machiavelli wrote, of course, the dominant form of political organization was the empire or, to speak more precisely, the Christian empire. The Holy Roman Empire, as it was known in the time of Machiavelli, was the successor to the ancient Roman state, the older Roman Empire. Both of these empires had aspired to a kind of universality. And this universality was given expression in Dante's famous treatise, De Monarchia, on monarchy, that set out a model for a universal Christian state, based on the unity and oneness of the human race under a Christian ruler. Machiavelli rejected this idea of the empire and harked back, instead, to the model of republican Rome. And there is much in his writing that recalls the sort of extraordinary virtues and capacities of the citizens of the ancient republican city-state. But you might say just as Machiavelli broke with the dominant model of Christian universalism, so too did he reject the ancient model of the small, autonomous republican state. He makes this clear in a famous passage at the beginning of chapter 15 of The Prince. And I just want to read that passage, as well. Here, Machiavelli says, "I depart from the orders of others." I depart from their modes, he says. "But since it is my intent to write something useful to whoever understands it, it has appeared to me more fitting to go directly to the effectual truth of things than to the imagination of it. And many have imagined"--one thinks here of Plato, perhaps, but also of Christianity--"many have imagined republics and principalities that have never been seen or known to exist in truth. For it is so far from how one lives to how one should live that he who lets go of what is done for what should be done learns his ruin rather than his preservation." In other words, no Platonic cities in speech. No Augustinian cities of God. We will only look, he says, to the effectual truth of things. The effectual truth of the matter, not the imagination of it or the utopia of it. That passage, the beginning of chapter 15, is often taken to be the essence of Machiavelli and realism, a kind of Realpolitik, as it were: his appeal from the "ought" to the "is," to take one's bearings, again, from the effectual truth of things. This seems to be, in many ways, the essence of his teaching. To be sure, Machiavelli focuses on key aspects of political reality which are often ignored by thinkers like Plato and Aristotle. Murders, conspiracies, coups d'état, these are the kinds of political phenomena he is interested in. He seems to be more interested in the evils that human beings do than the goods to which they aspire. You might even say that Machiavelli takes delight in demonstrating, much to our chagrin, the space between our lofty intentions and the actual consequences of our deeds. Yet, it would seem to me there is more to Machiavelli than the term "realism" connotes, although that is certainly important. In this passage, Machiavelli announces his break with, indeed his repudiation of, all those who have come before. He both replaces and reconfigures, according to his own lights, elements from both the Christian empire and the Roman republic, to create a new form of political organization distinctly his own, what we might call today the modern state. Machiavelli is the founder, the discoverer, the inventor of the modern state.
This modern, secular, sovereign state was refined and developed in the decades and centuries after Machiavelli in the writings of Hobbes, of Locke, of Rousseau, to say nothing of twentieth-century writers from both the right and the left--Max Weber, Carl Schmitt, and an Italian philosopher named Antonio Gramsci, who was the author of a book interestingly called The Modern Prince, based on Machiavelli himself. Machiavelli's state itself has universalist ambitions, in many ways, much like its Christian and Roman predecessors. But this is a state, he believes, that has now been liberated or emancipated from Christian and classical conceptions of virtue. The management of affairs is left to those people whom he calls princes, which in the Machiavellian usage designates a new kind of political founder or leader endowed with a new species of ambition, love of glory, and elements of prophetic authority that we might call charisma. But just what was the nature of the revolution contemplated by our founder, Machiavelli, the founder of modern political science? Consider, just for a moment, the title and dedication of the book. The Prince appears, on its surface, to be a most conventional work. It presents itself in the long tradition of what has come to be called the mirror of princes, books that give a kind of guide to the dos and don'ts of princely behavior. Fair enough. It seems to go back a long, long time. And the appearance of conventionality is supported by the opening words of the book in his dedicatory letter. The first words out of his mouth, the first line, are "it is customary," he says. It is a work intended to ingratiate himself to Lorenzo de Medici, the man to whom the work is dedicated, a customary prince, a traditional prince who has just regained his power. But look again. Consider the structure of the first three chapters. "All states, all dominions that have held and do hold empire over men are either republics or principalities," he says in the opening sentence of chapter 1. Having distinguished two, and only two, kinds of regimes, republics and principalities, as the only ones worth mentioning, he goes on to distinguish two kinds of principalities. There are hereditary principalities, like those currently run by Lorenzo, which acquire their authority through tradition and hereditary bloodlines. Then he says there are new princes and new principalities. Machiavelli then asserts that his book will deal only with principalities, leaving, he says, the discussion of republics for elsewhere, in what one assumes to be his Discourses on Livy, which he was already writing by this time. But then Machiavelli goes on to tell the reader that the exclusive subject of this book will be the new prince. In other words, not Lorenzo at all, but precisely princes who have achieved or will achieve their authority through their own guile, their own force, or their own virtù, to use the famous Machiavellian term that I want to talk about later. The true recipient of this book, in other words, must necessarily be the potential prince. That is to say, someone with sufficient political audacity not simply to receive authority from the past but to create his own authority. Maybe one could even say Machiavelli's prince is, in a way, the first truly self-made man. So what, then, is the character of this new prince and how does he differ from more conventional modes of political authority?
In one of the most famous chapters of the book, chapter 6, entitled "Of New Principalities that are Acquired Through One's Own Arms and Virtue"--there is that word again, virtù, one's own arms and virtue--Machiavelli discusses the character of the modern prince, the new prince. "A prudent man," he writes, "should always enter upon the paths beaten by great men and imitate those who have been most excellent, so that if his own virtue does not reach that far, it at least is in the odor of it." We at least come within, you might say, sniffing distance of their greatness. "One should do," he says, "what archers do when attempting to reach a distant target," namely, aim your bow high, knowing that the force of gravity will bring the arrow down. In other words, set your sights high, knowing you will probably fall short. So who are the greatest examples, he asks, of princely rule that the prudent man--interesting choice of words, "the prudent man"--should imitate? And here, Machiavelli gives a list of those heroic founders of peoples and states--Moses, Cyrus, Romulus, Theseus, and so on. "As one examines their actions and lives," he writes, "one does not see that they had anything else from fortune than the opportunity, which gave them the matter enabling them to introduce any form they pleased." Notice in that sentence, he uses those Aristotelian terms, "form" and "matter," that we spoke about in relation to the Aristotelian regime. "They had nothing else from fortune," he says, again, "than the opportunity," the occasion, that "gave them the matter enabling them to introduce any form they pleased." In short, Machiavelli claims these were founders who created, in a way, ex nihilo, out of nothing. They had only the occasion and a kind of formless matter upon which they could impose any form they chose. And they had, of course, the strength of mind, as well as the audacity and cunning, to take advantage of this situation. Such opportunities, he writes, such occasions, made these men successful, and their excellent virtue enabled the opportunity to be recognized. Hence, their fatherlands were ennobled by it and they became prosperous. They took advantage of their opportunity, seized their opportunity and imposed their own form upon it. And it is here that Machiavelli introduces his famous distinction between armed and unarmed prophets. "All the armed prophets," he says, "conquered and the unarmed ones were ruined." This seems to be and is, clearly, a kind of classic statement of sheer Machiavellian power politics. "Political power grows out of the barrel of a gun," as a famous twentieth-century Machiavellian once put it. The armed prophets conquer, the unarmed lose. But there seems to be more to it than this. Machiavelli compares the prince to a prophet. Why does he use that language? What is a prophet? The most obvious answer is a person to whom God speaks. Machiavelli's armed prophets may not be religious figures and they are not necessarily recipients of divine knowledge, but they seem to be, at least on his account, people of exceptional personal qualities that allow them to bring laws, to be law bringers, lawgivers, shapers of institutions and also reformers of the opinions that govern our lives. Machiavelli's armed prophet is more than just a gangster, like Orson Welles in that part. He is a teacher and a kind of educator as well. 
You might even think in your class, in your sections, how or in what ways Machiavelli's armed prophet differs both from Plato's philosopher king and from Aristotle's notion of the megalopsychos, the sort of magnanimous statesman. Although this kind of talk about armed prophets always winning is characteristic of Machiavelli--he likes this kind of tough talk--he clearly recognizes that there are exceptions to his rule about armed prophets. Who comes to mind most vividly? Who, in other words, is not present in Machiavelli's list of great prophets that one should imitate? Student: Jesus. Professor Steven Smith: Yes. Most obvious, perhaps, certainly to his contemporaries: Jesus, who triumphed through words and teaching alone. He had no troops. He had no arms. He established a religion, first a sect, you might say, then a religion, then eventually an empire, the Holy Roman Empire, that was established in the name of that teaching. Words may well be a powerful weapon, as powerful as a gun. Then you might say, "What is Machiavelli himself?" Who is Machiavelli but an archetypal unarmed prophet? He has no troops. He has no territory. He controls no real estate. He's been banished, yet he is clearly attempting to conquer, comparing himself to Columbus, to conquer in large part through the transformation of our understanding of good and evil, of virtue and vice. In other words, to make people obey you, you must first make them believe you. Machiavelli's prophetic prince, in other words, must have some of the qualities of a philosopher, as well as of a religious reformer trying to reshape and remold human opinion, especially opinion over, as we said, good and evil, just and unjust. What does this reformation, so to speak, or transformation consist of? We might even call this Machiavelli in the garden of good and evil, midnight in the garden of good and evil for Machiavelli. One point often made about Machiavelli is that he introduced a new kind of immoralism into politics. In that famous chapter, chapter 15, he says he sets out to teach the prince how not to be good. A striking formulation. He will teach the prince how not to be good. And in perhaps the most important book on Machiavelli ever written, the author of that book declared Machiavelli to be a teacher of evil. You might want to think about that. A teacher of evil. Is that what Machiavelli was? Questions of good and bad, virtue and vice, appear on virtually every page of The Prince. He is not simply a teacher of political pragmatism, of how to adjust means to fit the ends. He seems to be offering nothing short of a comprehensive revolution, a transformation, if you want to use the Nietzschean language, a "transvaluation" of our most basic vocabulary about good and evil. Machiavelli doesn't reject the idea of the good. Rather, he redefines it. He is continually speaking--in fact, I would suggest, on virtually every page of the book he is continually speaking--the language of virtue. His word "virtù" retains the Latin word for man, vir, and is perhaps best translated by our word "manliness." What distinguishes Machiavelli's use of this language of virtù, manliness, is that he seeks to locate it in certain extreme situations, such as political foundings, changes of regimes, and wars, both domestic and foreign. 
What distinguishes Machiavelli from his predecessors, in many ways, is his attempt to take the extraordinary situation, the extreme situation--again, the extremes of political founding, conspiracies, wars, coups--as the normal situation and then to make morality fit that extreme. His examples are typically drawn from situations of human beings or polities in extremes, where the very survival or independence of a society is at stake. In those situations, you might say, and only in those situations, is it permissible to violate the precepts of ordinary morality. In those situations one must learn, as he says, how not to be good, to violate the conventions and canons of ordinary morality. Machiavelli takes his bearings from these extreme states of emergency and, in his own way, seeks to normalize them, to present them as the normal condition of politics. Machiavelli's preference for these extreme situations expresses his belief that only in moments of great crisis, where the very existence of a state is at risk, does human nature truly reveal itself. We finally or fully understand what people are only in the most extreme situations. The paradox that, you might say, runs throughout all of Machiavelli's morality is that the very possibility of virtue grows out of and, in fact, is even dependent upon the context of chaos, violence, and disorder that always threatens the political world. Think of it. Think of many of our great political models or heroes. What would the Duke of Marlborough have been without Louis XIV? What would Washington have been without George III? What would Lincoln have been without the slave interest? What would Churchill have been without Hitler? In other words, his point is that good is only possible because of the prior existence of bad. Good is founded upon evil. And even the greatest goods, the founding and preservation of cities, often require murder. What was Romulus' murder of Remus or Cain's murder of Abel but the kind of murder that lies at the basis of the founding of cities and civilizations? One thinks, in a way, of Welles' line in The Third Man when he looks down from above and says, "If I gave you 20,000 pounds for every dot that stopped moving, would you really tell me to keep my money?" For Machiavelli, the founding of regimes requires that kind of cold and cruel calculation. Of course, it's being used in the movie just to support a criminal enterprise, not the founding of a city. We might investigate that as well. But Machiavelli does not deny that in ordinary times, in what we might call times of normal politics, the ordinary rules of justice prevail. He also shows, however, that normal politics is, itself, dependent upon extraordinary politics, periods of crisis, anarchy, instability, revolution, where the normal rules of the game are suspended. It is in these times, you might say, that individuals of extraordinary virtue and capacity, prophetic qualities, as he calls them in chapter 6, are most likely to emerge. While the Aristotelian statesman, just to make a contrast for a moment, is most likely to value stability and the means necessary to achieving it, the Machiavellian prince seeks war, because it is only, again, in the most extreme situations that one can prosper. Think about the lines again from the movie. "For 30 years under the Borgias: violence, murder, terror, bloodshed. But what did it produce? Greatness of an unprecedented type. Stability, democracy, brotherly love, peace. What does that produce? 
Mediocrity, the cuckoo clock." There might be a little more of Nietzsche suggested in that than Machiavelli, but I think the Machiavellian overtones are very evident. Consider just the following. Every child, every one of you, every one of us was brought up to know that one must never do wrong, even if good consequences are seen to follow. It is never right to give bad examples to others, even if one expects good to come from it. Yet, Machiavelli breaks these rules about not giving bad examples. Virtue is not associated with the classical conceptions of moderation, of justice, of self-control, or with the Christian virtues of faith, hope, and charity. Virtue means for him a kind of manly self-assertion, audacity, ruthlessness, a reliance on one's own arms and the calculated use of cruelty to achieve one's ends. The model of Machiavellian virtù is the Renaissance statesman par excellence, Cesare Borgia. It's very interesting that Orson Welles made a movie, not so often seen today, about Cesare Borgia. I want to leave you by reading one passage from The Prince, chapter 7, in which Machiavelli illustrates the kind of virtù Cesare represented and that he wants to recommend for those who follow him. "Once the duke"--that's Cesare himself--"once the duke had taken over the Romagna," an area outside of Florence, "he found it had been commanded by impotent lords who had been readier to despoil their subjects than to correct them and had given their subjects matter for disunion, not union. So Cesare put there," he says, "Messer Ramiro d'Orco, a cruel and ready man to whom he gave the fullest power." So Cesare set up a lieutenant of his to impose order on this area, and to him he delegated the fullest responsibility. "In short time," he goes on, "Ramiro reduced it to peace and unity with the very greatest reputation for himself. Then the duke judged that such excessive authority was not necessary, because he feared it might become hateful, and he set up a sort of civil court in the middle of the province with a most excellent president, where each city had its advocate. And because he knew that the past rigors had generated some hatred for Ramiro, to purge the spirits of that people and to gain them entirely to himself, he wished to show that if any cruelty had been committed, this had not come from him," from Cesare, "but from the harsh nature of his minister. And having seized this opportunity"--that language again, seized the occasion, seized this opportunity--"he had him placed one morning in the piazza in two pieces, with a piece of wood and a bloody knife beside him." He had him cut in two, the bloody knife and piece of wood beside him. "The ferocity of this spectacle," Machiavelli concludes, "left the people at once satisfied and stupefied." That, of course, is Machiavelli's virtù, princely virtue: what you do to leave the people satisfied and stupefied. What we might call today shock and awe. Okay, next week we'll continue with this learned man.
5. Philosophers and Kings: Plato's Republic, III-IV
Professor Steven Smith: Today Republic, Act 2. And what I want to do is continue with the account of the various figures, the various persons who populate, who inhabit this dialogue: who they are, what they represent, and how they contribute to the argument and the structure of the work as a whole. Last time, and I won't repeat this, last time we talked briefly at the end of class about Cephalus, and Socrates' treatment of Cephalus, the embodiment of convention, the embodiment of Athenian opinion, and the way in which Socrates, as it were, chases Cephalus out of the dialogue, out of the conversation. We never hear from him again. And the speakers are able, presumably, to pursue the audacious arguments that will appear in the rest of the book without the oversight of the head of the household, the embodiment of conventional opinion. And Socrates next pursues this discussion with the son of Cephalus, Polemarchus, the man who first had Socrates stopped in the Piraeus. Polemarchus is described as the heir of the argument as well as, to be sure, the heir of the family fortune. Polemarchus is what the Greeks would call a "gentleman." Let us just say he is a person willing to stand up for and defend his family and friends. I don't mean necessarily by a "gentleman" somebody who holds the door for others, and so on, but somebody who stands up for his family and friends in the way that he does. Unlike his father, however, Polemarchus shows himself concerned not just with the needs of the body, as Cephalus represented, but Polemarchus is concerned to defend the honor and safety of the polis. He accepts the view that justice is giving to each what is owed, but he interprets this to mean that justice means doing good to your friends and harm to your enemies. Justice, we might say, is a kind of loyalty, the kind of loyalty that we feel to members of a family, to members of our team, to fellow students of a residential college, and the kind of loyalty we feel to a place like Yale as opposed to all other places. That is to say, Polemarchus understands justice as a kind of patriotic sentiment that citizens of one city or one polis feel for one another in opposition to all other places. Justice is devotion to one's own. And one's own is the good for Polemarchus. One's own is the just. But Socrates challenges Polemarchus on the grounds that loyalty to a group, any group, cannot be a virtue in itself, and he trips Polemarchus up with a very, in many ways, familiar Socratic argument. "Do we ever make mistakes?" he asks Polemarchus. "Isn't the distinction between friend and enemy based on a kind of knowledge, on a perception of who is your friend and who is your enemy? Have we ever mistaken a friend for an enemy?" The answer seems to be, "Of course we have." We all know people who we thought to be our friends but found out were talking behind our backs, or were operating to deceive us in some way or another. Of course, it's happened to everyone. "So how can we say that justice means helping friends and harming enemies," Socrates asks, "when we may not even be sure who our friends and our enemies really are? Why should citizens of one state, namely one's own, have any moral priority over the citizens of another state when, again, we don't know them and we may well be mistaken in our assumption that they are enemies or friends? Isn't, in other words, such an unreflective attachment to one's own bound to result in injustice to others?" Socrates seems to be asking Polemarchus. 
Once again, in many ways, we see Socrates dissolving the bonds of the familiar. At no other point in the Republic, I think, do we see so clearly the tension between philosophical reflectiveness on the one hand and the sense of camaraderie, mutuality, and esprit de corps necessary for political life on the other. Socrates seems to dissolve those bonds of familiarity, loyalty, and attachment that we all have by saying to Polemarchus, "How do we know, how do we really know, the distinction between friend and enemy?" But Polemarchus seems to believe that a city can survive only with a vivid sense of what it is, of what, we might say, it stands for, and an equally vivid sense of what it is not, and who are its enemies. Isn't this essential for the survival of any state, of any city? To know who its friends and enemies are? Who its challengers are? Socrates' dissolution of that very framework challenges, it seems to me, the very possibility of political life by questioning the distinction between friend and enemy. Although Polemarchus, like his father, is reduced to silence, it is notable that his argument is not defeated. Later in the Republic you will see, not that much later even, Socrates will argue that the best city may be characterized by peace and harmony at home, but this will never be so for relations between states. This is why even the best city, even Kallipolis, will require, as he spends a great deal of time discussing, a warrior class, a class of what he calls "auxiliaries." War and the preparation for war is an intrinsic part of even the most just city. Even the Platonic just city will have to cultivate warrior citizens who are prepared to risk life in battle for the sake of their own city. So in many ways, it seems that Polemarchus' argument, while apparently refuted in Book I, is rehabilitated and re-emerges in its own way later in the dialogue. And we might want to think about this because it is an argument that was very important to an important twentieth-century political theorist by the name of Carl Schmitt, who made the distinction between what he called the friend and the enemy, you remember, central to his understanding of politics. This is an argument that comes from Polemarchus in Book I of the Republic. Polemarchus is dispatched in one way or another, and this creates the opportunity for the longest and in many ways most memorable exchange in Book I, and perhaps even the Republic as a whole: the exchange with Thrasymachus, who represents a far more difficult challenge in his own way than either of the first two speakers. In many ways, Thrasymachus could be seen as Socrates' alter ego in some way, his sort of evil twin. He is, how to put it? He is the Doctor Moriarty to Socrates' Sherlock Holmes. You know, the evil doppelganger in some way. Thrasymachus is a rival of Socrates in many respects; he also, like Socrates, is clearly a teacher. He is an educator. He claims to have a certain kind of knowledge of what justice is, and claims to be able to teach it to others. He is teaching, we will find out, a kind of hard-headed realism that expresses disgust at Polemarchus' talk about loyalty and friendship and the like. "Justice," he asserts, "is the interest of the stronger." Every polity of which we know is based upon a distinction between the rulers and the ruled. Justice consists of the rules, that is to say, that are made by and for the benefit of the ruling class. 
Justice is nothing more nor less than what benefits the rulers, the rulers who determine the laws of justice. Thrasymachus is, of course, even for us today, a familiar kind of person. He is the intellectual who enjoys bringing, you might say, the harsh and unremitting facts about human nature to light, who enjoys dispelling illusions and pretty beliefs. He's the one who probably would be the first to tell you there is no Santa Claus. He is that kind of hard-boiled realist. No matter how much we may dislike him in some ways, one has to admit also there may be a grain, if not more than a grain, of truth in what he seems to be saying. And what he seems to be saying is this: we are beings who are first and foremost dominated by a desire for power. This is what distinguishes, you might say, the true man, the real man, the alpha male, you might say, from the slave. Power and domination are all we truly care about. And when we get later in this semester to Thomas Hobbes, remember Thrasymachus. I'll just say that for now. Remember Thrasymachus when we get to Hobbes. Power and domination are all we care about. And what is true of individuals is also true for collective entities, collective nouns like states and cities. Every polity seeks its own advantage against others, making relations between states a condition of unremitting war of all against all. In the language, if I can switch to the language of modern economics, one could say that for Thrasymachus politics is a zero-sum game. There are winners and there are losers, and the more someone wins, the more someone else will lose. And the rules of justice are simply the laws set up by the winners of the game to protect and to promote their own interests. It didn't take Karl Marx to invent or to discover that insight, that the rules of justice are simply the rules of the ruling class. That comes straight out of Thrasymachus, Book I of the Republic. Well, how to respond? And again, Socrates challenges Thrasymachus with a variation of the argument that he used against Polemarchus. That is to say: do we ever make mistakes? It is not self-evident, not always intuitively obvious, what our interests are. If justice is truly in the interest of the stronger, doesn't that require some kind of knowledge, some kind of reflection on the part of those in power to know what is really and truly in their interest? People make mistakes, and it is very possible to make a mistake about your own interests. And of course, Thrasymachus has to acknowledge this, of course the rulers make mistakes, and he tries to invent an argument that if a ruler makes a mistake, he's not really a true ruler. The true ruler is the person who both acts on his own interest and, of course, knows what those interests are. But the point that he admits is all, in a sense, that Socrates needs: justice is not power alone, justice requires knowledge. Justice requires reflection. And that is, of course, at the core of the famous Socratic thesis that all virtue is a form of knowledge; all the virtues require knowledge and reflection at their basis. But much of the exchange with Thrasymachus turns on the problem of what kind of knowledge justice involves, if justice is a kind of knowledge. If justice equals self-interest and self-interest requires knowledge, well, what kind of knowledge is that? Thrasymachus contends that justice consists of the art of convincing people to obey the rules that are really in the interests of others, the interests of their rulers. 
Justice, in other words, for Thrasymachus is a kind of sucker's game: obeying the rules that really benefit others, largely because we fear the consequences of injustice. Justice is really something only respected by the weak, who are fearful of the consequences of injustice. Again, the true ruler, in some ways, is one, Thrasymachus believes, who has the courage to act unjustly for his own interest. The true ruler, he says, is one who is like a shepherd with a flock, but he rules not for the benefit of the flock but, of course, for his own interests, the good of the shepherd. Justice, like all knowledge, is really a form, again, of self-interest. And so one can ask, "Is Thrasymachus wrong to believe this?" And I realize I'm moving over this very quickly, but is Thrasymachus wrong to believe that? Socrates wins the argument in Book I with a kind of, you might even say, sleight of hand. Both he and Thrasymachus believe that justice is a virtue, but Socrates says, "What kind of virtue is it to deceive and fleece other people?" Thrasymachus is forced to admit that he believes the just person is a fool for obeying laws that are not beneficial to him. But the best life, Thrasymachus believes, is doing maximum injustice to others, doing whatever you like. And with that realization, we see a very dramatic moment in Book I, even in the book as a whole. Thrasymachus blushes. He blushes when he realizes that he has been defending the claim not that justice is a virtue, but that justice is really a form of weakness. Thrasymachus himself seems to be embarrassed by his defense of the tyrannical life, of the unjust life. The suggestion Plato seems to be making, by making Thrasymachus blush, is that despite all of his tough talk, he's not as tough as he appears to be, as he wants to think of himself as being. He's shamed by the fact that he has been defending injustice and the tyrannical way of life. And so it appears the three conversations end; Book I ends with uncertainty about what justice is. We have had three views, those of Cephalus, Polemarchus, and Thrasymachus. They have all been refuted, but no clear alternative seems to have emerged. Certainly Socrates has not really proposed an alternative to Thrasymachus in his exchange with him; he has only, as it were, forced Thrasymachus to see that the logic of his ideas, the logic of his argument that justice is in the interest of the stronger, is a defense of tyranny and a defense of the unjust way of life. So all of Book I is really a kind of warm-up for what follows in the rest of the book, where we find out, presumably, what justice is. Until that point, we have no reason to really give up on our current existing ideas about what justice is. And this is where the two most important figures of the Republic begin to make their voices heard. Those are Glaucon and Adeimantus. Glaucon tells Socrates that he is dissatisfied with the refutation of Thrasymachus, and so should we be. Thrasymachus has been shamed, he has been forced to see where the logic of his argument takes him, but that is not the same thing as being refuted. Thrasymachus is really, as it turns out, a kind of girly-man who is ashamed to be seen defending the unjust life. "But why should we be ashamed to praise injustice?" Glaucon challenges Socrates. "It's not enough to show that injustice is wrong," Glaucon says. "What we need is to hear why justice is good," or more precisely, to hear justice praised for itself. 
"Is there, in your opinion," Glaucon asks Socrates, "a kind of good that we would choose because we delight in it for its own sake?" 358A. Is there a kind of good that we delight in for its own sake? And this is where the rubber hits the road. Who is Glaucon? Glaucon and Adeimantus are the brothers of Plato, and other than their appearance in this book, there is no historical record left about them. But Plato has given us enough. In the first place, they are young aristocrats, and Glaucon's desire to hear justice praised for its own sake indicates something about his scale of values. It would be vulgar, he believes, to speak of justice, or any virtue in terms of material rewards or consequences. He does not need to hear justice praised for its benefit, he's indifferent to the consequences. Rather, he claims that he wants to hear justice defended the way that no one has ever defended it before. The brothers desire to hear justice praised for itself alone, and that seems to be expressive of their own freedom from mercenary motives and incentives. It reveals to us something about their idealism and a certain kind of loftiness of soul. And certainly the brothers, we find out, are not slouches. They are not slouches at all. Although it is easy to remember that later in the dialogue most of their contribution seems to be of the form of "Yes Socrates, no Socrates," they seem to be rather passive interlocutors. Their early challenges to Socrates show them to be potential philosophers. That is to say the kind of persons who might one day rule the city. Of the two, Glaucon seems to be the superior. He is described as the most courageous, which in that context means the most manly, the most virile, and later Socrates admits that he has always been full of wonder at the nature of the two brothers. And at 368a, he cites a line of poetry, you'll remember, written about them for their distinction in battle, they have been in war, they have been tested in war obviously. They are also, and we see this from their relationship between one another, and the way they speak to one another, they are also highly competitive, super achievers. A little bit like some of you perhaps. There is quite a bit of jousting between them that you need to be attentive too. And each proposes to Socrates a test that he will have to pass in order to prove the value of justice and the just life. Glaucon goes on to rehabilitate the argument of Thrasymachus in many ways, in a more vivid and a more expressive way than Thrasymachus did himself. Glaucon tells a story, you'll remember, a story that he modifies from the historian Herodotus, a story about a man named Gyges who possessed a magic ring that conferred on him the power of invisibility. Who has not wondered what we would do if we had this power, the power of invisibility? Gyges, in Glaucon's retelling of the story, Gyges uses this ring to murder the king and to sleep with his wife, and to set himself up as king. What would you do if you had this power, the power of this magic ring, where you could commit any crime, indulge any vice, commit any outrage and be sure you could always get away with it? Why if you could do that would you wish to be just at the same time, or wish to be just instead of that? This is the challenge that Glaucon poses to Socrates. Why would someone with absolute power and complete immunity to punishment, why would they prefer justice to injustice? "Tell me that Socrates," Glaucon asks. 
"If justice truly is something praiseworthy for itself alone, then Socrates should be able to provide an answer that will satisfy Glaucon's retelling of the story of Gyges, that is certainly a very tall order. And that is where the brother, Adeimantus, joins in. Adeimantus has a somewhat different set of concerns. He has heard justice praised his whole life from parents and from poets and from other authorities, but for the most part, he has only heard justice praised again for the benefits justice confers both in this life and the next. Honesty is the best policy, we've heard Cephalus being concerned about returning to others what you owe as a way of pleasing the gods in the afterlife, and Adeimantus rightly takes this kind of argument to mean that justice is simply a virtue for the weak, the lame and the unadventurous, if you were only concerned with the consequences. A real man does not fear the consequences of injustice. Rather, Adeimantus' concern, and he gives a very revealing image of what he takes justice to be, is with an image of self-guardianship, or self-control. He tells us at 367a that each would be his own god. In other words, we should not care what people say about us, but we should be prepared to develop qualities of self-containment, autonomy and independence from the influence that others can exercise over us. "How can I develop those qualities of self-guardianship or self-control?" he asks Socrates. And who has not felt that way before? The two brothers desire to hear justice praised for itself, Glaucon, and to live freely and independently, Adeimantus. And that shows to some degree I think, their own sense of alienation from their own society. Or if I can put the case for them slightly anachronistically, these are two sons of the upper bourgeois who feel degraded by the mendacity and hypocrisy of the world they see around them. And anyway, what person with any sensitivity to greatness has not felt this way at one time or another? The two are open to persuasion, to consider alternatives, perhaps even radical alternatives, to the society that has nurtured them. They are, to put it another way perhaps, not only potential rulers and potential philosophers, they may also be potential revolutionaries, and the remainder of the book is addressed to them and of course people like them. But the speeches of Glaucon and Adeimantus, you might say the circle around Socrates is effectively closed. He knows he will not be returning to Athens that evening, and he proposes to the two brothers and those listening to the conversation a kind of thought experiment that he hopes will work magic on the two. "Let us propose," he says, "to watch a city coming into being in speech." Let us create a city in speech. "It is easier," he says, "not to view justice microscopically in an individual, but rather let's view justice as it were through a magnifying glass." Let's view justice in the large sense. Let us view justice in a city in order to help us understand what it is in an individual. And this idea that the city is essentially analogous to the soul, that the city is like the soul, is the central metaphor around which the entire Republic is constructed. It seems to be presented entirely innocuously, no one in the dialogue objects to it, yet everything else follows from this idea that the city, the polis, is in the central respect like an individual, like the soul of an individual. 
What is Socrates trying to do here, and what is that metaphor, that central metaphor--what function does it serve within the work? To state the obvious, Socrates introduces this analogy to help the brothers better understand what justice is for an individual soul. The governance of the soul, Adeimantus' standard of self-control, must be like the governance of a city in some decisive respects. But in what respects? How is a city like a soul, and in what respect is self-governance, the control of one's passions and appetites, like the governance of a collective body? Consider the following example: when we say that so-and-so is typically American, or typically Taiwanese, for example, we mean that that person expresses certain traits of character and behavior that are broadly representative in some way of a cross-section of their countrymen. Is this a useful way to think? More specifically, what does it mean to say that an individual can be seen as magnified in his or her country, or that one's country is simply the collective expression of certain individual traits of character? That seems to be what Socrates is suggesting. Right, that's what he's getting at. One way of thinking about the metaphor of city and soul together is to think of it as a particular kind of causal hypothesis about the formation of both individual character and political institutions. This reading of the city/soul analogy as a kind of causal relation maintains the view that as individuals we both shape and determine the character of our societies, and that those societies in turn shape and determine individual character. The city and soul analogy could be seen, then, as an attempt to understand how societies reproduce themselves, and how they shape citizens who, again, in turn shape the societies they inhabit. That seems to be one way of making sense of the city/soul hypothesis, but again, it doesn't seem to answer the question in what way cities and individuals are alike. To take the American case, for example, does it mean that something like the presidency, the Congress, and the court can be discerned within the soul of every American citizen? That would be absurd to think, obviously. I mean, I think that would be absurd. Maybe you want to argue it and we could have a discussion. But it might mean that American democracy, or democracy of any kind, helps to produce a particular kind of democratic soul. Just as, you might say, the old regime in France, the old aristocratic society existing before the revolution, tended to produce a very different kind of soul, a very different kind of individual. Every regime will produce a distinctive kind of individual, and this individual will come to embody the dominant character traits of the particular regime. The remainder of the Republic is, again, devoted to crafting the regime that will produce a distinctive kind of human character, and that of course is why the book is a utopia. There has never been a regime in history that was so single-mindedly devoted to the end of producing that rarest and most difficult species of humanity called simply the philosopher. So, city and soul. That leads to our next topic, which I want to pursue for the remainder of the class: the reform of poetry and the arts. Socrates' city in speech proceeds through several stages. The first stage, proposed by Adeimantus, is the simple city, what he calls the city of utmost necessity. That is a city limited to the satisfaction of certain basic needs. 
The primitive or simple city, the city of utmost necessity, again expresses the nature of Adeimantus' own soul; there is a kind of noble simplicity in him that treats its subjects as bodies, or creatures of limited appetites. The simple city is little more than a combination of households designed for the sake of securing one's existence. And at this point, and you can hear his brother chastising him, at this point Glaucon retorts that it seems as if Adeimantus has created a city only fit for pigs, a city of pigs. Are we only such that we want to feed at a common trough? Is there nothing more to politics than that? And Glaucon says, "Where are the luxuries? Where are the relishes?" he asks. "Where are the things that make up a city?" And here too, Glaucon's city expresses his own tastes and his own soul. The war-like Glaucon would preside over what Socrates calls a feverish city, one that institutionalizes honors, competitions, and above all war. If Adeimantus, again, expresses the appetitive part of the soul, Glaucon represents the quality that Plato calls spiritedness, or thumos in Greek. Spiritedness is the central psychological quality of the Republic. The entire thrust of the book is devoted to the taming of spiritedness, to the control of spiritedness. Spiritedness is that quality of soul that is most closely associated with the desires for honor, fame, and prestige. It is a higher-order psychological quality. It seeks distinction, the desire to be first in the race of life, and leads us to seek to dominate others. We all know people of this sort, do we not? And we all, to some degree, embody this quality in ourselves. It is the quality that we associate with being a kind of alpha personality. This is the issue for Socrates: how to channel this wild and untamed passion of spirit or heart, how to channel this to some kind of common good. Can it be done? How can we begin the domestication of the spirited Glaucon? The rest of the book is to some degree about taming, asking the question whether Glaucon can be tamed. And it is here that Socrates turns to his first and perhaps even his most controversial proposal for the establishment of the just city. The creation of the just city can only begin, he says, with the control of music, poetry, and the arts. And this is where Plato's image as an educator derives. The first order of business for the founder of a city, any city, is the oversight of education. And his proposals for the reform of poetry, especially Homeric poetry, represent clearly a radical departure from Greek educational practices and beliefs. Why is this so important for Socrates? Ask yourself: if you were founding a city, where would you begin? Socrates' argument seems to be something like this: it is from the poets, and I mean that in the broadest sense of the term--myth makers, storytellers, artists, musicians, today we might say film and television producers--it is from these people that we receive our earliest and most vivid impressions of heroes and villains, gods and the afterlife. These stories, the stories we hear from earliest childhood on, shape us in some very meaningful sense for the rest of our lives. And the Homeric epics were, of course, for the Greeks what the Bible was for us, maybe even is in some communities. The names of Achilles, Priam, Hector, Odysseus, Ajax--these would have been just as familiar and important to the contemporaries of Plato as the names of Abraham, Isaac, Joshua, and Jesus are for us. 
Plato's critique of Homeric poetry in the Republic is two-fold; it is both theological and political. Maybe you might even say, following Spinoza, that this is the core of Plato's theological-political treatise here. The theological critique is that Homer simply depicts the gods falsely, as fickle and inconstant. He presents them as beings who are unworthy of our worship. More importantly, the Homeric heroes are said to be bad role models for those who follow them; they are shown to be intemperate in sex, overly fond of money, and to these vices Socrates adds cruelty and disregard for the dead bodies of one's opponents. The Homeric heroes are ignorant and passionate men, full of blind anger and desire for retribution. How could such figures possibly serve as good role models for citizens of a just city? And Socrates' answer is, of course, the purgation of poetry and the arts in Books II and III. He wants to deprive the poets of their power to enchant, something Socrates admits in the tenth book of the Republic that he himself has been highly susceptible to, the enchantment of the poets. We need to deprive, again, the poets, the song makers, the lyricists, the musicians, the mythmakers, the storytellers, all of them, of the power to enchant us. And in place of the pedagogical power of poetry, Socrates proposes to install philosophy. As a result, the poets will have to be expelled from this city. Imagine that. Sophocles will be expelled from the just city that Socrates wants to create. This always raises the question, which you will discuss in your section, whether or not Socrates' censorship of poetry and the arts is an indication of his totalitarian impulses. This is the part of the Republic most likely to call up our own First Amendment instincts. "Who are you, Socrates," we are inclined to ask, "to tell us what we can read and listen to?" And furthermore, Socrates seems to be saying not that the Kallipolis will have no poetry and music; it will simply be Socratic poetry and music. And there's another question which you would no doubt be concerned to discuss, namely, what would such Socratically purified music and poetry look like? What would it sound like? I don't know that I have an answer to this, but perhaps the Republic as a whole is itself a piece of this Socratic poetry that will substitute for the Homeric kind. But it's important to remember that the question of education and the question of the reform, censorship, and control of poetry is introduced in the context of taming the war-like passions of Glaucon and others like him. The question of censorship and the telling of lies is introduced, in other words, as a question of military necessity: controlling the guards or the auxiliaries of the city, its warrior class. Nothing is said here about the education of farmers, artisans, merchants, laborers, the economic class. Maybe, to speak bluntly, Socrates just doesn't care that much about them. It's okay, whatever they listen to. Nor has anything really been said up to this point about the education of the philosopher. His interest here is in the creation of a tight and highly disciplined cadre of young warriors who will protect the city much as watchdogs protect their own home. That is to say, recalling Polemarchus, those who are good to friends and bark and growl at strangers. Such individuals will subordinate their own desires and pleasures to the group and live a life by a strict code of honor. We have to ask: are Socrates' proposals unrealistic? 
Are they undesirable? Or are they desirable? They are not undesirable if you believe, as he does, that even the best city must make provision for war, and therefore a warrior's life, a soldier's life, will require harsh privation in terms of material rewards and benefits, as well as a willingness to sacrifice for others. Far from being unrealistic, it would seem Socrates engages in what we might call a kind of Socratic realism. Far more unrealistic would be the belief of those who argue, and I'm thinking here of names like Immanuel Kant and others from the eighteenth and nineteenth century, that one day we can abolish war altogether, and therefore abolish the passions that give rise to conflict and war. So, Plato believes, as long as the passionate or spirited aspect of our nature remains strong, so long will it be necessary to educate the warriors of society who defend it. So on that I'm going to end today, and next time we will talk about justice, the philosophers, and Plato's discovery of America.
9. The Mixed Regime and the Rule of Law: Aristotle's Politics, VII
Professor Steven Smith: I want to begin today with concluding Aristotle, part three. Before I do, however, could I just ask the people--yes, thank you so much. And your neighbor, too? Would you mind? Thank you so much. Just out of respect to Mr. Aristotle. It's mister to you. I want to talk today about Aristotle's discovery of America. This will probably come as a surprise to some of you, that Aristotle discovered America, but I will get to that in a minute. In many ways for Aristotle, as it is for every student of politics, the most serious, the most difficult issue one confronts is the problem of faction. How to control for factions. How to control for conflict between factions. That is the issue addressed especially in Books IV and V of the Politics, where Aristotle goes on to describe, by the term polity, the regime that he believes most successfully controls for the problem of faction. He in fact gives this regime, the polity, the name politeia, the generic Greek word for regime. The polity is the regime that represents, for Aristotle, a mixture of the principles of oligarchy and democracy and therefore, he says, avoids dominance by either extreme. By combining elements, as it were, of the few and the many, polity is characterized by the dominance of the middle class, the middle group. The middle class, he says, is able to achieve the confidence of both extreme parties, at least where it is sufficiently numerous to avoid the problems of class struggle and factional conflict. "Where the middling element is great," Aristotle writes, "factional conflict and splits over the nature of regimes occur least of all." So Aristotle, in a way, has discovered, long before James Madison's famous article in Federalist Number 10, the remedy for the control and containment, so to speak, of faction. You remember, many of you if not all of you who have read the Tenth Federalist Paper, that Madison outlines a scheme for an extended republic, he says, where numerous factions, in many ways, check and balance one another, compete with one another and therefore avoid the dominance of a single faction leading to the kind of tyranny of the majority, the tyranny of the majority class. Aristotle's proposal for a mixture of oligarchy and democracy seems, in many ways, to anticipate, 2,000 years before the fact, Madison's call for a government where powers must be separated and where, he says in Federalist Number 51, ambition must be made to counteract ambition in order to avoid, in other words, the extremes of both tyranny and civil war. The inevitable conclusion that I reach, and I believe any sensible reader of Aristotle would reach, is that Aristotle, in fact, discovered the American Constitution 1,500 or 2,000 years before it was written. This may seem surprising to you, since of course Aristotle lived long before. But that may simply be our own prejudice. My friend at the CUNY Graduate Center, Peter Simpson, has argued in a paper that I found quite convincing that Aristotle had, in fact, discovered the American Constitution. I say, it may simply be our prejudice to think that he didn't. Aristotle writes, in Book II of the Politics, that the world is eternal and everything in it has been discovered. The earth experiences, he says, certain periodic destructions and cataclysms; civilizations are reduced to barbarism only to recover and grow again. 
If this theory, you might say, of sort of cataclysmic change is true, we cannot rule out the possibility that a constitution like ours or even identical to ours existed at some point in the ancient past, in the far distant past, that Aristotle knew about. Do you think that's possible? Well, why not? But Aristotle's mixed constitution differs from ours still in certain important respects. Aristotle understands the mixed constitution as a balance of classes--the one, the few, and the many. He doesn't so much insist, as you will see in your reading, on the actual separation of the functions of government, putting them into separate hands. It is enough for him, he says, if each class shares in some aspect of the governing power. But that leads to a further difference. We tend to think of the separation of powers doctrine as necessary for the security and liberty of the individual, don't we? We usually think of individual freedom and security as the purpose of the separation of powers. It is when political functions become concentrated into, again, too few hands that we risk arbitrary government and the endangerment of the liberty of the individual. But for Aristotle, it is not the liberty of the individual so much as the functioning or functional well-being of the city that is the highest priority. Individual freedom may be, at best, a desirable byproduct of the Aristotelian mixed regime, but individual freedom is not its defining or principal goal. For anyone interested in this difference, I suggest you contrast or compare Aristotle's account of mixed government to Book XI of Montesquieu's Spirit of the Laws or to some of the central Federalist papers to see the way in which Mr. Aristotle revised, in some ways, the wisdom of Madison. You could compare them in some way that I think would be valuable. Not only did Aristotle understand the importance of the separation of powers doctrine and the kind of balance of factions as a way of controlling conflict and struggle, but he also understood the importance of property, private property, and commerce for a flourishing republic. We didn't really pause to talk much about this, but in Book II, you remember, he criticizes at considerable length Plato's Republic for the excessive unity it demands of its citizens. Socrates demands common ownership of property, at least among the auxiliary class. But Aristotle claims that the city is not naturally one. That is to say, a certain diversity is necessary to make up a city. Where all property is held in common, it is more likely to suffer from common neglect than to benefit from common ownership. He clearly understands, in many ways, the virtues of private property and of commerce. It is evident, Aristotle writes, that as it becomes increasingly one, as it becomes increasingly unified, the city will no longer be a city. A city is, in its nature, a multitude. As it becomes more of a unity, it will be like a household instead of a polis, and a human being instead of a household. There we see, in Book II, Aristotle offering his criticism of the claims for the sort of excessive unification, centralization, and concentration of property. Yet, despite his awareness of the importance of commerce and the importance of property, the aim of the city, he tells us, is not wealth, is not the production of wealth. In that way it would be useful to make a contrast between Aristotle and someone like Adam Smith, the great author of The Wealth of Nations. 
If wealth were the purpose of politics, Aristotle writes, the Phoenicians, you might say, in the ancient world--the Phoenicians were the commercial people par excellence--the Phoenicians would be the best regime. But he denies that. Aristotle could never endorse the view, stated by a famous American president, that the business of America is business. The political partnership, he says, must be regarded as being for the sake of noble acts performed well. Wealth, property, he tells us, exists for the sake of virtue, not virtue for the sake of wealth. Just as Aristotle would have been critical of the American tendency to regard government as for the sake of business or for the sake of the economy, he also criticized beforehand the American tendency to organize into clubs, what we call political parties, which exacerbate rather than control political conflict. These political clubs or parties use their influence to incense the populace, using their power to whip up dangerous passions that tend to make American politicians closer to demagogues than to statesmen. He would also regard the peculiar American practice of elections, rather than the Greek practice of appointing political offices by lot, as merely exacerbating the tendency to demagoguery, where each person seeking office plays shamelessly to the mob, promising all manner of things that they know they will not and cannot deliver. Think of almost anybody you like. Furthermore, while the American regime in many ways is, in principle, open to all and prides itself on a belief in equality, no doubt Aristotle would remark that its offices are, in fact, open only to the rich and to leaders who can acquire the support of the rich, making it rather an oligarchy in the guise of a republic. So Aristotle was not without his own critique of the American constitution and American political culture. There is, obviously, much in the American regime that Aristotle would have found admirable, even though it does not conform to his idea of the best regime, which is the subject of the last two books of the Politics, Books VII and VIII. Aristotle is very sketchy here about the structure, the institutional structure, the make-up of the best regime, acknowledging only that the best regime is one where the best men rule. That is to say, it is a kind of aristocracy or an aristocratic republic. I want to talk about this regime a little bit now, what Aristotle understands to be the requirements or the fulfillments, the necessities, of this aristocratic republic. In these parts of the Politics, Aristotle offers a serious challenge to existing Greek traditions and patterns of political education, every bit, in many ways, as far-reaching as Plato's Republic. In the first place, he tells us the purpose of the best regime, the purpose of Aristotle's republic, is directed not to war but, in fact, to peace. The citizen of the best regime, he says, must be able to sustain war if duty requires, but only for the sake of peace and leisure. Again, a critique not only of Sparta, but also of Athens and its imperialistic ambitions. Second, Aristotle explains the purpose of leisure when he says the end of the regime is peace and the purpose of peace is leisure. He doesn't understand by leisure simply relaxation, enjoying your private moments, enjoying your vacation time. Leisure does not simply mean rest or inactivity; rather, leisure is necessary for education, or what he sometimes calls by the term philosophy. 
By philosophy, he seems to suggest not so much the capacity for abstract or speculative thought, but rather a kind of liberal education that he regards to be the preserve of what he calls by the term the megalopsychos, literally, the great-souled person or the great-souled man. Mega, megalo, being our term for great, and psychos, related to our word psyche, soul. The great-souled person, the great-souled man, the gentleman is, in many ways, for Aristotle, the ideal recipient of this form of education, of liberal education, and also, in some respect, the ideal or perfect audience or readership of the book itself. We can begin to see clearly how Aristotle's best regime differs from Plato's intransigent demand for the rule of philosopher-kings. The megalopsychos, the gentleman, whatever else he is, is not a philosopher in the strict sense. Sociologically, Aristotle makes clear that the megalopsychos, unlike the philosopher, is a person of some inherited wealth, chiefly landed property, but whose way of life will be urban. He will be a member of what we might call the urban patriciate. In the Nicomachean Ethics, Aristotle provides us with a vivid list of the psychological and even physical characteristics that such a person, this megalopsychos, must possess. Such a person, he says, exhibits a sort of lofty detachment from the more or less petty things that weigh most of us down. Aristotle tells us he is slow to act unless something of great importance is at stake. He repays favors with interest, so as not to be under any obligations to others. The gentleman, he says, speaks his mind without fear or favor, somewhat like the New York Times, because to dissemble would be beneath him. He may occasionally hurt others, but this is not done out of deliberate cruelty. In addition, Aristotle tells us such a person will possess beautiful but useless things, suggesting the possession not only of wealth, but of a kind of cultivated aesthetic sense. As if that weren't enough, Aristotle tells us that the megalopsychos walks slowly, because to hurry is undignified, is tall, and speaks with a deep voice. Very clear about who, again, the ideal statesman or potential statesman, the reader of this book, would be. Most importantly, you might say, what distinguishes the gentleman as a class from the philosophers is a certain kind of knowledge or practical intelligence. The gentleman may lack the speculative intelligence of a Socrates, but he will possess that quality of practical rationality, of practical judgment, necessary for the administration of affairs. Aristotle calls this kind of knowledge, this kind of practical judgment, by the term phronesis, that I have on the blackboard. The person who possesses it is the phronimos, a person of practical judgment. Again, a term that captures something of our meaning of common sense, practical wisdom, the capacity for judging, the capacity for judgment, which is not the same thing, obviously, as speculative or philosophic intelligence. The phronimos is the person who is able to grasp the fitting or the appropriate, the appropriate thing to do, out of the complex arrangements that make up any situation. Above all, such a person embodies that special quality of insight and discrimination that distinguishes him or her from people of a more theoretical or speculative cast of mind. How is this quality of phronesis, of judgment, of practical wisdom, of horse sense--how is it acquired? 
Aristotle tells us that this kind of knowledge is the kind of knowledge most appropriate to politics. Again, it is neither--and he wants to be clear about this--it is neither the theoretical knowledge aimed simply at abstract truths, nor is it the productive knowledge, what he calls techne, the productive knowledge used in the manufacture of useful artifacts. What is it, then? It is a knowledge of how to act where the purpose of action is acting well. You might say it is less a body of true propositions than a shrewd sense of know-how or political savvy. This kind of knowledge entails judgment and deliberation, the deliberative skill or the deliberative art. We only deliberate, Aristotle says, over things where there is some choice. We deliberate with an eye to preservation or change, to making something better or to preserving it from becoming worse. This kind of knowledge will be the art or craft of the statesman concerned above all with what to do in a specific situation. It is the skill possessed by the greatest statesmen, you might say, the fathers of the constitutions, as it were, who create the permanent framework that allows later and lesser figures to handle change. This is the kind of political skill and wisdom, again, possessed by the founders of cities, the legislative founders of regimes. Aristotle's Politics is a book about the kind of knowledge requisite for that kind of skill. This quality of practical judgment, phronesis, practical wisdom, was developed, I think, in a beautiful essay, without any explicit reference to Aristotle, by the English political philosopher Isaiah Berlin. Anyone here ever heard of Isaiah Berlin? Not one of you? Famous, famous English philosopher, died a number of years ago in the late '90s. This, I hope, will be an inspiration to--you should read Isaiah Berlin. In any case, he wrote a wonderful essay called Political Judgment. In it he asks, "What is the intellectual quality that successful statesmen possess that distinguishes their knowledge from all other forms of rationality and knowledge?" He writes as follows. I'm going to quote him. "The quality that I am attempting to describe is that special understanding of public life, which successful statesmen have, whether they are wicked or virtuous. That which Bismarck had or Talleyrand or Franklin Roosevelt or, for that matter, men such as Cavour or Disraeli, Gladstone or Ataturk in common with the great psychological novelists, and something which is conspicuously lacking in men of more purely theoretical genius, such as Newton or Einstein or Bertrand Russell or even Freud." So there, too, like Aristotle, he distinguishes a kind of practical skill possessed by the greatest minds, political minds at least, and by what he calls the great psychological novelists, and says it's quite different from that possessed by the greatest philosophers and scientists. "What are we to call this capacity?" Berlin continues. He writes, again, as follows. "Practical reason, perhaps: a sense of what will work and what will not. It is a capacity for synthesis rather than analysis, for knowledge in the sense in which trainers know their animals or parents their children or conductors their orchestras, as opposed to that in which chemists know the contents of their test tubes or mathematicians know the rules their symbols obey.
Those who lack this quality of practical wisdom, whatever other qualities they may possess, no matter how clever, learned, imaginative, kind, noble, attractive, gifted in other ways they may be, are correctly regarded as politically inept." There, Berlin tells us something about the character of this political knowledge that Aristotle describes as phronesis. Again, how is this knowledge acquired? Are we just born with it? Do some people just have it or is it a product of experience? Aristotle doesn't say, but I think the answer is clearly some of both. It is a quality, and here I agree with Berlin, possessed by some of the great psychological novelists. I mention the names of Tolstoy, Henry James, and perhaps the greatest of all, Jane Austen, if you want to know a novelist who employs this great skill of judgment, discrimination and practical reason. It is also a virtue of great statesmen. Principally, Berlin mentions Bismarck, Disraeli, Franklin Roosevelt. I would also add the names of Pericles, Lincoln, and Churchill. Read their works. Study their histories. They provide a virtual education in statecraft, in how to negotiate affairs in precisely the way Aristotle would have us do. That leads me to the larger question, you might say, which is posed throughout Aristotle's work as a whole. What is Aristotle's political science? What is it for? What is he attempting to do? Already, you could say, to ask this question is to state a claim. Does Aristotle have a political science, a science of politics? Again, if so, what is it about? To begin to answer this question, you might say even to begin to think about it in the right way, requires that we stand back from Aristotle's text for a while and ask some fundamental questions about it. What does Aristotle mean by the political? What is the goal or purpose of the study of politics, and what is distinctive about Aristotle's approach to the study of political things? Today, the term "political science" stands for one among a number of different disciplines that we call collectively the social sciences. These not only include political science, but economics, sociology, anthropology, psychology, among others. Each of these disciplines seeks to give us a handle on a distinctive set of human actions and interactions. Economics deals with the transactions involving the production and distribution of wealth, sociology with the transactions governing status and class, anthropology with the domain of culture, and so on. What is it that political science studies and what is its relation to the other disciplines? The core of political science, at least according to Aristotle, and to this degree I'm very much an Aristotelian, what distinguishes it from all other studies, is the concept of the regime, of the politeia. The regime, for him, is not one branch of human activity among others; it is the fundamental principle or ordering principle that makes all the others even possible. This is why Aristotle does not regard the study of politics as one social science among others. It is rather what he calls the master science that determines the rank and place of all the others within the polity. His study of the regime, that is to say the underlying constitutional principles that govern each order, is what distinguishes Aristotle from the other social scientists. When you came into this class in the beginning of the semester, you may have thought you were just signing up for a class in political science.
You did not know, perhaps, that you were coming in to study what he calls "the master science," the "science of sciences," in some respects. It is that priority that Aristotle attributes to the regime that I think is what distinguishes his kind of political science from that of today. Today, you might say, political scientists and social scientists are more modest in ascribing priority to any particular branch of knowledge. With, I should say, the possible exception of the economists, who often will believe that economic motives and transactions provide the key to all possible human behaviors. Who knows, maybe they're right, but Aristotle would deny it. For Aristotle, however, politics has a priority to all the others, because, as he has argued, man is the political animal. To be a political animal means first to possess speech or reason that allows us to participate in a community or a way of life governed by shared standards of justice and injustice. Aristotle's political science presupposes, in other words, a certain conception of human beings as linguistic animals who are capable not only of living together--so do a range of other species--but of sharing in the arrangements of rule. It is our logos, our reason, that makes a community possible and also creates, you could say, a certain latitude or indeterminacy in our behavior that distinguishes us from other species. It is precisely, he believes, this latitude that makes political communities not only sites of agreement over shared standards, but also, as he says, sites of moral contestation over justice and injustice. Politics is about conflict and conflicts over justice. To be a political animal, for him, is to engage or to be engaged in this ongoing conversation and debate over the very nature of justice; to refuse to participate in that conversation, to declare oneself an outsider to it, he says, is either to be below humanity or above it. To be human is to be part of that conversation. The centrality Aristotle ascribes to politics forces us to consider another question, namely what is the purpose of this study? Why do we engage in it? At first glance, this seems to be overwhelmingly obvious--to gain more knowledge. But knowledge of what and for what? Most people today are attracted to the study of politics because they are interested in things they've read about in newspapers or seen on TV. Things like elections, political leaders and parties, different causes to which they may feel some attraction; they are interested in wars and revolutions that they see or have heard about. It is to learn more about these things that we come to the study of politics. It's as true now as it was in the time of Aristotle. Aristotle certainly recognized that the accumulation of political knowledge, you might say the gathering of data, the organization of facts, is very important. Books III, IV, and V of the Politics show the empirical side of Aristotle's politics. Again, let me just pose the question. What is this knowledge for? What does Aristotle intend to do with it or want us to do with it? Politics, political science, he tells us in the Ethics again, is not a theoretical subject in the manner of physics or metaphysics or mathematics. That is to say, its purpose is not knowledge for its own sake. However important the study of politics may be, it exists not for the sake of knowledge, but for action--as he tells us, for praxis, his word for action.
Political science exists for the sake of the human good, and the opening sentence of the Politics confirms this. He says, we see, everyone does everything for the sake of some apparent good. All action, all human behavior, is aimed at achieving some type of good. All political action aims at preservation or change. When we act, we seek to preserve or to change. All political action, you might say, is guided by the idea of better or worse. It implies a standard of better and worse, and this implies some idea of the good by which we judge. So it follows, at least Aristotle believes, that the study of politics is not, again, for the sake of knowledge alone, but for knowledge that serves the regime. It helps to make it better or prevent it from becoming worse. Its goal is not just to know more, but to know how, and this requires not only theoretical acumen, but political judgment and the kind of practical knowledge that Aristotle discusses at length. This quality of practical judgment and reflection is, again, somehow unique to the political art or the political skill, Aristotle tells us. It is the ability not only to keep the ship of state afloat, but the ability that allows the greatest statesmen to guide the ship, to steer it safely to port--that is to say, the kind of knowledge needed by statesmen. Aristotle's political science, then, is ultimately the supreme science of statecraft, a term that again we don't hear much about--statecraft or statesmanship. It's regarded perhaps by today's political science as too value-laden, too subjective to speak of statesmanship or statecraft. This, too, again, is a word that carries distinctive and strong connotations. Who is a statesman? What are the attributes of the statesman? I've spoken a little about the attributes that Aristotle believes are essential to the megalopsychos, the greatest of the statesmen. This will be quite different, for example, from the qualities that, as we will see beginning on Friday and next week, Machiavelli, and later Hobbes and Locke, believe are necessary for the great founders or statesmen. Plato and Aristotle give their own vision--the philosopher-king, the great-souled man or megalopsychos. But the statesman in the highest degree is, again, the founder of regimes, laws, and institutions. They provide the constitutional framework within which we, later figures, operate. So if Aristotle's political science is an education for statesmanship, you might say, what are its methods? What are its distinctive methods? How do we educate a statesman? How do we educate potential statesmen? What are its methods? This is a question asked, you might say, of every mature science. It is possession of a method that distinguishes a mature science from simply a jumble of facts, hearsay, inspired guesses, or a random collection of insights and observations. Without a distinctive method for obtaining and organizing knowledge, we are all just groping in the dark. So what is the distinctive method of Aristotle's Politics? To some degree, Aristotle refuses to play the methodologist's game. In a well-known passage from the Ethics, he says that our discussion will be adequate if it achieves clarity within the limits of its subject. If it achieves clarity within the limits of its subject. In other words, he seems to be saying it is wrong to demand methodological purity in a subject like politics where there is always great variety and unpredictability.
It is the mark, he says, of an educated person, presumably a liberally educated person, not to demand more precision than the subject matter allows. But that formulation seems, in many ways, to be question-begging. How much precision does the subject allow? How do we know? There will always, he suggests, appear to be something ad hoc about the methods used in the study of politics. We will have to let the method fit the subject, rather than demanding that the subject matter fit a kind of a priori method. To insist on that kind of methodological purity, he implies, would be to impose a false sense of unity, a false sense of certainty or absoluteness on the study of politics, which is variable and contingent and always subject to flux and change. Even while Aristotle may deny that there is a single method appropriate to the study of politics, he proposes a set of common questions that political scientists have to address. He lays out these questions at the very beginning of the fourth book of the Politics. He lists four such questions. First, the political scientist, he says, must have a grasp of the best regime, given the most favorable circumstances. Second, he tells us, the political scientist must consider what kind of regime will be best under less than optimal circumstances. Third, the political scientist must have some knowledge of how to render any regime, no matter how imperfect it may be, more stable and coherent. Finally, the political scientist must know something about the techniques of reform and persuasion, what we might call the area of political rhetoric, by which existing regimes can be brought closer to the best. Taken together, these four questions are intended to guide inquiry, to shape and direct inquiry. They are not intended to yield sure or certain results, but to guide and inform statesmen and citizens in the business of decision-making. Bearing in mind that political science is a practical science, a science of judgment, a science aimed at directing action under specific circumstances and situations, it is important, Aristotle finally suggests, that the language of political science express the common sense or ordinary language of political actors. There is virtually no jargon in Aristotle's Politics. Aristotle's political science stays entirely within the orbit of ordinary speech. Such language does not claim to be scientifically purged of ambiguity, but rather adopts standards of proof appropriate to people in debates and assemblies, in courts of law, in council rooms and the like. The language of Aristotelian political science is the language of man, the political animal. You will never hear him speaking in terms of dependent or independent variables. You will never hear him using technical jargon, artificially imported into the science of politics or the study of politics from the outside. What most distinguishes Aristotle is that his language is addressed emphatically to citizens and statesmen, not to other political scientists or philosophers. It has a public orientation. It is publicly directed. It is public spirited. It is concerned with the common good. Contrast that with today's political science. Today, it seems, political scientists are more concerned with advancing the abstract truths of science and claims about creating a methodologically rigorous and pure science of politics, whereas Aristotle is more concerned with the regime.
Modern political science, in many ways, claims to stand above or apart from the regime, to be objective, to be disinterested, as if it were viewing human affairs from a distant planet. Aristotle takes his stand from within politics and the regime of which he is a part. Of course, we all know contemporary political scientists are not neutral. They frequently insert their views--values, we call them, value judgments--into their discussions. These values are regarded by them as purely subjective--again, their own value judgments, so to speak--and not, strictly speaking, a part of the science of politics. But we all know, do we not, that most contemporary political scientists tend to be liberals. Their values are liberal values. This raises a question: whether the relation between contemporary political science and liberalism is merely accidental or whether there is some intrinsic, some necessary connection between them. One might do well to ponder which political science is really more scientific--Aristotle's, which is explicitly and necessarily evaluative and which offers advice and exhortation to statesmen and citizens about how to care for their regime, or contemporary political science, which claims to be neutral and nonpartisan, but which smuggles its values and preferences in always through the back door. On this very partisan note I conclude. On Friday, let me just remind you, Il Principe. We'll study Machiavelli.
Introduction_to_Political_Philosophy_with_Steven_B_Smith
2_Socratic_Citizenship_Platos_Apology.txt
Professor Steven Smith: Today we start with Plato, Plato's Apology of Socrates. This is the best introductory text to the study of political philosophy. Why? Let me give you two reasons. First, it shows Socrates, the reputed founder of our discipline, the founder of political science--and I will say a little bit more about that later on today--explaining himself and justifying himself, justifying his way of life before a jury of his peers. It shows Socrates speaking in a public forum, defending the utility of philosophy for political life. And, secondly, the Apology demonstrates also the vulnerability of political philosophy in its relation to the city, in its relation to political power. The Apology puts on trial not merely a particular individual, Socrates, but puts on trial the very idea of philosophy. From the very beginning, philosophy and the city, philosophy and political life, have stood in a sort of tension with one another. Socrates is charged, as we will see, by the city with corrupting the youth and impiety toward the Gods, right? In other words, he's accused of treason, a high capital offense. No other work of which I am aware helps us better think through the conflict, I would even say the necessary and inevitable conflict, between the freedom of the mind and the requirements of political life. Are these two things, are these two goods as it were, freedom of mind and political life, are they compatible or are they necessarily at odds with one another? That seems to me to be, in some ways, the fundamental question that the Apology asks us to consider. Okay? Now for generations, the Apology has stood out as a symbol for the violation of free expression. It sets the case for the individual committed to the examined life over and against a bigoted and prejudiced multitude. The clearest statement of this view of, again, the individual set against the mob in some ways, is found in a work of a very famous civil libertarian of the nineteenth century, a man named John Stuart Mill. In his famous tract called simply On Liberty, Mill wrote, "Mankind can hardly be too often reminded that there was once a man named Socrates between whom and the legal authorities of his time there took place a memorable collision." Over and again, and Mill is a kind of a famous case of this, Socrates has been described as a martyr for freedom of speech, and he has been somewhat extravagantly compared at various times to Jesus, to Galileo, to Sir Thomas More, and has been used as a role model for thinkers and political activists from Henry David Thoreau, to Gandhi, to Martin Luther King. So, Socrates has become a very central symbol of resistance to political power and of the dangers to the individual of unchecked rule. But this reading of the Apology is, you might say, a kind of brief for freedom of expression and a warning against the dangers of censorship and persecution. Although this reading has been enormously influential over the centuries, at least over the last century and a half, you have to ask yourself: is this the reading that Plato intended? Did Plato want us to read the dialogue this way? As a teacher of mine used to say, "You read Plato your way, I'll read him his way." But, how did Plato intend this dialogue to be understood? Note that Socrates never defends himself by reference to the doctrine of unlimited free speech. He doesn't make that claim. He doesn't make the claim about the general utility of free or unlimited speech.
Rather, he maintains, as he puts it near the end of the defense speech, that the examined life alone is worth living. Only those, in other words, engaged in the continual struggle to clarify their thinking, to remove sources of contradiction and incoherence, only those people can be said to live worthwhile lives. "The unexamined life is not worth living," Socrates confidently, defiantly asserts to his listeners, to his audience. Nothing else matters for him. His, in other words, seems to be a highly personal, in many ways highly individual, quest for self-perfection and not a doctrine about the value of freedom of speech in general. But even though, you might say, Socrates seems to be engaged in, again, this highly personal quest for self-perfection, there is something, which one can't avoid, deeply political about the Apology and about his teaching. At the heart of the dialogue, or at the heart of this speech rather, is a quarrel, a quarrel with his accusers over the question, never stated directly perhaps, but over the question of who has the right to educate future citizens and statesmen of the city of Athens. Socrates' defense speech, like every platonic dialogue, is ultimately a dialogue about education. Who has the right to teach, who has the right to educate? This is in many ways for Socrates the fundamental political question of all time. It is the question of really who governs or, maybe put another way, who should govern, who ought to govern. Remember also that the city that brought Socrates to trial was not just any city; it was a peculiar kind of city, it was Athens. And Athens was, until only fairly recent times in human history, the most famous democracy that ever existed. I say fairly recent times until, you know, the American democracy. But it was, until at least the eighteenth or nineteenth century, the most famous democracy that ever existed. The speech of Socrates before the jury is perhaps the most famous attempt to put democracy itself on trial. It is not merely Socrates who is on trial. Socrates intends to put the democracy of Athens itself on trial. Not only does the Apology force Socrates to defend himself before the city of Athens, but Socrates puts the city of Athens on trial and makes it defend itself before the high court of philosophy. So, the ensuing debate within the dialogue can be read as a struggle again over who has title to rule. Is it the people? Is it the court of Athens, the dēmos, to use the Greek word for "the people," or is it Socrates the philosopher-king who should be vested with ultimate political authority? That is, of course, the question, and it's taken up in a very vivid way, a much more explicit way, in the Republic, but it runs throughout the Apology and you can't really understand the Apology unless you see that this is the question that Socrates is posing throughout. So, I have some names put on the board and some dates, because I want to talk a little bit about the political context of this dialogue. One can of course read--there's nothing wrong with reading--the Apology, again, as a kind of enduring symbol of the plight of, you might say, the just individual confronted with an unjust mob, or an unjust political rule. It's, again, a question that Plato takes up in the Republic when a character in the book named Glaucon, who happens to be, as it were, the brother of Plato, asks Socrates whether it is actually better to be just or simply to have the reputation for justice.
And Socrates says it is better to be just, even if that results in persecution and death. But the trial is not, again, just an enduring symbol of justice versus injustice; it is an actual historical event that takes place in a particular moment of political time, and this bears, I think, decisively on how we come to understand the case both for and against Socrates. Let me talk a little bit about that context. The trial of Socrates takes place in the year 399, and all of these dates refer to before the common era, 399. Some of you will know that that trial follows very quickly upon the heels of the famous Peloponnesian War. This was the war related by Socrates' slightly younger contemporary, a man named Thucydides, who wrote the history of the Peloponnesian War, a war that took place between the two great powers of the Greek world, the Spartans and their allies and Athens and its allies. The Athens that fought this war against Sparta was an Athens at the height of its political power and prestige under the leadership of its first citizen Pericles, whose name is also up there at the very top. Under Pericles, Athens had built the famous Acropolis. It had established Athens as a mighty and redoubtable naval power and it created an unprecedented level of artistic and cultural life, even today known simply as Periclean Athens. But Athens was also something completely unprecedented in the world: it was a democracy. And, again, even today the expression "Athenian democracy" connotes an ideal of the most complete form of democratic government that has ever existed. "We are the school of Hellas." This is what Pericles boasts to his listeners in the famous funeral oration told by Thucydides. "We throw our city open to the world and never exclude foreigners from any opportunity of learning and observing, even though the eyes of an enemy may profit from our liberality," Pericles boasts once again. The question maybe you want to ask about this is: how could the world's first, freest, and most open society sentence to death a man who spoke freely about his own ignorance and professed to care for nothing so much as virtue and human excellence? Now, at the outbreak of the Peloponnesian War, Socrates was just under 40 years of age. And we learn from the speech that Socrates himself served in the military and served in defense of his country. The war, the Peloponnesian War, was fought, as you can see, over a considerable length of time, on and off for a period of almost 30 years, and was concluded in the year 404 with the defeat of Athens and the installation of a pro-Spartan oligarchy, a pro-Spartan regime known simply as the Thirty Tyrants, who ruled Athens for a year. The next year, 403, the Tyrants, The Thirty as they were called, were driven out and a democratic government was once again reestablished in Athens. Just a few years later, three men named Anytus, Meletus and Lycon, all of whom had been part of the democratic resistance movement against the Spartan oligarchy, brought charges against Socrates. The charges against him were: corrupting the young and disbelieving in the Gods that the city believes in. So, you can see that the charges were brought by people who were themselves, again, part of a democratic resistance movement, and the names of Anytus and Meletus, as you've read, appear in the speech itself. So, the charges brought against Socrates did not simply grow out of thin air. Maybe we should rephrase the question. Not why did the Athenians bring Socrates to trial?
But, why did they permit him to carry on his practice of challenging the law and the authority of the law for as long as they did? Okay? Add to this the fact that when Socrates was brought to trial, again, the democracy had only recently been reestablished, but that many friends and former students of Socrates had been themselves implicated in the rule of the hated Thirty Tyrants. Among the members of The Thirty was a man named Critias--and there's actually a platonic dialogue named after him--who was a relative of Plato's, and another man named Charmides, whose name is also the title of a platonic dialogue, and who was Plato's uncle. Plato himself, he tells us much later in life in his famous Seventh Letter, was invited by his relatives to help to form a part of the government of The Thirty, and later Plato said that "so abhorrent did they become that they made the older democracy look like the Golden Age." So, the point I'm suggesting is that many of Socrates' students and associates, including Plato himself, had some connection with this oligarchical government that had ruled Athens for a brief time. And Socrates was himself not above suspicion. We often--don't we, even today--judge teachers by their students, by the company they keep, yes, don't we? No one is above suspicion. Socrates himself had been a close associate of a man named Alcibiades, probably the most prominent Athenian in the generation after Pericles. Alcibiades was the man who engineered the disastrous Sicilian expedition and later ended his life as a defector going over to Sparta. His complex relationship with Socrates is, by the way, recounted in the drunken speech that Alcibiades gives in Plato's dialogue, the Symposium. So, you can see that the trial of Socrates, the little speech that you have read, takes place in the shadow of military defeat, of resistance, of conspiracy and betrayal. Socrates was 70 years old at the time of the trial. So, this was a highly charged political environment, far more volatile than, for example, the kind of partisan quarrels we see today in our republic, I hope. Okay? So, let me talk about the accusations; let me move from the political context of the speech to the accusations. And, I say accusations because, as you will see if you read closely, there were actually two sets of accusations leveled against Socrates. Early in the speech Socrates claims that his current accusers, Anytus and Meletus--again, the democratic resistance fighters--are themselves the descendants of an earlier generation of accusers who were responsible, he claims, for maligning him and creating an unfavorable prejudice against him. "These charges are not new," he tells the jury, and many members of the jury, he says, will have formed an unfavorable opinion about him. This was in the days before there were intense forms of jury selection, where they would ask people, "Do you have a view of the case?" Many of the jurors would have known Socrates, or certainly would have heard of him, and, he says, would already have had an unfavorable opinion formed about him by this earlier generation of accusers. He makes reference to a comic poet--yes, a comic poet--an unequivocal reference to the playwright Aristophanes, whose name I have put up on the board. Aristophanes is the one who created the original or the initial prejudice against Socrates. What was that prejudice that Aristophanes, this comic poet, had created?
The allusion to Aristophanes and the comic poet is a part of what Plato calls in Book X of the Republic the old quarrel between philosophy and poetry. This quarrel is a staple of Plato's dialogues and a central theme not only of the Symposium, in which Aristophanes and Socrates are actually shown at the same dinner table with one another, but also a key feature of the Republic, which we will be reading in a week, where Socrates offers an elaborate proposal for the censorship and control of poetry if it is to be made compatible with the demands of political justice. In fact, in a way you cannot understand the Republic unless you understand the poetic backdrop to it and Socrates' long-standing engagement with the poetic tradition, this back and forth between himself and the man he calls this comic poet. The core of this quarrel between the philosopher and the poet, between Socrates and Aristophanes, is not just an aesthetic judgment; it is not simply an aesthetic quarrel. It is, again, deeply political, or at least has something very political about it. It gets to the essence of the question of who is best equipped to educate future generations of citizens and civic leaders. Are the philosophers or are the poets, you might say, the true legislators for mankind, if you want to use Shelley's dictum? Which one legislates for mankind at the time of Socrates? The Greeks already had a centuries-long tradition of poetic education, going back to the time of Homer and Hesiod, that set out certain exemplary models of heroic virtue and civic life. The Homeric epics were to the Greek world what the Bible is to our world, that is to say, in some respects, the ultimate authority regarding the way of the Gods, their relation to the world and the type of virtues appropriate to human beings. The virtues endorsed by the poetic tradition, of which Aristophanes is the great representative here, the great inheritor and representative, were the virtues of a warrior culture, of war-like peoples and men at war. These were the qualities that had guided the Greeks for centuries and contributed to their rise to power. It contributed to Athens' as well as Sparta's rise to greatness from a small, dispersed people to a great world power and, again, allowed them to achieve a level of artistic, intellectual and political accomplishment akin to Renaissance Florence, Elizabethan England and Thirties Weimar. So, what is at stake in this quarrel between Socrates and the poetic tradition that he alludes to? First, Socrates' manner of teaching is markedly different from that of the poets, right? Does anyone know here the opening line of the Iliad? Homer's Iliad, does anyone know the first line? Anyone remember that from high school? "Sing, Goddess, the wrath of Achilles," right? "Sing, Goddess, the wrath of Achilles." The poets are oracular, right? They call on Gods and Goddesses to inspire them with song, to fill them with inspiration to tell stories of people with superhuman strength and courage and anger. By contrast, you could say, the method of Socrates is not oracular. It is not storytelling; it is conversational, it is argumentative, or, if you want to use the word he applies to it, it is dialectical. Socrates makes arguments and he wants others to engage with him, to discover which argument can best withstand the test of rational scrutiny and debate. There are no arguments in Homer's Iliad or Odyssey. You hear strong and compelling stories but no arguments.
Socrates makes, in other words, continual questioning, and not the telling of stories and the recitation of verses, the essence of this new political education. He questions the methods of teaching of the poets. But, secondly, again, Homer and the poets sing the virtues of men at war. Socrates wants to replace the warrior citizen with a new kind of citizen, a whole new set, you might say, of citizen virtues. The new Socratic citizen, let's call him that for a moment, may have some features in common with the older Homeric warrior. But Socrates ultimately wants to replace military combat with a new kind of, you might call it, verbal facility, verbal combat, in which again the person with the best argument is declared to be victorious. The person with the best argument--let the best argument prevail. The famed Socratic method of argumentation is basically all that remains of the older pre-Socratic culture of struggle and combat. The new Socratic citizen is to be trained in the art of argument and dialectic, and we will talk a little later about what that means. So, it is as a challenger to the poets and all they stand for, the centuries-long tradition of poetic education, that Socrates asserts himself, that Socrates presents himself. The Apology shows Socrates as offering a new model of citizenship, a new kind of citizen. His challenge to the poets is in a way the basis for the resentment that built up against him, the resentment that Aristophanes and what he calls the earlier accusers have brought to bear. In fact, you might say, so seriously was Socrates taken by Aristophanes and the poets that Aristophanes devoted an entire play to him--he wrote an entire play about Socrates, called the Clouds--devoted to debunking and ridiculing Socrates' profession of learning. Aristophanes' play is sometimes even included in certain editions of the book you're reading, like this one, which has Aristophanes' Clouds in it, along with the Apology and Crito. The existence of that play shows to all of us just how seriously Socrates was taken by the greatest of his contemporaries, and Aristophanes was, along with Sophocles and Euripides and others, among the greatest of the Greek playwrights. The mockery, you might say, the mockery of Socrates, remains one of the sincerest forms of flattery; they took him very seriously. Let me just say something about the Clouds, this comic play, this satire on Socrates, because it is part of that initial accusation that Socrates says is leveled against him. Here, Aristophanes presents Socrates as an investigator--and this is part of the first charge, remember--an investigator of the things aloft and the things under the earth, one who makes the weaker argument the stronger. That's the argument that Socrates says Aristophanes brings against him. In this play, Socrates is presented as the head, the leader, the director of what we might think of as the first think tank known to human history. It's called in the play itself the Phrontisterion, which means, or is sometimes translated as, the Thinkery or the Thinketeria, or simply a kind of think tank, where fathers, Athenian fathers, bring their sons to be indoctrinated into the mysteries of Socratic wisdom. And in the play Socrates is shown hovering, flying above the stage in a basket in order to be able to better observe the clouds, the things aloft, right?
But also, in many ways, symbolizing, at least on Aristophanes' account, Socrates' detachment from the things down here on earth, the things that concern his fellow citizens. Socrates is a kind of what in German people would call a Luftmensch. He's a man up in the air, you know, he's so detached, he doesn't have his feet on the ground. And Socrates is shown not only mocking the Gods in doing this, but he is shown by Aristophanes to teach all of the things that violate every decent human taboo--incest, the beating of one's parents, all these kinds of things. Socrates is presented as exhibiting a kind of corrosive skepticism which is at the core of Aristophanes' charge against him. To make a long story short, the play concludes with Socrates' think tank being burned to the ground by a disgruntled disciple. An object lesson for all later professors, I would say, who teach nonsense. Right? Don't get any ideas about taking a match to the department. So, how accurate is that picture of Socrates, the man who investigates the things aloft and the things under the ground? The Clouds was written in 423 when Socrates was in his mid-forties, and the Aristophanic Socrates is essentially what we call a natural philosopher. Again, investigating the things aloft, under the ground. He is what we would call today a scientist, a natural scientist. But this seems quite removed, doesn't it, from the Socrates who is brought up on charges of corrupting the young and impiety. In the Apology--and here is where Socrates actually tells the story, very important in the course of this speech--he provides a kind of intellectual biography, an account of an incident that occurred long before the trial and set him on a very different path. He recalls the story, don't you remember, of a man named Chaerephon, a friend of his, who had gone to the Delphic Oracle, the Oracle of Delphi, and asked if there was anyone wiser than Socrates and was told there was not. Socrates tells us that when he was told this he expressed disbelief in the Oracle. He didn't believe it, and in order to disprove the Oracle's statement, he says, he began a lifelong quest to find someone wiser than himself--a quest in the course of which he interrogated the politicians, the poets, the craftsmen, all people reputed to be knowledgeable; and his conversations led him to ask questions, not about natural scientific phenomena, but questions about the virtues, as he tells us, the virtues of a human being and a citizen, what we would call today perhaps moral and political questions. That incident that Socrates tells of here represents what one could call the famous Socratic turn, Socrates' second sailing, so to speak. It represents the moment in the life of Socrates where he turns away from the investigation of natural phenomena to the study of the human and political things, the moral and political things. The Delphic story, for what it's worth, marks a major turning point in Socrates' intellectual biography: the move from the younger, we could call him Aristophanic, Socrates, the Socrates who, again, investigates the things aloft and under the earth, to the later, what we could call platonic, Socrates--the founder of political science, the founder of our discipline, the Socrates who asks about the virtues of moral and political life. Socrates' account of this turn, this major turn in his life and career, leaves a lot of questions unanswered, questions that maybe even occurred to you as you were reading this dialogue, reading this speech.
Why does he turn away from the investigation of natural phenomena to the study of human and political things? The Delphic Oracle is interpreted by Socrates, at least, to command engaging with others in philosophical conversation. Why does he interpret it this way? Why does this seem the proper interpretation, to engage in these kinds of conversations? It is this Socrates who is brought up on charges of corruption and impiety, yet none of this quite answers the question of what is the nature of Socrates' crime. What did he do? What did corruption and impiety mean? To try to answer those questions we would have to look a little bit at what is meant by this new kind of Socratic citizen. Who is this citizen? The charges brought against Socrates by Anytus and Meletus, we see, are not exactly the same as those brought against him by Aristophanes, the comic poet. Anytus and Meletus talk about impiety and corruption, not investigating the things aloft and making the weaker argument the stronger. What do these terms mean? Impiety and corruption--in what sense are these civic offenses? What could impiety have meant to his audience and his contemporaries? At a minimum, we would think the charge of impiety suggests disrespect of the gods. Impiety need not be the same thing as atheism, although Meletus confuses the two, but it does suggest irreverence, even blasphemy, toward the things that a society cares most deeply about. Yes? To be impious is to disrespect those things a person or a society cares most deeply about. When people today, for example, refer to flag burning as a desecration, as desecrating the flag, they are speaking the language of impiety, right? They are speaking the language of some kind of religious or quasi-religious desecration. Meletus, whose name in Greek actually means care, accuses Socrates of not caring properly for the things that his fellow citizens care about. So, the question is: what does Socrates care about? What does he care about? Consider the following: every society we know of operates within the medium of belief or faith of some kind. Take our founding documents, the Declaration of Independence, the Constitution: that all men are created equal, that we are endowed with inalienable rights, that all legitimate government grows out of consent, and the like. These beliefs form something like a kind of national creed, you might say, an American national creed, what it means to be an American and not someone else. Yet, how many people could give a kind of reasoned account of what makes these beliefs true, or what grounds these beliefs? Most of us, most of the time, hold these beliefs as a matter of faith, as a matter of belief, because we have learned about them from childhood, because they were written by Thomas Jefferson or some other reputed high authority. To question those beliefs would seem to exhibit a kind of lack of civic faith, faith in our ruling opinions. In short, you might say, a lack of civic piety or respect. Socrates clearly believes that piety or faith is the natural condition of the citizen. Every society, no matter of what kind, requires a kind of faith in its ruling principles, in its fundamental beliefs. But belief seems to be threatened from at least two sources. One is simple disbelief or unbelief, a kind of rejection of ruling opinion simply because you don't like it. You know, when you see the bumper sticker on the car, "Question Authority"--this kind of rejection of ruling opinion. But the other source of conflict with ruling opinion is from philosophy.
Philosophy is not the same thing as simple disbelief or rejection, but the two can be easily confused. Philosophy grows out of a desire to replace opinion with knowledge, opinion or belief with reason. For philosophy, it is not enough simply to hold a belief on faith; one must be able to give a rational account, a reasoned account, of one's belief. Its goal, again, is to replace civic faith with rational knowledge. And, therefore, philosophy is necessarily at odds with belief and with this kind of civic faith. The citizen may accept certain beliefs on faith because he or she is attached to a particular kind of political order or regime. But, for the philosopher, this is never enough. The philosopher seeks to judge those beliefs in the light of true standards, in the light of what is always and everywhere true, as a quest for knowledge. There is a necessary and inevitable tension between philosophy and belief, or to put it another way, between philosophy and the civic pieties that hold the city together. From this point of view, I want to say, was Socrates guilty of impiety? On the face of it, the answer to that seems to be yes. Socrates does not care about the same things his fellow citizens care about. His opening words to the jury seem to convey this: "I," he says, "am simply foreign to the manner of speech here." This seems to be a statement of his alienation or disaffection from the concerns of his fellow Athenians. I know nothing about what you do or what you care about. Yet it certainly doesn't seem right to say that Socrates does not care at all. He claims to care deeply, perhaps more deeply than anyone around him has ever cared, before or since. And among the things he cares deeply about, he says, is this calling, as he says, "to do nothing but persuade you, both younger and older, not to care for bodies and money, but for how your soul will be in the best possible condition." That concern with the state of one's soul, he tells the jury, has led him not only to impoverish himself, but to turn himself away from the public business, from the things that concern the city, to the pursuit of private virtue. And here are the words of his that I want to leave you with today, from section 31d of the Apology. Socrates says, "This is what opposes my political activity. And its opposition seems to me to be altogether noble, for know well, men of Athens, if I had long ago attempted to be politically active I would long ago have perished and I would have benefited neither you nor myself. Now do not be vexed with me when I speak the truth. For there is no human being who will preserve his life if he genuinely opposes either you or any other multitude and prevents many unjust and unlawful things from happening in the city." Rather, he says, "if someone who really fights for justice is going to preserve himself even for a short time, it is necessary for him to lead a private, rather than a public, life." Think about that: if someone who really fights for justice is going to preserve himself, it is necessary for him to lead a private, not a public, life. How are we to understand Socrates' claim that the pursuit of justice requires him to turn away from public to private life? What is this new kind of citizen, again, concerned with this kind of private virtue, this concern for the virtue of one's soul? That's the question I want us to consider again for next week as we finish the Apology and move our way up to the Crito. Okay? We'll do that for next week.
Introduction_to_Political_Philosophy_with_Steven_B_Smith
1_Introduction_What_is_Political_Philosophy.txt
Professor Steven Smith: Let me start today by asking the question, "what is political philosophy?" Custom dictates that I say something about the subject matter of this course at its outset. This in some ways might seem a case of putting the cart before the horse, or the cart before the course maybe, because how can you say, how can we say what political philosophy is in advance of doing it? Anyway, let me try to say something that might be useful. In one sense, you could say political philosophy is simply a branch or what we call a subfield of the field of political science. Yes, all right. It exists alongside other areas of political inquiry like American government, comparative politics, and international relations. Yet in another sense, political philosophy is something much different than simply a subfield; it seems to be the oldest and most fundamental part of political science. Its purpose is to lay bare, as it were, the fundamental problems, the fundamental concepts and categories which frame the study of politics. In this respect it seems to me much less like just a branch of political science than the foundation of the entire discipline. The study of political philosophy often begins, as this course will do also, with the study of the great books or some of the great books of our field. Political philosophy is the oldest of the social sciences, and it can boast a wealth of heavy hitters from Plato and Aristotle to Machiavelli, Hobbes, Hegel, Tocqueville, Nietzsche, and so on. You might say that the best way to learn what political philosophy is, is simply to study and read the works of those who have shaped the field--yes, right? But to do that is, I recognize, not without dangers, often severe dangers, of its own. Why study just these thinkers and not others? Is not any so-called list of great thinkers or great texts likely to be simply arbitrary and to tell us more about what such a list excludes than what it includes? Furthermore, it would seem that the study of the great books or great thinkers of the past can easily degenerate into a kind of antiquarianism, into a sort of pedantry. We find ourselves easily intimidated by a list of famous names and end up not thinking for ourselves. Furthermore, doesn't the study of old books, often very old books, risk overlooking the issues facing us today? What can Aristotle or Hobbes tell us about the world of globalization, of terrorism, of ethnic conflict and the like? Doesn't political science make any progress? After all, economists no longer read Adam Smith. I hesitate to... I don't hesitate to say that you will never read Adam Smith in an economics course here at Yale, and it is very unlikely that you will read Freud in your psychology classes. So why then does political science, apparently uniquely among the social sciences, continue to study Aristotle, Locke and other old books? These are all real questions, and I raise them now myself because they are questions I want you to be thinking about as you do your reading and work through this course. I want you to remain alive to them throughout the semester. Yes? Okay. One reason I want to suggest that we continue to read these books is not because political science makes no progress, or because we are somehow uniquely fixated on an ancient past, but because these works provide us with the most basic questions that continue to guide our field. We continue to ask the same questions that were asked by Plato, Machiavelli, Hobbes, and others.
We may not accept their answers, and it's very likely that we do not, but their questions are often put with a kind of unrivaled clarity and insight. The fact is that there are still people in the world, many people, who regard themselves as Aristotelians, Thomists, Lockeans, Kantians; even the occasional Marxist can still be found in Ivy League universities. These doctrines have not simply been refuted, or replaced, or historically superseded; they remain in many ways constitutive of our most basic outlooks and attitudes. They are very much alive with us today, right. So political philosophy is not just some kind of strange historical appendage attached to the trunk of political science; it is constitutive of its deepest problems. If you doubt the importance of the study of political ideas for politics, consider the words of a famous economist, John Maynard Keynes--everyone's heard of him. Keynes wrote in 1935: "The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood....Practical men," Keynes continues, practical men "who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back." So this course will be devoted to the study of those "academic scribblers" who have written books that continue to impress and create the forms of authority with which we are familiar. But one thing we should not do, right, one thing we should not do is to approach these works as if they provide, somehow, answers, ready-made answers, to the problems of today. Only we can provide answers to our problems. Rather, the great works provide us, so to speak, with a repository of fundamental or permanent questions that political scientists still continue to rely on in their work. The great thinkers are great not because they've created some set of museum pieces that can be catalogued, admired, and then safely ignored like a kind of antiquities gallery in the Metropolitan Museum of Art, but rather because they have defined the problems that all later thinkers and scholars have had to use in order to make sense of their world at all. Again, we still think in terms of the basic concepts and categories that were created for us long ago. Okay? So one thing you will quickly note is that there are no permanent answers in a study of political philosophy. A famous mathematician once said, "Every question must have a correct answer, for every question one answer." That itself is an eminently contestable proposition. Among the great thinkers there is profound disagreement over the answers to even the most fundamental questions concerning justice, concerning rights, concerning liberty. In political philosophy, it is never a sufficient answer to answer a question with the statement "because Plato says so," or "because Nietzsche says so." There are no final authorities in that respect in philosophy, because even the greatest thinkers disagree profoundly with one another over their answers, and it is precisely this disagreement with one another that makes it possible for us, the readers today, to enter into their conversation. We are called upon first to read and listen, and then to judge: "Who's right?" "How do we know?"
The only way to decide is not to defer to authority, whoever's authority, but to rely on our own powers of reason and judgment, in other words the freedom of the human mind to determine for us what seems right or best. Okay? But what are these problems that I'm referring to? What are these problems that constitute the subject matter of the study of politics? What are the questions that political scientists try to answer? Such a list may be long, but not infinitely so. Among the oldest and still most fundamental questions are: what is justice? What are the goals of a decent society? How should a citizen be educated? Why should I obey the law, and what are the limits, if any, to my obligation? What constitutes the ground of human dignity? Is it freedom? Is it virtue? Is it love, is it friendship? And of course, the all important question, even though political philosophers and political scientists rarely pronounce it, namely, quid sit deus, what is God? Does he exist? And what does that imply for our obligations as human beings and citizens? Those are some of the most basic and fundamental problems of the study of politics, but you might say, where does one enter this debate? Which questions and which thinkers should one pick up for oneself? Perhaps the oldest and most fundamental question that I wish to examine in the course of this semester is the question: what is a regime? What are regimes? What are regime politics? The term "regime" is a familiar one. We often hear today about shaping regimes or about changing regimes, but what is a regime? How many kinds are there? How are they defined? What holds them together, and what causes them to fall apart? Is there a single best regime? Those are the questions I want us to consider. The concept of the regime is perhaps the oldest and most fundamental of political ideas. It goes back to Plato and even before him. In fact, the title of the book that you will be reading part of for this semester, Plato's Republic, is actually a translation of the Greek word politeia, which means constitution or regime. The Republic is a book about the regime and all later political philosophy is a series of footnotes to Plato, and that means that it must provide a series of variations, so to speak, on Plato's conception of the best regime. But what is a regime? Broadly speaking, a regime indicates a form of government, whether it is ruled by the one, a few, the many, or as is more common, some mixture, a combination of these three ruling powers. The regime is defined in the first instance by how people are governed and how public offices are distributed by election, by birth, by lot, by outstanding personal qualities and achievements, and what constitutes a people's rights and responsibilities. The regime again refers above all to a form of government. The political world does not present itself as simply an infinite variety of different shapes. It is structured and ordered into a few basic regime types. This I take to be one of the most important propositions and insights of political science. Right? So far? But there is a corollary to this insight. The regime is always something particular. It stands in a relation of opposition to other regime types, and as a consequence the possibility of conflict, of tension, and war is built into the very structure of politics.
Regimes are necessarily partisan, that is to say they instill certain loyalties and passions in the same way that one may feel partisanship to the New York Yankees or the Boston Red Sox, or to Yale over all rival colleges and institutions, right? Fierce loyalty, partisanship: it is inseparable from the character of regime politics. These passionate attachments are not merely something that take place, you might say, between different regimes, but even within them, as different parties and groups with loyalties and attachments contend for power, for honor, and for interest. Henry Adams once cynically reflected that politics is simply the "organization of hatreds," and there is more than a grain of truth to this, right, although he did not say that it was also an attempt to channel and redirect those hatreds and animosities towards something like a common good. This raises the question whether it is possible to transform politics, to replace enmity and factional conflict with friendship, to replace conflict with harmony. Today it is the hope of many people, both here and abroad, that we might even overcome, might even transcend the basic structure of regime politics altogether and organize our world around global norms of justice and international law. Is such a thing possible? It can't be ruled out, but such a world, I would note--let's just say a world administered by international courts of law, by judges and judicial tribunals--would no longer be a political world. Politics only takes place within the context of the particular. It is only possible within the structure of the regime itself. But a regime is more than simply a set of formal structures and institutions, okay? It consists of the entire way of life, the moral and religious practices, the habits, customs, and sentiments that make a people what they are. The regime constitutes an ethos, that is to say a distinctive character, that nurtures distinctive human types. Every regime shapes a common character, a common character type with distinctive traits and qualities. So the study of regime politics is in part a study of the distinctive national character types that constitute a citizen body. To take an example of what I mean, when Tocqueville studied the American regime or the democratic regime, properly speaking, in Democracy in America, he started first with our formal political institutions as enumerated in the Constitution, such things as the separation of powers, the division between state and federal government and so on, but then went on to look at such informal practices as American manners and morals, our tendency to form small civic associations, our peculiar moralism and religious life, our defensiveness about democracy and so on. All of these intellectual and moral customs and habits helped to constitute the democratic regime. In this sense, the regime describes the character or tone of a society, what a society finds most praiseworthy, what it looks up to, okay? You can't understand a regime unless you understand, so to speak, what it stands for, what a people stand for, what they look up to, as well as, again, its structure of institutions and rights and privileges. This raises a further set of questions that we will consider over the term. How are regimes founded, the founding of regimes? What brings them into being and sustains them over time?
For thinkers like Tocqueville, for example, regimes are embedded in the deep structures of human history that have determined over long centuries the shape of our political institutions and the way we think about them. Yet other voices within the tradition--Plato, Machiavelli, Rousseau come to mind--believed that regimes can be self-consciously founded through deliberate acts of great statesmen or founding fathers as we might call them. These statesmen--Machiavelli for example refers to Romulus, Moses, Cyrus, as the founders that he looks to; we might think of men like Washington, Jefferson, Adams and the like--are shapers of peoples and institutions. The very first of the Federalist Papers by Alexander Hamilton even begins by posing this question in the starkest terms. "It has been frequently remarked," Hamilton writes, "that it seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force." There we see Hamilton asking the basic question about the founding of political institutions: are they created, as he puts it, by "reflection and choice," that is to say by a deliberate act of statecraft and conscious human intelligence, or are regimes always the product of accident, circumstance, custom, and history? But the idea that regimes may be created or founded by a set of deliberate acts raises a further question that we will study, and is inseparable from the study of regimes. N'est-ce pas? Who is a statesman? What is a statesman? Again, one of the oldest questions of political science, very rarely asked by the political science of today that is very skeptical of the language of statesmanship. In its oldest sense, political science simply was a science of statecraft. It was addressed to statesmen or potential statesmen charged with steering the ship of state. What are the qualities necessary for sound statesmanship? How does statecraft differ from other kinds of activities? Must a good statesman, as Plato believed for example, be a philosopher versed in poetry, mathematics, and metaphysics? Or is statesmanship, as Aristotle believed, a purely practical skill requiring judgment based on deliberation and experience? Is a streak of cruelty and a willingness to act immorally necessary for statecraft, as Machiavelli infamously argued? Must the statesman be capable of literally transforming human nature, as Rousseau maintains, or is the sovereign a more or less faceless bureaucrat in the manner of a modern CEO, as, for example, someone like Hobbes seems to have believed? All of our texts that we will read--the Republic, the Politics, the Prince, the Social Contract--have different views on the qualities of statecraft, and it is those qualities necessary to found and maintain states that we will be considering. All of this, in a way, is another way of saying, or at least implying, okay, that political philosophy is an eminently practical discipline, a practical field. Its purpose is not simply contemplation, its purpose is not reflection alone: it is advice giving. None of the people we will study this semester were cloistered scholars detached from the world, although this is a very common prejudice against political philosophy, that it is somehow uniquely sort of "pie in the sky" and detached from the world.
But the great thinkers were very far from being just, so to speak, detached intellectuals. Plato undertook three long and dangerous voyages to Sicily in order to advise King Dionysius. Aristotle famously was a tutor of Alexander the Great. Machiavelli spent a large part of his career in the foreign service of his native Florence, and wrote as an advisor to the Medici. Hobbes was the tutor to a royal household who followed the King into exile during the English Civil War. And Locke was associated with the Shaftesbury circle and was also forced into exile after being accused of plotting against the English King. Rousseau had no official political connections, but he signed his name always Jean-Jacques Rousseau, "citizen of Geneva," and was approached to write constitutions for Poland and for the island of Corsica. And Tocqueville was a member of the French National Assembly whose experience of American democracy deeply affected the way he saw the future of Europe. So the great political thinkers were typically engaged in the politics of their times and helped in that way to provide us, okay, with models for how we might think about ours. But this goes in a slightly different direction as well. Not only is this study of the regime, as we've seen, as I've just tried to indicate, rooted in, in many ways, the practical experience of the thinkers we'll be looking at; but the study of regime politics either implicitly or explicitly raises a question that goes beyond the boundary of any given society. A regime, as I've said, constitutes a people's way of life, what they believe makes their life worth living, or to put it again slightly differently, what a people stand for. Although we are most familiar with the character of a modern democratic regime such as ours, the study of political philosophy is in many ways a kind of immersion into what we might call today comparative politics; that is to say it opens up to us the variety of regimes, each with its own distinctive set of claims or principles, each vying and potentially in conflict with all the others, okay? Underlying this cacophony of regimes is the question always, which of these regimes is best? What has or ought to have a claim on our loyalty and rational consent? Political philosophy is always guided by the question of the best regime. But what is the best regime? Even to raise such a question seems to pose insuperable obstacles. Isn't that a completely subjective judgment, what one thinks is the best regime? How could one begin such a study? Is the best regime, as the ancients tended to believe, Plato, Aristotle, and others, is it an aristocratic republic in which only the few best habitually rule; or is the best regime, as the moderns believe, a democratic republic where in principle political office is open to all by virtue of their membership in society alone? Will the best regime be a small closed society that through generations has made a supreme sacrifice towards self-perfection? Think of that. Or will the best regime be a large cosmopolitan order embracing all human beings, perhaps even a kind of universal League of Nations consisting of all free and equal men and women? Whatever form the best regime takes, however, it will always favor a certain kind of human being with a certain set of character traits. Is that type the common man, as is found in democracies; those of acquired taste and money, as in aristocracies; the warrior; or even the priest, as in theocracies? No question that I can think of can be more fundamental.
And this finally raises the question of the relation between the best regime or the good regime, and what we could say are actually existing regimes, regimes that we are all familiar with. What function does the best regime play in political science? How does it guide our actions here and now? This issue received a kind of classic formulation in Aristotle's distinction between what he called the good human being and the good citizen. For the good citizen--we'll read this chapter later on in the Politics--for the good citizen you could say patriotism is enough, to uphold and defend the laws of your own country simply because they are your own is both necessary and sufficient. Such a view of citizen virtue runs into the obvious objection that the good citizen of one regime will be at odds with the good citizen of another: a good citizen of contemporary Iran will not be the same as the good citizen of contemporary America. But the good citizen, Aristotle goes on to say, is not the same as the good human being, right? Where the good citizen is relative to the regime, you might say regime-specific, the good human being, so he believes, is good everywhere. The good human being loves what is good simply, not because it is his own, but because it is good. Some sense of this was demonstrated in Abraham Lincoln's judgment about Henry Clay, an early idol of Lincoln's. Lincoln wrote of Clay, "He loved his country," he said, "partly because it was his own country"--partly because it was his own country--"but mainly because it was a free country." His point, I think, is that Clay exhibited, at least on Lincoln's telling, something of the philosopher, what he loved was an idea, the idea of freedom. That idea was not the property of one particular country, but it was constitutive of any good society. The good human being, it would seem, would be a philosopher, or at least would have something philosophical about him or her, and may only be fully at home in the best regime. But of course the best regime lacks actuality. We all know that. It has never existed. The best regime embodies a supreme paradox, it would seem. It is superior in some ways to all actual regimes, but it has no concrete existence anywhere. This makes it difficult, you could say--and this is Aristotle's point, I think--for the philosopher to be a good citizen of any actual regime. Philosophy will never feel fully or truly at home in any particular society. The philosopher can never be truly loyal to anyone or anything but what is best. Think of that: it raises a question about issues of love, loyalty, and friendship. This tension, of course, between the best regime and any actual regime is the space that makes political philosophy possible. In the best regime, if we were to inhabit such, political philosophy would be unnecessary or redundant. It would wither away. Political philosophy exists and only exists in that... call it "zone of indeterminacy" between the "is" and the "ought," between the actual and the ideal. This is why political philosophy is always and necessarily a potentially disturbing undertaking. Those who embark on the quest for knowledge of the best regime may not return the same people that they were before. You may return with very different loyalties and allegiances than you had in the beginning. But there is some compensation for this, I think. The ancients had a beautiful word, or at least the Greeks had a beautiful word, for this quest, for this desire for knowledge of the best regime.
They called it eros, or love, right? The quest for knowledge of the best regime must necessarily be accompanied, sustained, and elevated by eros. You may not have realized it when you walked in to this class today, but the study of political philosophy may be the highest tribute we pay to love. Think of that. And while you're thinking about it you can start reading Plato's Apology for Socrates which we will discuss for class on Wednesday. Okay? It's nice to see you back, and have a very good but thoughtful September 11th.
Introduction_to_Political_Philosophy_with_Steven_B_Smith
14_The_Sovereign_State_Hobbes_Leviathan.txt
Professor Steven Smith: Okay, good morning. I'm going to show another movie today but not until a little bit later in the class. We'll get it. We'll get there. Don't worry! It doesn't fit until the last part of the class. But today, I want to talk about sovereignty. There are two great concepts that come out of Hobbes that you have to remember. One is the state of nature and the other is sovereignty. I spoke a bit about the first one yesterday or Monday rather. Today, I want to talk about Hobbes' theory of the sovereign state, the creation of the sovereign. Hobbes refers to the sovereign as a mortal god, as his answer to the problems of the state of nature, the state, the condition of life being solitary, poor, nasty, brutish and short. And it is only the creation of the sovereign for Hobbes, endowed or possessed with absolute power, that is sufficient to put an end to the condition of perpetual uncertainty, anxiety and unrest that is the case of the natural condition. Let me talk for a while about some of the formal features of Hobbes' sovereign power, of the Hobbesian state. In the first place, what I want to impress upon you is that the sovereign is for Hobbes less a person than it is or he is an office. The sovereign is described by Hobbes as an artificial person by which he means the sovereign is the creation of the contract or the covenant that brought this office into being. The sovereign does not exist by nature but rather, Hobbes tells us again, sovereignty is the product of art or science. It is the product, the creation, of the people, or, in Jeffersonian language, the product of the consent of the governed. The sovereign, and again this is crucial, is for Hobbes the representative of the people. He is the sovereign representative. It is the people who endow the sovereign with the authority to represent them on their behalf. And, in that respect, Hobbes' sovereign has many of the features or characteristics that we come to associate with what we call modern executive power or executive authority. When Louis XIV of France famously said L'état, c'est moi, "I am the state," he was expressing a peculiarly pre-modern conception of the state; that is to say, he regarded the state as in some ways his personal property. "I am the state. The state am I." But this is very different from Hobbes' sovereign. The state for Hobbes is not the possession of the sovereign; the sovereign does not own the state. He is appointed or authorized to secure for the people the, in many ways, limited ends of peace and security. He has much the same function and to some degree much of the same personality as what we would call a modern day CEO, that is to say there is a kind of anonymity and impersonality about the sovereign. I mean, unless you're in the Yale entrepreneurial society, who can name the CEOs of many companies? And the answer is you probably can't. They are for the most part relatively anonymous individuals unless, you know, they get into trouble like Ken Lay or someone like that or do something amazing like Bill Gates. For the most part, they are rather impersonal and anonymous and that is in many ways the characteristic of Hobbes' sovereign. Hobbes' theory of the sovereign, interestingly, contains within itself elements of both secular absolutism and, in some ways, modern liberalism and it is the tension between these two that I want to bring out in my discussion here. The power of the sovereign, Hobbes continually insists, must be unlimited.
Yet, at the same time, he tells us that the sovereign is the creation of the people whom he represents or it represents. Although Hobbes is widely taken to be a defender of monarchical absolutism, you will note, in your readings, that he displays a kind of studied neutrality over actually what form the sovereign should take. He only insists that sovereign power remain absolute and undivided whether it belongs to a single person, a few, or the many. And among the powers that the sovereign, he insists, must control are, for example, laws concerning property, the right of declaring war and peace, what we would call foreign policy, rules of justice concerning life and death, which is to say criminal law, and, of course, the right to determine what books and ideas are permissible, that is to say the right of censorship. In a sense, the core of Hobbes' theory of sovereignty can be boiled down to the statement that the sovereign and only the sovereign is the source of law. The law is what the sovereign says it is. Does that sound in any way familiar from what we have read this term? Anyone? Sound familiar? Thrasymachus? Do you remember that name, Book I of the Republic? Justice is what the stronger say it is. Hobbes tells us that the law is what the sovereign commands. This is sometimes known as the doctrine of legal positivism, which is to say that law is the command of the sovereign, a sort of command theory of law. And, again, that seems to point back to Thrasymachus' point of view in the first book of the Republic. There is for Hobbes, as for Thrasymachus, no higher court of appeal than the will or the word of the sovereign, no transcendent law, no divine law, no source of authority outside sovereign command. And the sovereign is appointed for Hobbes to be much like an umpire in a baseball or a football game, to set the rules of the game. But the Hobbesian sovereign, unlike umpires, is not just the enforcer of the rules or the interpreter of the rules; the sovereign is also the creator, the shaper and maker of the rules. And Hobbes draws from this the startling conclusion, in many ways the infamous conclusion, that the sovereign can never act unjustly. The sovereign can never act unjustly, why? Because the sovereign is the source of law and the sovereign is the source of the rules of justice. Therefore, Hobbes concludes, he can never act unjustly. And he supports this claim with a deeply perverse and, I have to say, amusing reading of a biblical story, do you remember this? He refers to the story of David and Uriah. Everybody will remember that story from Sunday school or from Hebrew school or whatever. Does anyone remember that David was the king at that time? He was the king of Israel and he coveted Uriah's wife Bathsheba. He wanted to sleep with Bathsheba, so what did he do? He had Uriah killed so he could sleep with her. And Hobbes reasons from this story that while David's action may have sinned against God, he did no injustice to Uriah, imagine that. I think Uriah might have had a different point of view about this. He did no injustice to Uriah because, as the lawful sovereign, it was not just that he could do anything he liked, but that whatever he did set the rules of the law. And when Hobbes tells that story, which he mentions a couple of times in the book, one can only imagine he must have had a kind of wry grin on his face when he wrote that out.
In fact, next semester I'm teaching an entire course devoted to Hobbes' critique of religion in which this will, among other things, figure prominently. But Hobbes' teaching about law is, in some ways, less Draconian than it might first appear. He makes clear that law is what the sovereign says it is. There can be no such thing as an unjust law, he infers, again, because the sovereign is the source of all justice. But he does distinguish, he tells us, between a just law and a good law. All laws are by definition just, he tells us, but it doesn't follow that all laws are by definition good. "A good law," he says in chapter 30, "is that which is needful for the good of the people." A good law is needful for the good of the people. But then one asks, what are the criteria by which we determine the good of the people? How is this determined? And Hobbes makes clear that the sovereign is not invested with the authority to exercise a kind of absolute control over everything that people do. The purpose of law, Hobbes tells us, is not so much to control but to facilitate. Consider just the following passage from chapter 30, section 21. Hobbes writes: "For the use of laws, which are but rules authorized," he says, "is not to bind the people from all voluntary actions. It is not to bind them from voluntary actions but to direct and keep them in such motion as not to hurt themselves by their own impetuous desires, rashness or indiscretion as hedges are set not to stop travelers but to keep them on their way." This is the force or purpose of law: to set rules, to keep people, as he puts it, on their way. A law that is intended simply to constrain and control for its own sake, Hobbes says, cannot be a good law. The purpose of a good law is to facilitate human agency in some ways. And I think, again, that too is central to Hobbes' theory of the sovereign. Its purpose is to facilitate, not simply to control and inhibit. But the power to control, the power of law, for Hobbes also very much applies--and here is one of his most controversial doctrines--to matters of opinion, to what we would call today First Amendment issues. This is something that Hobbes insists upon. "For the actions of men," he says, "proceed from their opinions. Actions proceed from opinions. And in the well governing of opinions consisteth the well governing of men's actions." So, if we are going to govern or regulate human behavior, we have to begin by regulating opinion. And it follows from this, Hobbes believes, that the sovereign has the right to decide what opinions, what books, what ideas are conducive to peace and which ones aim simply to stir up war and discontent. And these comments of Hobbes' about the sovereign's power to control opinions are directed at two principal institutions, the Church and, guess what the other one is, the university. Both of these Hobbes considers to be loci, centers of seditious opinion, that must remain under sovereign control. By the churches, Hobbes is speaking of the reformed church but, in particular, he is concerned with those radical puritan sects of the type that later came and founded America, these radical sects that elevate matters of conscience and private belief over and above the law, that is to say, arrogating to themselves, to the rights of conscience and private belief, the power to judge the sovereign.
It was these dissenting Protestants, it was these dissenting sects, that formed the rank and file of Cromwell's armies during the Civil War in England. They formed the rank and file of the republican armies in England against the rule of the king. And Hobbes tells us he would banish all doctrines that profess to make the individual or the sect, more importantly in some ways the sect, the judge of the sovereign. It is only in the state of nature, he tells us, that individuals have the right to determine just and unjust, right and wrong for themselves. Once we enter society, once we engage or conclude the social compact, we transfer our power to do this to the sovereign to determine these matters for us. And just as important as the radical churches and the reformed sects is for Hobbes the university and its curriculum. In particular, Hobbes faults the universities for teaching the radical doctrines of Aristotelianism in the seventeenth century. Aristotle in this period was the source of modern republican ideas, ideas about self-government, ideas about in some ways what we might call direct democracy or participatory democracy, the view that the only legitimate form of government is one where, as Aristotle says, citizens take turns ruling and being ruled in turn. It was, above all, the influence of the classics, Aristotle and Cicero in particular, that Hobbes regards as an important cause of the recent civil war and the regicide of Charles I. Consider the following passage that he writes: "As to rebellion against monarchy, one of the most frequent causes is the reading of the books of policy and history of the ancient Greeks and Romans." Reading of those books leads people to rebel against monarchy, "for which young men like yourselves," he says, or young women too, "for which young men and all others that are unprovided by the antidote of solid reason"--that is, who are susceptible to reading these stories and these books--"receive a strong and delightful impression of the great exploits of war." "From reading of such books," Hobbes continues, "men have undertaken to kill their kings because the Greek and Latin writers in their books and discourses of policy make it lawful and laudable for any man to do so provided before he do it he call him a tyrant." That's what you learn, Hobbes believes, from the reading of Aristotle and the Greeks and Romans: regicide, that the only legitimate form of government is a republic, and that it is lawful and even your duty to kill your king. Of course, before doing so, he says, "you must call him first a tyrant." It's a wonderful passage. And this is so interesting, I think, not only because of its humor and Hobbes' in many ways characteristic exaggerations, but because it shows how much emphasis Hobbes puts on the reform of opinion, the reform of ideas. In many ways, like Machiavelli and like Plato too before him, Hobbes regards himself as an educator of princes, an educator and a transformer, a reformer of ideas. There is a kind of internal irony here I think because Hobbes sometimes writes as if, as we've seen, as if human beings are nothing more than complex machines that mechanically obey the laws of attraction and repulsion. But he also obviously writes that we are beings with will and purpose who are uniquely guided by opinions, ideas, and doctrines and it is in many ways the first business of the sovereign to act as a moral reformer of ideas.
Hobbes realizes this is a difficult and uphill task that he has set for himself. And, in a rare moment of sort of personal self-reflection or self-reference, he notes somewhat drolly that the novelty of his ideas will make it difficult for them to find an audience. "I am at the point of believing," he says, "that my labor will be as useless as the commonwealth of Plato," he says in a moment of sort of uncharacteristic despair, "will be as useless as the commonwealth of Plato." "For Plato," he says, "also is of the opinion that it is impossible for the disorders of the state ever to be taken away until sovereigns be philosophers." And while, in many ways, initially despairing of the possibility of finding a sort of friendly reception or audience for his work, Hobbes then goes on, on a more optimistic note, to observe that his book is considerably simpler and easier to read than Plato's. Again, you might have a discussion about that over which is the easier one. But, Hobbes believes it is simpler and easier and therefore more likely to catch the ear of a sympathetic prince. "I recover some hope," he says. "I recover some hope that one time or other this writing of mine may fall into the hands of a sovereign who will consider it for himself, for it is short and I think clear." Well, we might question that. He says it's a short book and "I think clear" he writes. Well, it's complex and long. But nevertheless, perhaps hoping that his advertising it in this way will gain the ear of a sovereign, and that "without the help," he continues, "of any interested or envious interpreter and by the exercise of entire sovereignty in protecting the public teaching of it convert this truth of speculation into the utility of practice," the very end of chapter 31, "will convert this truth of speculation into the utility of practice." So, Hobbes clearly believes or thinks that this will be a useful book for a sovereign to read and hopes it will gain the ear of a sympathetic sovereign or potential sovereign. Hobbes may, I think, overestimate or maybe I really should say underestimate the difficulty of the book but he returns to this again at the very end of Leviathan. "The universities," he says there, where he talks again a little bit about the audience for the book, "the universities are the fountains of civil and moral doctrine." The universities are the fountains of civil and moral doctrine and have the obligation to teach the correct doctrine of rights and duties. And this means for Hobbes, first of all, adopting his book as the authoritative teaching on moral and political doctrine in the universities. This should be the required textbook of political science, of political teaching, in the universities, to replace the older textbook, i.e., Aristotle's Politics. "Therefore," he says, "I think it may be profitably printed and more profitably taught in the universities," he confidently asserts. "The ideal audience for the book," he says, "should be the preachers, the gentry, the lawyers, men of affairs, who drawing such water as they find from the book can use it to sprinkle the same both from the pulpit and from their conversation upon the people." This is how he sees it, that it should be taught from the pulpit, it should be taught from the universities, and from this conversation it will be sprinkled upon the people. Hobbes' hope, like that of all the great political philosophers, was to be a kind of legislator for mankind. This again is a book with epic, epic ambition.
Let me mention, I've emphasized in many ways the absolutist and authoritarian side of Hobbes' teaching. Let me talk about something that might sound oxymoronic. Let me call it for the moment Hobbesian liberalism. Hobbes enjoys describing the sovereign in the most absolute and extreme terms. The sovereign is to have supreme command over life and death, war and peace, what is to be taught and heard. And yet, in many ways, this Hobbesian sovereign aims to allow for ample room for individual liberty. And he even sets some limits on the legitimate use of sovereign power. For all of his tough talk, Hobbes takes justice and the rule of law very seriously, far more seriously than, for example, does Machiavelli. At one point in the book he maintains that a person cannot be made to accuse themselves without the assurance of pardon. You can't be forced to accuse yourself, what we could call the Fifth Amendment. You cannot be forced to accuse yourself. Similarly, he says, a wife or a parent cannot be coerced to accuse a loved one. And, in a similar point, he maintains that punishment can never be used as an instrument of revenge but only for what he calls the correction or what we would call the rehabilitation of the offender. Add to the above Hobbes' repeated insistence that law serve as an instrument for achieving social equality. In a chapter called "Of the Office of the Sovereign Representative," Hobbes argues that justice be equally administered to all classes of people, rich as well as poor, equal application of justice. He maintains further that titles of nobility are of value only for the benefits they confer on those of lesser rank or they're not useful at all. Equal justice, he tells us, requires equal taxation policy and he seems to be proposing a kind of consumption tax so that the rich, who consume more, will have to pay their fair share. And he argues that indigent citizens, who are unable to provide for themselves, should not be forced to rely simply upon the private charity of individuals but should be maintained at public expense. He seems, in this way, to anticipate what we might think of as the modern welfare state: that public assistance be provided, and the poor not simply depend on the private goodwill of others. But most importantly, I think, is to go back to the importance given to the individual in Hobbes' philosophy. Hobbes derives the very power of the sovereign from the natural right of each individual to do as they like in the state of nature. And it follows, I think, that the purpose of the sovereign is really to safeguard the natural right of each individual but to regulate this right so that it becomes consistent with the right of others and not simply again a kind of open war against all. What is significant about this, I think, is the priority that Hobbes gives to rights over duties. This, in many ways arguably, makes him the founding father or maybe we should say godfather of modern liberalism: the importance given to rights over duties, of the individual over in many ways the collective or common good. And I think this is expressed in Hobbes' novel and in many ways altogether unprecedented teaching about liberty in chapter 21, a very famous and important chapter. And here he distinguishes what he doesn't exactly call, but what I'll call, the liberty of the ancients and the liberty of the moderns. The ancients, he believes, operated with a defective understanding of human freedom.
For the ancients, liberty meant living in a self-governing republic, living in a republic in which everyone again took some share in the ruling offices. Liberty, in other words, for the ancients was not just a property of the individual. It was an attribute of the regime of which one was a member. "The Athenians and the Romans," he says, "were free, that is they were free commonwealths, not that any particular man had the liberty to resist his own representative but that his representative had the liberty to resist or invade other people." In other words, liberty for the ancients was a collective good, the liberty, as he says, to resist or invade other people. It was a property of the commonwealth not of the individuals who inhabited it. But that sense of collective liberty, the freedom to resist or invade is, in fact, even opposed to the modern idea of liberty that Hobbes proposes. And by liberty Hobbes means something that sounds very familiar to us. Liberty means the absence of constraints or impediments to action. We are free to the extent that we can act in an unimpeded manner. And it follows for him that political liberty means the freedom to act where the law is silent, as he says. Think of that, that where the law is silent, we have the freedom to do or not to do as we choose, very important to the way we think of liberty today in a modern and you might say liberal democracy. Hobbes' sovereign is more likely to allow citizens a zone of private liberty where they are free to act as they choose than in the classical republic where there is a kind of coerced participation in collective affairs or in political deliberation. And Hobbes here takes a dig at the defenders of the view, in his own day, that only the citizens of a republic can be free. "There is written," he says, "on the turrets of the city of Lucca…" and let me just ask before I continue this passage, anybody here in Pearson College? So, you will know the Dean Mr. Amerigo, yeah, your dean? Your dean is from the city of Lucca. Ask him if this is true when you see him. "There is written on the turrets of the city of Lucca in great characters," meaning great letters, "at this day the word libertas"--libertas is written on the walls of the turrets of the city of Lucca. Let's find out if that's still true. "Yet, no man," Hobbes continues, "can thence infer that a particular man has more liberty or immunity from the service of the commonwealth there than in Constantinople," the city of the Caliphs, the Caliphate. Living in a republic alone doesn't guarantee you more freedom. In that interesting passage he says freedom requires, as he puts it, immunity, "immunity from service." A regime is to be judged for Hobbes on how much private liberty, how much immunity it grants each of its citizens, an idea of individual liberty in many ways unknown and unprecedented before the modern world. And, in this respect, one can say that Hobbes has some connection to the creation of what we think of as the modern liberal state with its conception of private freedom as immunity from coerced or forced participation in politics, very different from the ancients. So what does this all mean? Let me talk about what Hobbes has to say for us today, we who have in many ways become Hobbes' children. Hobbes gives us the definitive language of the modern state. Yet, he remains in many ways as contested for us as he was in his own time.
For many today, Hobbes' conception of the Leviathan state is synonymous with anti-liberal absolutism. And yet for others, he opened the door to John Locke and the liberal theory of government. He taught the priority of rights over duties and he argued that the sovereign should serve the lowly interest or the lowly ends of providing peace and security, leaving it to individuals to determine for themselves how best to live their lives. Nonetheless, the liberty that subjects enjoy in Hobbes' plan falls in that area that he says the sovereign omits to regulate. Hobbes does not praise vigilance in defense of liberty and he denounces all efforts to resist the government. One could say Hobbes is, at best, a kind of part-time liberal. But Hobbes is best when he is providing us with, in many ways, the moral and psychological language in which we think about government and the state. The state is a product of a psychological struggle between the contending passions of pride and fear. Fear, you will remember, is associated with the desire for security, order, rationality, and peace. Pride is connected with the love of glory, honor, recognition and ambition. All the goods of civilization, Hobbes tells us, stem from our ability to control pride. The very title of the book comes from this wonderful biblical passage from Job where Leviathan is described as king of the children of pride. And the 19 laws of nature that Hobbes develops in his book really are there simply to enumerate or instruct us about the virtues of sociability and civility, especially directed against the sin of pride or hubris. So, the modern state, as we know it and still have it, in many ways grew out of the Hobbesian desire for security and the fear of death, a security that can only be achieved at the expense of the desire for honor and glory. The Hobbesian state was intended to secure the conditions of life, even a highly civilized and cultivated life, but one calculated in terms of self-interest and risk avoidance. Hobbes wants us to be fearful and to avoid dangerous courses of action that are inflamed by beliefs in honor, ambition, and the like. The Hobbesian fearful man is not likely to become someone who risks life for liberty, for honor, or for a cause. He's more likely to be someone who plays by the rules, avoids dangers, and bets on the sure thing. The Hobbesian citizen is not likely to be a risk taker, like a George Washington or an Andrew Carnegie. He is more likely to think like an actuary or a CPA or an insurance agent, always calculating the odds and finding ways to cover the damages. Later political theorists, like Jean-Jacques Rousseau and Nietzsche, would even develop a word for Hobbesian man. They would call him somewhat contemptuously the bourgeois. But nevertheless, Hobbes was remarkably successful in converting us to his point of view. The type of individual he tried to create, careful, self-interested, risk averse, this has become the dominant ethos of our civilization, has it not? We even have entire disciplines like economics and psychology and I dare even say modern political science that reinforce this view of human nature. We have all become, whether we choose to admit it or not, Hobbesians. And yet at the same time, and here is the paradox I think, even a Hobbesian society cannot entirely exist without some individuals who are willing to risk life and limb either for the sake of honor, for self-respect or even just from the sheer joy that comes from risk itself. Remember my example on Monday of Ralph Esposito.
Why do people become firemen, policemen, soldiers, freedom fighters, all activities that cannot be explained in terms of self-interest alone? Will not even a Hobbesian society again require fire departments? And where will such people come from, if they all follow the psychology of fear and self-interest that Hobbes wants to instill in us? Hobbes regarded these passions, what Plato called by the word thumos, as in many ways barbaric, as uncivilized and warlike, and to some degree he was right. But the Hobbesian state, Hobbes admits himself, lives in the midst of a Hobbesian world; that is to say, the world of international relations is for Hobbes simply the state of nature at large. The Hobbesian state will always exist in a world of hostile other states, unregulated by some kind of higher law. States stand to one another on the world stage as individuals do in the condition of nature; that is to say, potential enemies with no higher authority by which to adjudicate their conflicts. And in such a world, even a sovereign state will be endangered either from other states or from groups and individuals devoted to terror and destruction. Think of September 11, 2001. This is a problem that a profound political scientist by the name of Pierre Hassner, a French student of international politics, has described as the dialectic of the bourgeois and the barbarian, a struggle that is to say between the modern Hobbesian state with its largely pacified and satisfied citizen bodies and those pre-modern states or maybe in some ways even post-modern states that are prepared to use the instruments of violence, terror and suicide bombings to achieve their goals. A Hobbesian state, paradoxically, still requires from its citizens men and women prepared to fight, to risk everything, in the defense of their way of life. The paradox is that the Hobbesian bourgeois cannot entirely dispense with the barbarian, even in its own midst. Can Hobbes explain this paradox? He seems to avoid it. This problem has been brought out I think brilliantly in a recent book by a man named James Bowman, a book called Honor: A History. He wrote a history of honor. And here he points out that while affairs of honor, as they are quaintly called, have largely disappeared from advanced societies, honor still remains a consuming passion in many parts of the world today, including, for him most importantly, the Middle East. Honor, in most societies, is thought to be not merely a personal quality, something like medieval chivalry, but is above all group honor, the honor that surrounds the family, the extended clan, or the religious sect. An assault on one is an assault on all. This helps us to explain, for instance, why in so many cultures the concept of saving face is so important, even if to most modern Americans it seems relatively trivial. And one reason, Bowman believes, that we have such a difficult time in understanding other peoples and other cultures is that the very idea of defending one's honor has largely been devalued in the modern West. We tend to look at human behavior as a matter of providing rational incentives for human action while most people, in fact, are driven by a need for esteem and a desire to avoid humiliation. I remember, for example, during the Vietnam War when Richard Nixon spoke about achieving peace with honor, and this was largely mocked as a kind of ludicrous idea.
Honor to so many of us sounds quaint, like an honor code or the Boy Scouts' code or something like that, or something primitive, some kind of primitive ethic which we therefore don't really understand. We don't often see that it was in large part Hobbes' effort to discredit this kind of warrior virtue, this virtue of honor that is so much a part of other cultures, that is also responsible for our current blindness. And that brings me to my final point: our Hobbesian civilization conceals from us a very uncomfortable truth. The peace, security, and safety, what we might call the bourgeois freedoms that we enjoy, rest on the fact, on the uncomfortable fact, that there are still people who are willing to risk their lives for the sake of higher goals like honor or duty. Is it irrational for them to do so? Hobbes would believe it is. I think he would say yes. It doesn't make sense from a purely Hobbesian point of view that encourages us to think like rational actors interested mainly in safety and beating the odds. Hobbes, in many ways, finds himself in the position of the young military lawyer played in the following movie clip I'm going to show you. Professor Steven Smith: Okay, is the point made? The point is made. Then I will not even provide any further commentary. I only apologize that for some reason I couldn't get the picture.
MIT_18650_Statistics_for_Applications_Fall_2016
21_Generalized_Linear_Models.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: This chapter is a natural capstone for this entire course. We'll see some of the things we've seen during maximum likelihood and some of the things we've seen during linear regression, some of the things we've seen in terms of the basic modeling that we've had before. We're not going to go back much to inference questions. It's really going to be about modeling. And in a way, generalized linear models, as the word says, are just a generalization of linear models. And they're actually extremely useful. They're often forgotten about and people just jump onto machine learning and sophisticated techniques. But those things do the job quite well. So let's see in what sense they are a generalization of the linear models. So remember, the linear model looked like this. We said that y was equal to x transpose beta plus epsilon, right? That was our linear regression model. And it's-- another way to say this is that if-- and let's assume that those were, say, Gaussian with mean 0 and identity covariance matrix. Then another way to say this is that the conditional distribution of y given x is equal to-- sorry, is a Gaussian with mean x transpose beta and variance-- well, we had a sigma squared, which I will forget as usual-- x transpose beta and then sigma squared. OK, so here, we just assumed that-- so what regression is, is just saying I'm trying to explain y as a function of x. Given x, I'm assuming a distribution for the y. And this x is just going to be here to help me model what the mean of this Gaussian is, right? I mean, I could have something crazy. I could have something that looks like y given x is N of x transpose beta. And then this could be some other thing which looks like, I don't know, some x transpose gamma squared times, I don't know, x x transpose plus identity-- some crazy thing that depends on x here, right? And we deliberately assumed that everything that depends on x shows up in the mean, OK? And so what I have here is that y given x is a Gaussian with a mean that depends on x and covariance matrix sigma squared identity. Now the linear model assumed a very specific form for the mean. It said I want the mean to be equal to x transpose beta which, remember, was the sum from, say, j equals 1 to p of beta j xj, right, where the xj's are the coordinates of x. But I could do something also more complicated, right? I could instead replace this by, I don't know, the sum of beta j log of x to the j divided by x to the j squared or something like this, right? I could do this as well. So there's two things that we have assumed. The first one is that when I look at the conditional distribution of y given x, x affects only the mean. I also assume that it was Gaussian and that it affects only the mean. And the mean is affected in a very specific way, which is linear in x, right? So these are essentially the things we're going to try to relax. So the first thing that we assume, the fact that y was Gaussian and had only its mean dependent on x, is what's called the random component. It just says that the response variables, you know, it sort of makes sense to assume that they're Gaussian.
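To keep the notation straight, here is the model just described written out in symbols; this is only a restatement of the board work above, with the sigma squared that keeps getting forgotten made explicit:

\[ y = x^\top \beta + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \sigma^2) \quad\Longleftrightarrow\quad y \mid x \sim \mathcal{N}\big(\mu(x), \sigma^2\big), \qquad \mu(x) = \mathbb{E}[y \mid x] = x^\top \beta = \sum_{j=1}^{p} \beta_j x_j. \]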
And everything was essentially captured, right? So there's this property of Gaussians that if you tell me-- if the variance is known, all you need to tell me to understand exactly what the distribution of a Gaussian is, all you need to tell me is its expected value. All right, so that's this mu of x. And the second thing is that we have this link that says, well, I need to find a way to use my x's to explain this mu, and the link was exactly: mu of x was equal to x transpose beta. Now we are talking about generalized linear models. So this part here where mu of x is of the form-- the way I want my beta, my x, to show up is linear, this will never be a question. In principle, I could add a third point, which is to just question this part, the fact that mu of x is x transpose beta. I could have some more complicated, nonlinear function of x. But we'll never do that, because we're talking about generalized linear models. The only things we generalize are the random component, the conditional distribution of y given x, and the link, which just said, well, once you actually tell me that the only thing I need to figure out is the mean, I'm just going to slap exactly this x transpose beta thing on it without any transformation of x transpose beta. So those are the two things. It will become clear what I mean. This sounds like a tautology, but let's just see how we could extend that. So what we're going to do in generalized linear models-- right, so when I talk about GLMs, the first thing I'm going to do with my x is turn it into some x transpose beta. And that's just the linear part, right? I'm not going to be able to change it. That's the way it works. I'm not going to do anything non-linear. But the two things I'm going to change are this random component, which is that y, which used to be some Gaussian with mean mu of x here and sigma squared-- so y given x, sorry-- this is going to become: y given x follows some distribution. And I'm not going to allow any distribution. I want something that comes from the exponential family. Who knows what the exponential family of distributions is? This is not the same thing as the exponential distribution. It's a family of distributions. All right, so we'll see that. It's-- wow. What can that be? Oh yeah, that's actually [INAUDIBLE]. So-- I'm sorry? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: I'm in presentation mode. That should not happen. OK, so hopefully, this is muted. So essentially, this is going to be a family of distributions. And what makes them exponential typically is that there's an exponential that shows up in the definition of the density, all right? We'll see that the Gaussian belongs to the exponential family. But there are slightly less expected ones, because there's this crazy thing that a to the x is exponential of x log a, which makes the exponential show up without being there. So if there's an exponential of some power, it's going to show up. But it's more than that. So we'll actually come to this particular family of distributions. Why this particular family? Because in a way, everything we've done for the linear model with the Gaussian is going to extend fairly naturally to this family. All right, and also because it encompasses pretty much everything, all the distributions we've discussed before. All right, so the second thing that I want to question-- right, so before, we just said, well, mu of x was directly equal to this thing. Mu of x was directly x transpose beta.
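In symbols, the first generalization, the random component, replaces the Gaussian assumption while keeping the mean as the one object that x is allowed to affect; what to do with the x transpose beta is the second question, taken up next:

\[ y \mid x \sim \mathcal{N}\big(\mu(x), \sigma^2\big) \quad\text{becomes}\quad y \mid x \sim \text{(some member of the exponential family)}, \qquad \mu(x) = \mathbb{E}[y \mid x]. \]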
So I knew I was going to have an x transpose beta, and I said, well, I could do something with this x transpose beta before I use it to explain the expected value. But so far I'm taking it as is. Here, we're going to say: let's extend this to some function of mu of x being equal to this thing. Now admittedly, this is not the most natural way to think about it. What you would probably feel more comfortable doing is write something like mu of x is a function-- let's call it f-- of x transpose beta. But here, I decide to call f g inverse. OK, it's just my g inverse. Yes. AUDIENCE: Is this different than just [INAUDIBLE] PHILIPPE RIGOLLET: Yeah. I mean, what transformation do you want to put on your x's? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Oh no, certainly not, right? I mean, if I force you to work with x1 plus x2, you cannot work with any function of x1 plus any function of x2, right? So this is different. That transformation would be just the simple part of your linear regression problem, where you would take your x's, transform them, and then just run another linear regression. This is genuinely new. Any other question? All right, so this function g-- and the reason why I have to stick to this slightly less natural way of defining it is that it's g that gets a name, not g inverse. And the name of g is the link function. So if I want to give you a generalized linear model, I need to give you two ingredients. The first one is the random component, which is the distribution of y given x. And it can be anything in what's called the exponential family of distributions. So for example, I could say y given x is Gaussian with mean mu of x and variance sigma squared. But I can also tell you y given x is gamma with shape parameter equal to alpha of x, OK? I could do some weird things like this. And the second thing is I need to give you a link function. And it's going to become very clear how you pick a link function. The main reason you pick a link function is compatibility. This mu of x-- I call it mu because mu of x is always the conditional expectation of y given x, always-- which means that, let's think of y as being a Bernoulli random variable. Where does mu of x live? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Between 0 and 1, right? That's the expectation of a Bernoulli. It's just the probability that my coin flip gives me 1. So it's a number between 0 and 1. But this guy right here, x transpose beta, if my x's are anything-- think of any body measurements plus [INAUDIBLE] linear combinations with arbitrarily large coefficients-- this thing can be any real number. So the link function, what it's effectively going to do is make those two things compatible. It's going to take my number which, for example, is constrained to be between 0 and 1, and map it into the entire real line. If I have a mu which is forced to be positive-- for example, in an exponential distribution, the mean is positive, right? That's, say, I don't know, the inter-arrival time for a Poisson process. This thing is known to be positive for an exponential. So I need a function that takes something positive and maps it [INAUDIBLE] everywhere. So we'll see. By the end of this chapter, you will have 100 ways of doing this, but there are some more traditional ones [INAUDIBLE]. So before we go any further, I gave you the example of a Bernoulli random variable. Let's see a few examples that actually fit there. Yes.
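(To make the compatibility point concrete, here is a small sketch of the two links mentioned so far and their inverses; the specific numbers are arbitrary.)

```python
import numpy as np

# Links map the constrained range of mu(x) onto all of R;
# their inverses take x^T beta back to a valid mean.
logit = lambda mu: np.log(mu / (1 - mu))   # (0, 1) -> R, for a Bernoulli mean
log_link = np.log                          # (0, inf) -> R, for a positive mean
sigmoid = lambda t: 1 / (1 + np.exp(-t))   # R -> (0, 1), inverse of logit

t = np.array([-10.0, 0.0, 10.0])           # arbitrary values of x^T beta
print(sigmoid(t))                          # always strictly inside (0, 1)
print(np.exp(t))                           # always positive: inverse of the log link
```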
AUDIENCE: Will it come up later [INAUDIBLE] already know why do we need the transformation [INAUDIBLE] why don't [INAUDIBLE] PHILIPPE RIGOLLET: Well actually, this will not come up later. It should be very clear from here, because if I have a model, I just want it to be plausible, right? So this is what's going to happen. You're going to have only data to fit this model. Let's say you forget about this thing here. You can always do this, right? You can always say: I'm going to pretend my y's just happen to be realizations of some Gaussians that happen to be 0 or 1 only. You can always stuff that into some linear model, right? You will have some least squares estimator for beta. And it's going to be fine-- for all the points that you see, it will definitely put some number that's actually between 0 and 1. So this is what your picture is going to look like. You're going to have a bunch of values for x. This is your y. And for each of the values of x that you get, you will see either a 0 or a 1, right? That's what your Bernoulli dataset would look like with a one-dimensional x. Now if you do least squares on this, you will find this line. And for these points, this line certainly takes values between 0 and 1. But let's say now you get an x here. You're going to start pretending that the probability it spits out one, conditionally on x, is like 1.2, and that's going to be weird. Any other questions? All right, so let's start with some examples. Right, I mean, you get so used to them through this course. So all these examples are taken from the literature-- there are a few books on generalized linear models, generalized [INAUDIBLE] models. And there are tons of applications that you can see. Those are extremely versatile, and as soon as you want to do modeling to explain some y given x, you sort of need to do that if you want to go beyond linear models. So this one is about a disease occurrence rate. You have a disease epidemic, and you want to model the expected number of new cases at a certain time, OK? So you have time that progresses; each of your observations has a time stamp-- say, I don't know, the 20th day. And your response is the number of new cases. And you're going to put your model directly on mu, right? Everything here is on mu itself: mu of x is always the conditional expectation of y given x. So all I need to model is this expected value. So for this mu, I look at the data, and it says, well, it increases exponentially. So I want to say I have some sort of exponential trend. I can parametrize that in several ways. And the two parameters I want to slap in are some coefficient gamma in front, and then some rate delta in the exponential. So if I tell you it's exponential, that's a nice family of functions you might want to think about, OK? So here, mu of x, if I want to keep the notation, is gamma exponential delta x, right? Except that here, my x's are t1, t2, t3, et cetera. And I want to find what the parameters gamma and delta are, because I want to be able to maybe compare different epidemics and see if they have the same parameters, or maybe just do some prediction based on the data that I have-- to extrapolate into the future.
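(A sketch of that failure mode on simulated Bernoulli data; the true sigmoid and the extrapolation point are invented for illustration.)

```python
import numpy as np

# Least squares on 0/1 responses happily predicts "probabilities" above 1
# as soon as you extrapolate beyond the observed x's.
rng = np.random.default_rng(1)
x = rng.uniform(0, 4, size=200)
p = 1 / (1 + np.exp(-(x - 2)))      # true success probability (illustrative)
y = rng.binomial(1, p)              # Bernoulli responses

X = np.column_stack([np.ones_like(x), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]   # least squares fit
print(b[0] + b[1] * 6.0)   # prediction at x = 6: typically > 1, not a probability
```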
So here, clearly mu of x is not of the form x transpose beta, right? That's not x transpose beta at all. And it's actually not even a function of x transpose beta. There are two parameters, gamma and delta, and it's not of that form. So here my covariates are 1 and x, right? I have two parameters. So what I do is I say, well, first, let me transform mu in such a way that I can hope to see something that's linear. So if I transform mu, I'm going to have log of mu, which is log of this thing, right? So log of mu of x is equal, well, to log of gamma plus log of exponential delta x, which is delta x. And now this thing is actually linear in x. So this first guy is my beta 0 times 1, and this guy is beta 1 times x. OK, so that looks like a linear model. I just have to change my parameters: my parameter beta 0 becomes the log of gamma, and beta 1 becomes delta itself. And the reason why we do this is because, well, the way we put in those gamma and delta was just so that we have some parametrization. It just so happens that if we want this to be linear, we need to change the parametrization itself. This is going to have some effects. We know that it's going to have some effect on the Fisher information. It's going to have a bunch of effects to change those things. But that's what needs to be done to have a generalized linear model. Now here, the function that I took to turn it into something linear is simple. It came directly from some natural thing to do here, which is taking the log. And so the function g, the link that I take, is called the log link, very creatively. And it's just the function that I apply to mu so that I see something linear, and that looks like this. So now this only tells me how to deal with the link function. But I still have to deal with point one, the random component. And this, again, is just some modeling. Given some random data, what distribution do you choose to explain the randomness? And unless there's no choice, it's just a matter of practice, right? I mean, why would it be Gaussian and not, you know, doubly exponential? There are matters of convenience that come into this, and there are matters of experience that come into this. You know, when you chat with engineers, they have a very good notion of what the distribution should be. They have Weibull distributions-- you know, they do optics and things like this. So there are some distributions that just come up, and sometimes you just have to work with them. Now here, what do we have? The thing we're trying to measure, y-- as we said, mu is the conditional expectation of y given x. But y is the number of new cases, right? It's a "number of". And the first thing you should think of when you think about a "number of"-- if it were bounded above, you would think binomial, maybe. But here, it's just a number. So you think Poisson. That's how insurers think: I have a number of claims per year-- this is a Poisson distribution. And hopefully they can model the conditional distribution of the number of claims given everything that they ask you in the surveys they have you fill in, OK? All right, so now you have this Poisson distribution. And that's just a modeling assumption. There's no particular reason why you should do this, except that it might be a good idea.
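(As a sketch of this first example, here is a Poisson GLM with a log link fit to simulated epidemic counts; the data, and the values gamma = 2 and delta = 0.15, are invented, and statsmodels is used only as one convenient GLM implementation.)

```python
import numpy as np
import statsmodels.api as sm

# log mu_i = beta_0 + beta_1 t_i, i.e. mu_i = gamma * exp(delta * t_i),
# with gamma = exp(beta_0) and delta = beta_1.
rng = np.random.default_rng(2)
t = np.arange(1, 31)                 # day index
mu = 2.0 * np.exp(0.15 * t)          # invented truth: gamma = 2, delta = 0.15
y = rng.poisson(mu)                  # observed new cases per day

X = sm.add_constant(t)
res = sm.GLM(y, X, family=sm.families.Poisson()).fit()  # log link is the default
print(np.exp(res.params[0]), res.params[1])             # roughly gamma and delta
```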
And the expected value of your Poisson has to be this mu i, OK? At time i. Any question about this slide? OK, so let's switch to another example. Another example is the so-called prey capture rate. So here, what you're interested in is the capture rate of prey, yi, for a given predator. And you have xi, which is your explanatory variable, and this is just the density of prey. So you're trying to explain the rate of capture of prey given the density of the prey, OK? And so you need to find some sort of relationship between the two. And here again, you talk to experts, and what they tell you is that, well, it's going to be increasing, right? I mean, predators are going to eat more if there's more prey. But at some point, they're going to level off, because they're going to be [INAUDIBLE] full, and they're going to stop capturing those preys. And you're just going to have some phenomenon that looks like this. So here is a curve that sort of makes sense, right? Your capture rate is increasing, and then you have this like [INAUDIBLE] function that says, at some point it levels off. OK, so here-- I mean, there are again many ways I could model a function that looks like this. But a simple one that has only two parameters is this one, where mu i is this function of xi, with some parameter alpha here and some parameter h here. OK, so this function starts at 0 for sure. And-- OK, sorry, that's actually alpha, which is the maximum capture rate, and h represents the prey density at which the capture rate is half of alpha. So that's the half-saturation point. OK, so there are actual values [INAUDIBLE]. All right, so now I have this function. I don't see it as a function of x transpose beta. So I need to find something that looks like a linear function of x, OK? So here, there's no log-- well, I could take a log here, but I would have log of x and log of x plus h. So that would be weird. So what we propose to do here is, rather than looking at mu i, we look at 1 over mu i. Right, and so since your function was mu i, when you take 1 over mu i, you get h plus xi divided by alpha xi, which is h over alpha times 1 over xi, plus 1 over alpha. And now, if I'm willing to make this transformation of variables and say, actually, whether my x is the density of prey or the inverse density of prey, it really doesn't matter-- I can always make this transformation when the data comes-- then I'm just going to think of this as some linear function: beta 0 plus beta 1, which is this guy, times 1 over xi. And now my new variable becomes 1 over xi. And now it's linear. And the transformation I had to take was this 1 over mu, which is called the reciprocal link, OK? You can probably guess what the exponential link is going to be, and things like this, all right? So we'll talk about other links that have slightly less obvious names. Now again, modeling, right? So that dealt with the link-- that was the easy part. Now for the random component.
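(A sketch of the reciprocal-link reparametrization; the data here are noiseless and invented, so the recovery is exact. With real data, the random component discussed next-- a gamma distribution-- would supply the noise model.)

```python
import numpy as np

# mu = alpha * x / (h + x)  <=>  1/mu = 1/alpha + (h/alpha) * (1/x),
# which is linear in the new variable 1/x.
alpha, h = 0.8, 2.0                      # invented true parameters
x = np.linspace(0.5, 20, 40)             # prey densities
mu = alpha * x / (h + x)                 # capture rates (noiseless, for clarity)

Z = np.column_stack([np.ones_like(x), 1 / x])
b0, b1 = np.linalg.lstsq(Z, 1 / mu, rcond=None)[0]
print(1 / b0, b1 / b0)                   # recovers alpha = 1/b0 and h = b1/b0
```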
Now I need to pour in some domain knowledge about this y, the rate of capture of prey: what is the randomness of this thing around its mean? And, you know-- this comes from a textbook-- the standard deviation of the capture rate might be approximately proportional to the mean rate. You need to find a distribution that actually has this property. And it turns out that this happens for gamma distributions, right? Well, for Poisson, the variance and the mean are of the same order; here, it's the standard deviation that's of the same order as the mean, and that's gammas. And it's a positive distribution as well. So here is a candidate. Now since we're sort of constrained to work within the exponential family of distributions, you can just go through your list and decide which one works best for you. All right, third example-- so here we have a binary response. Essentially, the binary response variable indicates the presence or absence of a postoperative spinal deformity, kyphosis, in children. And here, rather than having one covariate-- which in the first example was time and in the second example was density-- there are three variables that you measure on the children. The first one is the age of the child, the second one is the number of vertebrae involved in the operation, and the third one is the start of the range-- so where it is on the spine. OK, so the response variable here is: did it work or not, right? I mean, that's very simple. And so here, it's nice, because the random component is the easiest one. As I said, any random variable that takes only two outcomes must be a Bernoulli, right? So that's nice-- there's no modeling going on here. So you know that y given x is going to be Bernoulli, but of course, all your efforts are going to go into understanding what the conditional mean of your Bernoulli, the conditional probability of being 1, is going to be, OK? And so in particular-- I'm spelling it out before we close these examples-- I cannot say that mu of x is x transpose beta, for exactly the reason in the picture that I drew for you, right? The goal of doing this is certainly to be able to extrapolate, for yet-unseen children, whether this is something we should be doing. And maybe the range of x is going to be slightly outside what we've seen. And I don't want to have a negative probability of outcome, or-- sorry-- one that's larger than one. So I need to make this transformation. So what I need to do is to transform mu, which, we know, is a number between 0 and 1. And we need to transform it in such a way that it maps to the entire real line-- or inversely, I should say, that f of x transpose beta should be a number between 0 and 1. I need to find a function that takes any real number and maps it into 0, 1. And we'll see that again, but you have an army of functions that do that for you. What are those functions? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: I'm sorry? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Trait? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Oh. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, I want them to be invertible, right? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: I have an army of functions. I'm not asking for one soldier in this army.
I want the name of this army. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Well, those are not really invertible either, right? But they're in [INAUDIBLE] textbook. Because remember, statisticians don't know how to integrate functions, but they know how to turn a function into a Gaussian integral, so we know it integrates to 1, and things like this. Same thing here-- we don't know how to build functions that are invertible and map the entire real line to 0, 1, but there are all the cumulative distribution functions that do that for us. So I can use any of those guys, and that's what I'm going to be doing, actually. All right, so just to recap what I just said: the normal linear model is not appropriate for these examples, if only because the response variable is not necessarily Gaussian, and also because the mean has to be transformed before I can apply a linear model, for all these plausible nonlinear models that I came up with. OK, so the family we're going to go for is the exponential family of distributions. And one of the nice parts of this is that we can actually compute maximum likelihood estimators for those, right? In the Gaussian linear model, maximum likelihood was as nice as it gets-- it actually was the least squares estimator. We had a closed form: x transpose x inverse x transpose y, and that was it, OK? We had to just take one derivative. Here, we're going to have a generally concave likelihood. We're not going to be able to solve this thing directly in closed form unless it's Gaussian, but we will see how this is not just black-box optimization of a concave function. We have a lot of properties of this concave function, and we will be able to derive some iterative algorithms. We'll basically see how, when you open the box of convex optimization, you will actually be able to see how things work and implement it using least squares. So each iteration of this iterative algorithm will essentially be a least squares, and that's actually quite [INAUDIBLE]. So, very demonstrative of statisticians being pretty ingenious so that they don't have to call in some black-box optimizer, but can just repeatedly call their least squares oracle within a statistical software. OK, so what is the exponential family, right? I promised to do the exponential family. Before we go into this, let me just tell you something about exponential families: what is the one thing that differentiates an exponential family from all possible distributions? There's this parameter theta of my distribution, OK? So it's going to be indexed by some parameter. Here, I'm only talking about the distribution of, say, some random variable or some random vector, OK? So in this slide, you see that the parameter theta that indexes those distributions is k-dimensional, and the space of the x's that I'm looking at-- so that should really be y, right? What I'm going to plug in here is the conditional distribution of y given x, and theta is going to depend on x. But this really is the y; that's the distribution of the response variable. And so this is on R q, right? So I'm going to assume that y is q-dimensional. Pretty soon, q is going to be equal to 1, but I can define these things generally. OK, so I have this.
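(The point about CDFs, in a few lines; scipy's norm and logistic are used just as convenient implementations of the two classical CDFs.)

```python
import numpy as np
from scipy.stats import logistic, norm

# Any CDF maps R into (0, 1), so any CDF can serve as the inverse link
# for a Bernoulli mean. Two standard choices:
t = np.array([-3.0, 0.0, 3.0])   # arbitrary values of x^T beta
print(logistic.cdf(t))           # inverse of the logit link
print(norm.cdf(t))               # inverse of the probit link
```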
I have to tell you what this looks like. And let's assume that this is a probability density function. So this notation, the fact that I just put my theta in subscript, is just for me to remember that this is the variable for the random variable, and this is just the parameter. But I could just write it as a function of theta and x, right? If you were in multivariable calc, you would have two variables, theta and x, and you would need to give me a function. Now think of theta and x as being one-dimensional at this point. Think of all the functions that can depend on theta and x. There are many of them. And in particular, there are many ways theta and x can interact. What the exponential family does for you is that it restricts the way these things can interact with each other. It's essentially saying the following. This is going to be of the form exponential-- and this exponential is really not much, because I could put a log next to it. But what I want is that the way theta and x interact has to be of the form theta times x inside an exponential, OK? So that's the simplest way you can think of them interacting: you just take the product of the two. Now clearly, this alone is not a very rich family. So what I'm allowing myself is to slap on some terms that depend only on theta and only on x. So let's call these, I don't know, h of x and c of theta. OK, so here, I've restricted the way theta and x can interact. I have something that depends only on x, something that depends only on theta, and this very specific interaction. And that's all that exponential families are doing for you, OK? So if we go back to this slide, this is much more general, right? If I want to go from theta and x in R to theta in R k and x in R q, I cannot take the product of theta and x. I cannot even take the inner product between theta and x, because they're not of compatible dimensions. But what I can do is to first map my theta into something and map my x into something, so that I end up having the same dimensions. And then I can take the inner product. That's the natural generalization of this simple product. OK, so what I have is: I'm going to take theta to eta of theta-- so let's say eta 1 of theta to eta k of theta. And then x becomes t1 of x all the way to tk of x. And then I take the inner product-- so let's call this eta and let's call this t-- the inner product of eta and t, which is just the sum from j equals 1 to k of eta j of theta times tj of x. OK, so that's just a way to say I want this simple interaction, but in higher dimension. The simplest way I can make these things happen is just by taking an inner product. OK, and so now what it's telling me is that the distribution-- so I want the exponential of this, times something that depends only on theta and something that depends only on x. And so when I take p theta of x, it's just going to be exponential of the sum from j equals 1 to k of eta j of theta tj of x, and then a function that depends only on theta-- let me write it for now as c of theta-- and then a function that depends only on x; let me call it h of x.
And for convenience-- there's no particular reason why I do this-- I'm taking this function c of theta and I'm pushing it inside. So I can write c of theta as exponential of minus log of 1 over c of theta, right? And now I have exponential times exponential. So I push it in, and this thing looks like exponential of the sum from j equals 1 to k of eta j of theta tj of x, minus log of 1 over c of theta, times h of x. And this thing here, log of 1 over c of theta, I actually call b of theta. Because c, I called it c, but I can directly call this guy b, and I don't actually care about c itself. Now why don't I also put h of x in there? Because h of x and b of theta don't play the same role. B of theta in many ways is a normalizing constant, right? I want this density to integrate to 1. If I did not have this guy, I'm not guaranteed that this thing integrates to 1. But by tweaking this function b of theta-- or c of theta, they're equivalent-- I can ensure that this thing integrates to 1. So b of theta is just a normalizing constant. H of x is something that's going to be handy for us. It's going to be something that allows us to treat both discrete and continuous variables within the framework of exponential families. So for those who are familiar with this, this is essentially saying that h of x is really just a change of measure. When I look at the density p theta-- this is with respect to some measure-- the fact that I just multiplied by a function of x means that this guy here, without h of x, is not the density with respect to the original measure, but it's the density with respect to the distribution that has h as a density. That's all I'm saying, right? So I can first transform my x's and then take the density with respect to that. If you don't want to think about densities or measures, you don't have to. This is just the definition. Is there any question about this definition? All right, so it looks complicated, but it's essentially the simplest way you could think about it. You want to be able to have x and theta interact, and you just say: I want the interaction to be of the form exponential of x times theta. And if they're in higher dimensions, I'm going to take the exponential of a function of x inner product with a function of theta. All right, so I claimed since the beginning that the Gaussian was such an example. So let's just do it. Is the interaction between theta and x in a Gaussian of the form of a product? And the answer is yes-- actually, whether or not I know what the variance is, OK? So let's start with the case where I do not know what the variance is. So here, x is N of mu, sigma squared. This is all one-dimensional. And here, I'm going to assume that my parameter theta is both mu and sigma squared. OK, so what I need to do is find some functions of mu and sigma squared and take an inner product with some functions of x. So I want to show that p theta of x is what? Well, it's 1 over sigma square root 2 pi, exponential of minus x minus mu squared over 2 sigma squared, right? So that's just my Gaussian density. And I want to say that this thing-- so clearly, the exponential shows up already-- is something that looks like eta 1 of, say, mu and sigma squared.
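(In standard notation, the density just assembled reads as follows; this is only a transcription of the boardwork described above.)

```latex
p_\theta(x) \;=\; h(x)\,
\exp\!\Big(\sum_{j=1}^{k} \eta_j(\theta)\, T_j(x) \;-\; b(\theta)\Big),
\qquad
b(\theta) \;=\; \log\frac{1}{c(\theta)} .
```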
So I have only two of those guys, so I'm going to need only two etas, right? So I want it to be eta 1 of mu and sigma squared times t1 of x, plus eta 2 of mu and sigma squared times t2 of x, right? So I want something like that to show up, and the only things that are left, I want them to depend either only on theta or only on x. So to find that out, we just need to expand. OK, so I'm going to first put everything into my exponential and expand this guy. So the first term here is going to be minus x squared over 2 sigma squared. The second term is going to be minus mu squared over 2 sigma squared. And then the cross term is going to be plus x mu divided by sigma squared. And then I put this front factor in here, so I have a minus log of sigma square root 2 pi, OK? So this term here contains an interaction between x and the parameters, and this term here contains an interaction between x and the parameters. So let me try to write them the way I want: this guy depends only on the parameters, and this guy depends only on the parameters. So I'm going to rearrange things. And so I claim that this is of the form-- well, who's getting the minus? Eta, OK. So it's x squared times minus 1 over 2 sigma squared, plus x times mu over sigma squared, right? So that's this term here, and that's this term here. Now I need to get this guy here, and that's a minus. So I'm going to write it like this: minus, and now I have mu squared over 2 sigma squared plus log of sigma square root 2 pi. And now this thing is definitely of the form t of x times eta of theta-- did I call them the right way or not? Of course not. OK, so this x squared term is going to be t2 of x times eta 2 of theta, and this x term is going to be t1 of x times eta 1 of theta. All right, so just a function of theta times a function of x, plus a function of theta times a function of x. And the way they're combined is just by summing them. And this last piece is going to be my b of theta. What is h of x? AUDIENCE: 1. PHILIPPE RIGOLLET: 1. There's one thing you can actually play with here-- you have some choices, right? This is not completely determined. For example, when I write the log of sigma square root 2 pi, this is just log of sigma plus log of square root 2 pi. So I have two choices here. Either my b becomes this whole guy-- so b of theta is mu squared over 2 sigma squared plus log of sigma square root 2 pi, and h of x is equal to 1-- or b of theta is mu squared over 2 sigma squared plus log of sigma, and then h of x is equal to what? Well, I can just push this guy out of the exponential. And so it's 1 over square root of 2 pi, which is a function of x, technically. I mean, it's a constant function of x, but it's a function. So you can see that it's not completely clear how you're going to do the trade-off, right? The constant terms can go either in b or in h. But why bother tracking down both b and h when you can stuff everything into one, just call h one, and call it a day? Right, so you can just forget about h, know it's one, and be done with it. H won't actually matter for estimation purposes or anything like this. All right, so that's basically everything that's written. When sigma squared is known, what happens is that this guy here is no longer a function of theta, right? Agreed? This is no longer a parameter. When sigma squared is known, then theta is equal to mu only. There's no sigma squared going on.
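(A quick numerical check of the decomposition just derived, assuming scipy is available; the values of mu and sigma are arbitrary.)

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1.5, 2.0
x = np.linspace(-5, 8, 7)

# t1(x) = x, eta1 = mu / sigma^2;  t2(x) = x^2, eta2 = -1 / (2 sigma^2);
# b(theta) = mu^2 / (2 sigma^2) + log(sigma sqrt(2 pi));  h(x) = 1.
eta1, eta2 = mu / sigma**2, -1.0 / (2 * sigma**2)
b = mu**2 / (2 * sigma**2) + np.log(sigma * np.sqrt(2 * np.pi))
pdf_expfam = np.exp(eta1 * x + eta2 * x**2 - b)

assert np.allclose(pdf_expfam, norm.pdf(x, loc=mu, scale=sigma))
```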
So everything that depends on sigma squared can be thought of as a constant-- think one. So in particular, this term here does not belong in the interaction between x and theta. It belongs in h, right? So if sigma is known, then this guy is only a function of x. So h of x becomes exponential of minus x squared over 2 sigma squared, right? That's just a function of x. Is that clear? So if you complete this computation, what you're going to get is that your new one-parameter density is: p theta of x is equal to exponential of x times mu over sigma squared, minus-- well, it's still the same thing-- and then you have your h of x that comes out, with the x squared over 2 sigma squared. OK, so that's my h of x, that's still my b of theta, this is my t1 of x, and this is my eta 1 of theta. And remember, theta is just equal to mu in this case. So if I ask you to prove that a distribution belongs to an exponential family, you just have to work it out. Typically, you expand what's in the exponential, write it in this form, and identify all the components, right? So here, notice those guys don't even get an index anymore, because there's just one of them. So I wrote eta 1 and t1, but it's really just eta and t. Oh sorry, this guy also goes-- this is also a constant, right? So I can just put the 1 over sigma square root 2 pi into h as well. So h of x is what, actually? Is it the density of-- AUDIENCE: Standard [INAUDIBLE]. PHILIPPE RIGOLLET: It's not standard. It's centered. It has mean 0, but it has variance sigma squared, right? But it's the density of a Gaussian. And this is what I meant when I said h of x is really just telling you with respect to which distribution, which measure, you're taking the density. And so this thing here is really telling you that the density of my Gaussian with mean mu, with respect to a centered Gaussian, is this guy, right? That's what it means. If this thing ends up being a density, it just means that now you have a new measure, which is this density. So it's just saying that the density of the Gaussian with mean mu, with respect to the Gaussian with mean 0, is just this [INAUDIBLE] here. All right, so let's move on. So here, as I said, you could do all these computations and forget about the fact that x is continuous. You can do it with PMFs, for x discrete. If you can get the same form for your PMF-- exponential of the interaction between theta and x, which is just this product, times a function only of theta and a function only of x-- then it also works. OK, so I claim that the Bernoulli belongs to this family. So the PMF of a Bernoulli with parameter p is p to the x, 1 minus p to the 1 minus x, right? And that's only for x equals 0 or 1. And the reason is because, when x is equal to 0, this is 1 minus p, and when x is equal to 1, this is p. OK, we've seen that when we were looking at likelihoods for Bernoullis. Now, it's not clear this is going to look like an exponential family at all. But let's do it. OK, so what does this thing look like? Well, the first thing I want to do is to make an exponential show up. So I'm going to write p to the x as exponential of x log p, right? And I'm going to do the same for the other one. So I'm going to get exponential of x log p plus 1 minus x log 1 minus p.
So what I need to do is to collect my terms in x and my terms in whatever parameters I have-- here, theta is equal to p. So if I do this, what I end up having is equal to exponential of-- the term in x is x times log p minus log 1 minus p, so that's x times log of p over 1 minus p. And then the term that stays is just 1 times log 1 minus p. But I want to see this as a minus something, right? It was minus b of theta. So I'm going to write it as minus log of 1 over 1 minus p. A-ha! Well, this is of the form exponential of something that depends only on x times something that depends only on theta, minus a function that depends only on theta. And h of x is equal to 1 again. OK, so let's see. So t1 of x is equal to x-- that's this guy. Eta 1 of theta is equal to log of p over 1 minus p. And b of theta is equal to log of 1 over 1 minus p, OK? And h of x is equal to 1, all right? You guys want to do Poisson, or do you want to have it as homework? It's a dilemma, because that's an easy homework versus no homework at all but maybe something more difficult. OK, who wants to do it now? Who does not want to raise their hand now? Who wants to raise their hand now? All right, so let's move on. Do you want to do the gammas instead in the homework? That's going to be fun. I'm not even going to propose to do the gammas. And so this is the gamma distribution. It's brilliantly called gamma because it has the gamma function, just like the beta distribution had the beta function in there. They look very similar. One is defined over R plus, the positive real line; and remember, the beta was defined over the interval 0, 1. And it's of the form x to some power times exponential of minus x times something, right? So there's a polynomial [INAUDIBLE] in x where the exponent depends on the parameter, and then there's the exponential of minus x times something that depends on the parameters. So this is also going to look like some exponential family distribution. Can somebody guess what t2 of x is going to be? Those are the functions of x that show up in this product, right? Remember, we just need to take some transformations of x so that it looks linear in those things and not in x itself. Remember, we had x squared and x, for example, in the Gaussian case. I don't know if it's still there. Yeah, it's still there, right? t2 was x squared. What do you think t2 of x is going to be here? So here's a hint: t1 is going to be x. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, [INAUDIBLE]. What? Log x, right? Because this x to the a minus 1, I'm going to write as exponential of a minus 1 log x. So basically, one eta is going to be a minus 1 and the other one minus 1 over b-- well, matched the opposite way, since t1 is x and t2 is log x. And then you're going to have-- but this is actually not too complicated. All right, then those parameters get names: a is the shape parameter, b is the scale parameter. It doesn't really matter. You have other things that are called the inverse gamma distribution, which has this form-- the difference is that the parameter alpha shows up negatively there-- and then the inverse Gaussian distribution. You know, just densities you can come up with, and they just happen to fall in this family. And there are other ones we've seen before that you can put in there. The chi-squared is actually part of this family.
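(The same kind of check for the Bernoulli decomposition above; p = 0.3 is arbitrary.)

```python
import numpy as np

p = 0.3
eta = np.log(p / (1 - p))    # eta(theta) = log(p / (1 - p))
b = np.log(1 / (1 - p))      # b(theta) = log(1 / (1 - p))
for x in (0, 1):             # h(x) = 1
    assert np.isclose(np.exp(x * eta - b), p**x * (1 - p)**(1 - x))
```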
The beta distribution is part of this family. The binomial distribution is part of this family-- well, that's easy, because the Bernoulli was. The negative binomial, which is some stopping time-- the first time you hit a certain number of successes when you flip some Bernoulli coins. So you can check for all of those, and you will see that you can write them as part of the exponential family. So the main goal of this slide is to convince you that this is actually a pretty broad range of distributions, because it basically includes everything we've seen, plus more, OK? Yeah. AUDIENCE: Is there any example of a distribution that comes up pretty often that's not in the exponential family? PHILIPPE RIGOLLET: Yeah, like uniform. AUDIENCE: Oh, OK, so maybe a bit more complicated than [INAUDIBLE]. PHILIPPE RIGOLLET: Anything that has a support that depends on the parameter is not going to fit in there. And you can convince yourself why anything whose support depends on the parameter is not going to be part of this family. Actually proving that it's not is kind of difficult, but the way you can convince yourself is that, remember, the only interaction between x and theta that I allowed was taking the product of those guys inside the exponential, right? For the uniform, the density looks like an indicator that x is at most theta. Well, this is not of the form exponential of x times theta. There's an interaction between x and theta here, but it's certainly not of the form exponential of x times theta. So this is definitely not going to be part of the exponential family. And every time you start doing things like that, it's just not going to happen. Actually, to be fair, I'm not even sure that all these guys, when you allow them to have all their parameters free, are actually going to be part of this. The beta probably is, but I'm not entirely convinced. There are books on exponential families. All right, so let's go back. So here, we've put a lot of effort into understanding how much wider than the Gaussian we can make the conditional distribution of our response y given x. So let's go back to the generalized linear models, right? The random component: y has to be part of some exponential family distribution-- check. We know what this means. So now I have to understand two things. I have to understand what the expectation is, right? Because that's actually what I model: the conditional expectation of y given x. So it would be nice if we had some simple rules that would tell me exactly what the expectation is, rather than having to compute it over and over again. If I told you, here's a Gaussian, compute the expectation, and every time you had to work it out from scratch, that would be slightly painful. So hopefully this thing is simple enough-- we've actually selected a class that's simple enough so that we can have rules. When I give you those parameters t1, t2, eta 1, eta 2, b and h, you can have some simple rules to compute the mean and the variance and all those things.
And so in particular, I'm interested in the mean, and I'm going to have to say, well, this mean has to be mapped into the whole real line, so that I can model this function of the mean as x transpose beta. And we saw that for the [INAUDIBLE] dataset, or whatever other datasets, you can do this using the log or the reciprocal link. For the Bernoulli-- oh, actually, we didn't do it for the Bernoulli. We'll come to this. This is the most important one, and it's called the logit, or logistic, link. But before we go there-- this was actually a very broad family, right? When I wrote this thing on the bottom board-- it's gone now, but when I wrote it in the first place, the only thing that I wanted was x times theta. Wouldn't it be nice to have some distribution that was just x times theta, not some function of x times some function of theta? The functions were there so that I can put a lot of distributions in the family. But first of all, if I decide to reparametrize my problem, I can always assume-- if I'm one-dimensional-- that eta 1 of theta becomes my new theta, right? So here, for example, I could say, well, this is actually the parameter of my Bernoulli; let me call this guy theta, right? I could do that. Then, well, here I have x that shows up. And since I'm talking about the response, I cannot really make any transformations there. So here, I'm going to look at distributions for which this guy is not x squared or square root of x or log of x or anything I want-- I'm just going to look at distributions for which this is x itself. These exponential families are called the canonical exponential family. So in the canonical exponential family, what I have is my x times theta. I'm going to allow myself some normalization factor phi, and we'll see that it's very convenient when I talk about the Gaussian, right? Because even if I know sigma squared, I don't want to change my parameter to be mu over sigma squared-- it's kind of painful. So I just take mu, and I keep this sigma squared as the phi over there. And it's called the dispersion parameter, from a clear analogy with the Gaussian, right? That's the variance, and that's measuring dispersion. OK, so phi may be known or not. When it's not known, this actually might turn into some exponential family, or it might not. And the main reason is that this b of theta over phi is not necessarily a function of theta over phi, right? If phi is unknown, then theta over phi has to be my new parameter, and b might not be a function of this new parameter. OK, so it may or may not, but this is not really a concern that we're going to have, because throughout this class, we're going to assume that phi is known, OK? Phi is going to be known all the time, which means that this is always an exponential family. And it's just the simplest one you could think of: one-dimensional parameter, one-dimensional response, and the interaction is just y times theta-- we used to call it x; now I've switched to y-- y times theta divided by phi, OK?
Should I write this, or is it clear to everyone what this is? Let me write it somewhere so we actually keep track of it toward the [INAUDIBLE]. OK, so remember, we had all the distributions. And then here we had the exponential family. And now we have the canonical exponential family. It's actually much, much smaller. Well, actually, it's probably sort of a good picture. And what I have is that my density or my PMF is just exponential of y times theta minus b of theta, divided by phi, and then plus c of y and phi-- which means that if phi is known, h of y is just exponential of c of y, phi, agreed? Actually, this is the reason why it's not necessarily an exponential family when phi is unknown: it might not be that this depends only on y. It could depend on y and phi in some annoying way, and I may not be able to break it apart. OK, but if phi is known, this is just a function that depends on y, agreed? In particular, I hope you can convince yourself that this is just a subcase of everything we've seen before. So for example, the Gaussian, when the variance is known, is indeed of this form, right? We still have it on the board. So here is my y-- let me write this as f theta of y; every x is replaceable with y, blah, blah, blah. This is this guy. And now, this is going to be my phi, and this is my parameter theta. So I'm definitely of the form y times theta divided by phi. And then here I have a function b that depends only on theta, over phi again. So b of theta is mu squared divided by 2, and then it's divided by the sigma squared. And then I have this extra stuff, but I really don't care what it is for now. It's just something that depends only on y and known stuff. So it's just a function of y, just like my h-- I stuff everything in there. The b, though, this thing here, this is actually what's important. Because in the canonical family, if you think about it, when you know phi, the first term is just y times theta scaled by a known constant. The second term is b of theta scaled by some known constant. But b of theta is what's going to make the difference between Gaussians and Bernoullis and gammas and betas-- this is all in this b of theta. b of theta contains everything that's idiosyncratic to this particular distribution. And so this is going to be important. And we will see that b of theta captures information about the mean, about the variance, about the likelihood, about everything. Should I go through this computation? I mean, it's the same. We've just done it, right? So it's probably better if you redo it on your own. All right, so the canonical exponential family also contains other distributions, right? There's the Gaussian, and there's the Poisson, and there's the Bernoulli. But the other ones may not be part of it. In particular, think about the gamma distribution. We had this log x that was one of the things that showed up. I cannot get rid of this log x-- except if a is equal to 1 and I know it for sure, right? If a is equal to 1, then I'm going to have a minus 1, which is equal to 0. So I'm going to have a minus 1 times log x, which is just 0. So log x is going to vanish from here. But if a is equal to 1, then this distribution is actually much nicer, and it does not even deserve the name gamma. What is it if a is equal to 1? It's an exponential, right?
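(The canonical form in one display, with the Gaussian identification just made; again, only a transcription of the boardwork, written for the known-phi case.)

```latex
f_\theta(y) \;=\; \exp\!\Big(\frac{y\,\theta - b(\theta)}{\phi} + c(y,\phi)\Big),
\qquad
\text{Gaussian, } \sigma^2 \text{ known: }\;
\theta = \mu,\quad \phi = \sigma^2,\quad b(\theta) = \frac{\theta^2}{2}.
```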
Gamma of 1 is equal to 1, x to the a minus 1 is equal to 1, and so I have exponential of minus x over b, divided by b. So 1 over b-- call it lambda-- and this is just an exponential distribution. And so all these guys that don't make it to this table-- special cases of them could still be in it; they just have another name in there. All right, so you could compute the value of theta for the different distributions, right? So again, you have some continuous and some discrete ones. This is my b of theta. And I said this is actually really what captures my distribution. This b is actually called the cumulant generating function, OK? I don't have time-- I could write five slides to explain it to you, but it would only tell you why it's called the cumulant generating function. It's also known as the log of the moment generating function. And the reason it's called the cumulant generating function is that if I start taking successive derivatives and evaluating them at 0, I get the successive cumulants of this distribution, which are some transformations of the moments. AUDIENCE: What are you talking about again? PHILIPPE RIGOLLET: The function b. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: So this column is just normalization. So this is just to tell you I can compute this, but I really don't care-- and obviously I don't care about stuff that's complicated. This is actually cute, and this is what captures everything. And the rest is just some general description. It only needs to tell you that the range of y is 0 to infinity, right? And that is essentially going to give me some hints as to which link function I should be using, right? Because the range of y tells me what the range of the expectation of y is going to be. All right, so here, it tells me that the range of y is between 0 and 1. OK, so what I want to show you is that this captures a variety of different ranges that you can have. OK, so I'm going to want to go into the likelihood. And the likelihood I'm actually going to use to compute the expectations. But since I don't have time to do this now, let's just go quickly through this and give you a spoiler alert, to make sure that you all wake up on Thursday and really, really want to think about coming here immediately. All right, so the thing I'm going to want to do, as I said-- it would be nice if, at least for this canonical family, when I give you b, you would be able to say, oh, here is a simple computation on b that actually gives me the mean and the variance. The mean and the variance are related to moments. b is called the cumulant generating function. So with moments being related to cumulants, it sounds like I might have a path to finding those, right? And it might involve taking derivatives of b, as we'll see. The way we're going to prove this is by using this thing that we've used several times-- this property we used when we were computing, remember, the Fisher information, right? We had two formulas for the Fisher information. One was the expectation of the square of the derivative of the log likelihood, and the other one was negative the expectation of the second derivative, right? The log likelihood is concave, so the second derivative term is negative, and with the minus sign it becomes positive. And the way we did this was by just permuting some derivative and integral. We used the fact that something that looked like this, right?
The log likelihood is the log of f theta. And when I take the derivative of this guy with respect to theta, I have something that looks like the derivative divided by f theta. And if I take the integral of this thing against f theta-- the expectation of this thing-- those f thetas cancel. And then I have just the integral of a derivative, which I would, with a leap of faith, say is the derivative of the integral. But the integral was equal to 1, so this derivative is actually equal to 0. And so that's how you got that the expectation of the derivative of the log likelihood is equal to 0. And you do it once again and you get the other formula. It's just some nice things that happen with the [INAUDIBLE] taking the derivative of the log. We've done that; we'll do that again. But once you do this, you can actually apply it. And-- missing a parenthesis over there. So when you write the log likelihood, it's just the log of an exponential. Huh, that's actually pretty nice. Just like the least squares came naturally when we took the log likelihood of the Gaussians, the same thing is going to happen when I take the log of this density. The exponential is going to go away, and then I'm going to use this formula. But this formula is going to give me an equation directly-- oh, that's where it was; that's the one that's missing up there. And so the expectation minus this thing is going to be equal to 0, which tells me that the expectation is just the derivative of b. Right, so it's still a function of theta, but it's just the derivative of b. And the variance is just going to be the second derivative of b, up to some scaling, right? Phi is called the dispersion parameter. If I had a Gaussian and the variance of the Gaussian did not depend on the sigma squared, which I stuffed into this phi, that would certainly be weird-- and it cannot depend only on mu. So for the Gaussian, this second derivative is definitely going to be equal to 1, and phi times it is just going to be equal to my variance. So this is just by taking the second derivative. So basically, the take-home message is that this function b captures, by taking one derivative, the expectation, and by taking two derivatives, the variance. Another thing that's actually cool, and that I want you to think about: if this second derivative is the variance, what can I say about this thing? What do I know about a variance? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, it's positive. So I know that this is positive. So what does that tell me? Positive second derivative? That's convex, right? A function that has a positive second derivative is convex. So we're going to use that as well, all right? So yeah, I'll see you on Thursday. I have your homework.
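(A numerical sanity check of the spoiler: mean = b'(theta) and variance = phi b''(theta), done for the Bernoulli, where b(theta) = log(1 + e^theta) and phi = 1; the derivatives are taken by finite differences, and p = 0.3 is arbitrary.)

```python
import numpy as np

p = 0.3
theta = np.log(p / (1 - p))         # canonical parameter: the logit of p
b = lambda t: np.log1p(np.exp(t))   # b(theta) = log(1 + e^theta) for Bernoulli

eps = 1e-5
b1 = (b(theta + eps) - b(theta - eps)) / (2 * eps)               # b'(theta)
b2 = (b(theta + eps) - 2 * b(theta) + b(theta - eps)) / eps**2   # b''(theta)

assert np.isclose(b1, p, atol=1e-6)            # mean of Bernoulli(p)
assert np.isclose(b2, p * (1 - p), atol=1e-4)  # variance, with phi = 1
```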
PHILIPPE RIGOLLET: So today, we're going to close this short chapter on Bayesian inference. Again, this was just an overview of what you can do in Bayesian inference. And last time, we started defining what's called Jeffreys priors. Right? So when you do Bayesian inference, you have to introduce a prior on your parameter. And we said that usually, it's something that encodes your domain knowledge about where the parameter could be. But there's also some principled way to do it, if you want to do Bayesian inference without really having to think about it. And for example, one natural choice was the non-informative priors, right? If you are on a compact set, it's the uniform prior on this set. If you're on an infinite set, you can still think of taking the prior that's always equal to 1-- and that's an improper prior if you are on an infinite set-- or proportional to one. And another prior that you can think of, in the case where you have a Fisher information which is well-defined, is something called Jeffreys prior. And this prior is a prior which is proportional to the square root of the determinant of the Fisher information matrix. And if you're in one dimension, it's basically proportional to the square root of the Fisher information coefficient-- whose inverse, we know, is the asymptotic variance of the maximum likelihood estimator. So the square root of this thing is basically one over the standard deviation of the maximum likelihood estimator. And you can compute this, right? We know that the asymptotic variance of the maximum likelihood estimator is going to be p times 1 minus p in the Bernoulli statistical experiment, so you get one over the square root of this thing. And for example, in the Gaussian setting, the Fisher information, even in the multivariate case, is actually going to be something like the identity matrix. So this is proportional to 1. It's the improper prior that you get in this case, OK? Meaning that, for the Gaussian setting, no place where you center your Gaussian is actually better than any other. All right. So we basically left off on this slide, where we saw that Jeffreys priors satisfy a reparametrization invariance-- they're invariant by transformation of your parameter-- which is a desirable property. It says that, well, if I have my prior on theta, and then I suddenly decide that theta is not the parameter I want to use to parametrize my problem-- what I actually want is phi of theta. So think, for example, of theta being the mean of a Gaussian, and phi of theta being the mean cubed, OK? This is a one-to-one map phi, right? So if I want to go from theta to theta cubed, and now I decide that this is the actual parameter that I want, well, then it means that, on this new parameter, my original prior is going to induce another prior. And here, it says, well, this induced prior is actually also Jeffreys prior, OK?
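(The Bernoulli computation just described, as a short check; the observation that Jeffreys prior for the Bernoulli normalizes to a Beta(1/2, 1/2) density is a standard fact, and scipy is assumed available.)

```python
import numpy as np
from scipy.stats import beta

# Jeffreys prior for Bernoulli(p): I(p) = 1 / (p (1 - p)),
# so pi(p) is proportional to 1 / sqrt(p (1 - p)).
p = np.linspace(0.01, 0.99, 99)
jeffreys = 1.0 / np.sqrt(p * (1 - p))

# Normalized, this is exactly the Beta(1/2, 1/2) density (constant = 1/pi).
assert np.allclose(jeffreys / np.pi, beta.pdf(p, 0.5, 0.5))
```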
So it's essentially telling you that, for this new parametrization, if you take Jeffreys prior, then you go back to having exactly something that's of the form square root of the determinant of the Fisher information, but this time with respect to your new parametrization. All right. And so why is this true? Well, it's just the change of variable theorem. So it's essentially telling you that, if you call-- let's call pi tilde of eta the prior over eta, and you have pi of theta as the prior over theta. Then, since eta is of the form phi of theta, just by change of variable-- so that's essentially a probability result-- it says that pi tilde of eta is equal to pi of theta times d theta over d eta and-- sorry, is that the one? Sorry, I'm going to have to write it, because I always forget this. So if I take a function-- OK, so I want a function h of eta that I can put here. And what I know is that this is h of phi of theta. All right? So, sorry, eta is phi of theta, right? Yeah. So what I'm going to do is I'm going to do the change of variable, theta is phi inverse of eta. So eta is phi of theta, which means that d eta is equal to phi prime of theta d theta, so d theta is d eta over phi prime of theta. So when I'm going to write this, I'm going to get the integral of h. Actually, let me write this, as I am more comfortable writing this as the expectation with respect to eta of h of eta. OK? So that's just eta being drawn from the prior. And I want to write this as the integral of h of eta times some function, right? So this is the integral of h of phi of theta pi of theta d theta. Now, I'm going to do my change of variable. So this is going to be the integral of h of eta, and then pi of-- so theta is phi inverse of eta-- and then d theta gives me a d eta over phi prime of theta, OK? And so what is pi at phi inverse of eta? So this thing is proportional-- we're in, say, dimension 1, so it's proportional to the square root of the Fisher information. And the Fisher information, we know, is the expectation of the square of the derivative of the log likelihood, right? So this is the square root of the expectation of d over d theta of log of-- well, now, I need the density. Well, let's just call it l of theta. And I want this to be taken at phi inverse of eta, squared. And then what I pick up is the-- so I'm going to put everything under the square root. So I get one over phi prime of theta squared, d eta. OK? So now, I have the expectation of a square. This does not depend-- sorry, this is l of theta. This is the expectation of l of theta of an x, right? That's for some variable, and the expectation here is with respect to x. That's just the definition of the Fisher information. So now I'm going to squeeze this guy into the expectation. It does not depend on x. It just acts as a constant. And so what I have now is that this is actually proportional to the integral of h of eta times the square root of the expectation with respect to x of what? Well, here, I have d over d theta of the log likelihood. And here, this guy-- phi prime of theta-- is really d eta over d theta, right? Agree? So now, what I'm really left with-- I get d over d theta times d theta over d eta, so that's just d over d eta of log of l of eta at x. And then this guy is now just d eta, right? OK, so this was a mess. This is a complete mess, because I actually wanted to use phi. I should not actually have introduced phi at all. I should just talk about d eta over d theta type of things. And then that would actually make my life so much easier. OK.
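Written compactly in terms of d theta / d eta, as suggested, the whole computation is just this (my cleanup of the board calculation, in one dimension):

```latex
\tilde\pi(\eta)
 = \pi\!\left(\phi^{-1}(\eta)\right)\left|\frac{d\theta}{d\eta}\right|
 \;\propto\; \sqrt{\mathbb{E}_x\!\left[\left(\frac{\partial}{\partial\theta}\log L_\theta(x)\right)^{\!2}\right]
               \left(\frac{d\theta}{d\eta}\right)^{\!2}}
 = \sqrt{\mathbb{E}_x\!\left[\left(\frac{\partial}{\partial\eta}\log L_\eta(x)\right)^{\!2}\right]}
 = \sqrt{\tilde I(\eta)},
```

where the last equality under the square root is just the chain rule, d/d eta = (d theta / d eta) times d/d theta.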
I'm not going to spend more time on this. This is really just the idea, right? You have the square root of a square in there. And then, when you do your change of variable, you just pick up something in here, and you move this thing in there-- it goes inside the square. And so your derivative of the log likelihood with respect to theta becomes a derivative of the log likelihood with respect to eta. And that's the only thing that's happening here. I'm just being super sloppy, for some reason. OK. And then, of course, now, what you're left with is that this is really just proportional-- well, this is actually equal. Everything is proportional, but this is equal to the Fisher information tilde with respect to eta now. Right? You're doing this with respect to eta. And so that's your new prior with respect to eta. OK. So one thing that you want to do, once you have-- so remember, when you actually compute your posterior, rather than having-- so you start with a prior, and you have some observations, let's say, x1 to xn. When you do Bayesian inference, rather than spitting out just some theta hat, which is an estimator for theta, you actually spit out an entire posterior distribution-- pi of theta, given x1 xn. OK? So there's an entire distribution on the [INAUDIBLE] theta. And you can actually use this to perform inference, rather than just having one number. OK? And so you could actually build confidence regions from this thing. OK. And so a Bayesian confidence interval-- so if your set of parameters is included in the real line, then you can actually-- well, it's not even guaranteed to be an interval. So let me call it a confidence region, so a Bayesian confidence region, OK? So it's just a random subset. So let's call it r, included in theta. And when you had the deterministic one, we had a definition which was with respect to the randomness of the data, right? That's how you actually had a random subset. So you had a random confidence interval. Here, it's actually conditioned on the data, but with respect to the randomness that you actually get from your posterior distribution. OK? So such that the probability that your theta belongs to this confidence region, given x1 xn, is, say, at least 1 minus alpha. Let's just take it equal to 1 minus alpha. OK, so that's a confidence region at level 1 minus alpha. OK, so that's one way. So why would you actually-- when I actually implement Bayesian inference, I'm actually spitting out that entire distribution. I need to summarize this thing to communicate it, right? I cannot just say, this is this entire function. I want to know where the regions of high probability are, where my parameter is supposed to be. And so here, when I have this thing, what I actually want is something that summarizes this thing into some subset of the real line, on which I'm sure that the area under the curve of my posterior is actually 1 minus alpha. And there are many ways to do this, right? So one way to do this is to look at level sets. And so rather than actually-- so let's say my posterior looks like this. I know, for example, if I have a Gaussian distribution, my posterior is actually going to be Gaussian. And what I can do is to try to cut it here on the y-axis so that now, the area under the curve, when I cut here, is actually 1 minus alpha. OK, so I have some threshold tau.
If tau goes to plus infinity, then I'm going to have that this area under the curve here is going to-- AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Well, no. So the area under the curve, when tau is going to plus infinity-- think of when tau is just right here. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: So this is actually going to 0, right? And so I start here. And then I start going down and down and down and down, until I actually get an area that's gone up to 1 minus alpha. And if tau is going down to 0, then my area under the curve is going to-- if tau is here, I'm cutting nowhere. And so I'm getting 1, right? Agree? Think of, when tau is very close to 0, I'm cutting very far down here. And so I'm getting some area under the curve which is almost everything. And so it's going to 1 as tau goes down to 0. Yeah? AUDIENCE: Does this only work for [INAUDIBLE] PHILIPPE RIGOLLET: No, it does not. I mean-- so this is a picture. So those two things work for all of them, right? But when you have a bimodal one, actually, this is when things start to become interesting, right? So when we built a frequentist confidence interval, it was always of the form x bar plus or minus something. But now, if I start to have a posterior that looks like this, when I start cutting off, I'm going to have two-- I mean, my confidence region is going to be the union of those two things, right? And it really reflects the fact that there is this bimodal thing. It's going to say, well, with high probability, I'm actually going to be either here or here. Now, the meaning of a Bayesian confidence region and the meaning of a confidence interval are completely distinct notions, right? And I'm going to work out an example with you so that we can actually see that sometimes-- I mean, for both of them, actually, you can come up with some crazy paradoxes. So since we don't have that much time, I will actually talk to you about why, in some instances, it's actually a good idea to think of Bayesian confidence intervals rather than frequentist ones. So before we go into more details about what those Bayesian confidence intervals are, let's remind ourselves: what does it mean to have a frequentist confidence interval? Right? OK. So when I have a frequentist confidence interval, let's say something like x bar n minus 1.96 sigma over root n to x bar n plus 1.96 sigma over root n-- that's the confidence interval that you get for the mean of some Gaussian with known variance equal to sigma squared, OK. So what we know is that the meaning of this is: the probability that theta belongs to this is equal to 95%, right? And more generally, you can think of this being q alpha over 2, and what you're going to get is 1 minus alpha here, OK? So what does it mean here? Well, it looks very much like what we have here, except that we're not conditioning on x1 xn. And we should not. Because there was a question like that in the midterm-- if I condition on x1 xn, this probability is either 0 or 1. OK? Because once I condition-- so here, this probability is with respect to the randomness in x1 xn. So if I condition-- so let's call this thing r freq, for frequentist. Well, given x1 xn-- and actually, I don't need to know x1 xn really; what I need to know is what xn bar is. Well, this thing now is what? It's 1 if theta is in r, and it's 0 if theta is not in r, right? That's all there is. This is a deterministic confidence interval, once I condition on x1 xn. So I have a number.
The average is maybe 3. And so I get 3. Either theta is between 3 minus 0.5 and 3 plus 0.5, or it's not. And so there's basically-- I mean, I write it as a probability, but it's really not a probabilistic statement. Either it's true or not. Agreed? So what does it mean to have a frequentist confidence interval? It means that if I were-- and here is where the word frequentist comes from-- it says that if I repeat this experiment over and over, meaning that on Monday, I collect a sample of size n, and I build a confidence interval, and then on Tuesday, I collect another sample of size n, and I build a confidence interval, and on Wednesday, I do this again and again, what's going to happen is the following. I'm going to have my true theta that lives here. And then on Monday, this is the confidence interval that I build. OK, so this is the real line. The true theta is here, and this is the confidence interval I build on Monday. All right? So x bar was here, and this is my confidence interval. On Tuesday, I build this confidence interval, maybe. x bar was closer to theta, but smaller. But then on Wednesday, I build this confidence interval. I'm not here. It's not in there. And that's this case. Right? It happens that it's just not in there. And then on Thursday, I build another one. I almost miss it, but I'm in there, et cetera. Maybe here-- here, I miss again. And so what it means to have a confidence interval-- so what does it mean to have a confidence interval at 95%? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, so it means that if I repeat this, the frequency of times-- hence, the word frequentist-- at which I'm actually going to contain theta should be 95%. That's what frequentist means. So it's just a matter of trusting that. On one given realization of your data, it's not telling you anything-- [INAUDIBLE] it's there or not. So it's not really something that assesses the confidence of your decision, whether theta is in there or not. It's something that assesses the confidence you have in the method that you're using. If you were to repeat it over and again, it would be 95% of the time correct, right? So for example, we know that we could build a test. So it's pretty clear that you can actually build a test for whether theta is equal to theta naught or not equal to theta naught, by just checking whether theta naught is in a confidence interval or not. And what it means is that, if you actually are doing those tests at 5%, that means that 5% of the time, if you do this over and again, you're going to be wrong. I mentioned my wife does market research. And she does maybe, I don't know, 100,000 tests a year. And if they do all of them at 1%, then 1% of the time-- which is a lot when you do 100,000 a year-- you're wrong. That's 1,000 of them that are actually wrong. OK, I mean, she's actually hedging against the fact that 1% of them are going to be wrong. That's 1,000 of them that are going to be wrong. Just like, if you do this 100,000 times at 95%, 5,000 of those guys are actually not going to be the correct ones. OK? So I mean, it's kind of scary. But that's the way it is. So that's what the frequentist interpretation of this is. Now, as I mentioned when we started this Bayesian chapter, I said Bayesian statistics converge to-- I mean, Bayesian decisions and Bayesian methods converge to frequentist methods.
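That Monday-through-Thursday picture is easy to simulate. A minimal sketch, assuming the known-variance Gaussian model from the slide (the sample size and the number of repetitions are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n, reps = 3.0, 1.0, 25, 100_000   # true mean, known sd, sample size
half = 1.96 * sigma / np.sqrt(n)                # half-width of the 95% interval

# Each row is one "day": a fresh sample of size n and one confidence interval.
xbar = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
covered = (xbar - half <= theta) & (theta <= xbar + half)
print(covered.mean())   # ~0.95: the method catches theta about 95% of the time
```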
When the sample size is large enough, they lead to the same decisions. And in general, they need not be the same, but they tend, when the sample size is large enough, to have the same behavior. Think about, for example, the posterior that you have in the Gaussian case, right? We said that, in the Gaussian case, what you're going to see is that it's as if you had an extra observation which was essentially given by your prior. OK? And now, what's going to happen is that, when this is just one observation among n plus 1, it's really going to be totally drowned out, and you won't see it when the sample size grows larger. So Bayesian methods are particularly useful when you have a small sample size. And when you have a small sample size, the effect of the prior is going to be bigger. But most importantly, you're not going to have to repeat this thing over and again. And you're going to have a meaning. You're going to have something that has a meaning for this particular data set that you have. When I said that the probability that theta belongs to r-- and here, I'm going to specify the fact that it's a Bayesian confidence region, like this one-- this is actually conditional on the data that you've collected. It says, given this data, given the points that you have-- just put in some numbers, if you want, in there-- it's actually telling you the probability that theta belongs to this Bayesian confidence region. Here, since I have conditioned on x1 xn, this probability is really just with respect to theta drawn from the posterior, right? And so now, it has a slightly different meaning. It's really making a statement about where the regions of high probability of your posterior are. Now, why is that useful? Well, there's actually an interesting story that goes with Bayesian methods. Anybody know the story of the USS-- I think it's Scorpion? Do you know the story? So that was an American vessel that disappeared. I think it was close to Bermuda or something. You could tell the story of the Malaysian Airlines flight, except that I don't think that one is such a successful story. But the idea was, essentially, we're trying to find where this thing went down. And of course, this is a one-time thing. You need something that works once. You need something that works for this particular vessel. If you go to the Navy and you tell them, well, here's a method, and for 95 out of 100 vessels that you're going to lose, we're going to be able to find it-- they don't care. They want this to work for this particular one. And so they were looking, and they were diving in different places. And suddenly, they brought in this guy. I forget his name. I mean, there's a whole story about this on Wikipedia. And he started collecting the data that they had from different dives and maybe from currents. And he started to put everything in. And he said, OK, what is the posterior distribution of the location of the vessel, given all the things that I've seen? And what have you seen? Well, you've seen that it's not here, it's not there, and it's not there. And you've also seen that the currents were going that way, and the winds were going that way. And you can actually put some modeling effort in to understand this. Now, given this, for this particular data that you have, you can actually think of having a two-dimensional density that tells you where it's more likely that the vessel is. And where are you going to be looking first?
Well, if it's a multimodal distribution, you're just going to go to the highest mode first, because that's where it's the most likely to be. And maybe it's not there, so you're just going to update your posterior, based on the fact that it's not there, and do it again. And actually, after two dives, I think, he actually found the thing. And that's exactly where Bayesian statistics start to kick in. Because you put a lot of knowledge into your model, but you can also factor in a bunch of information, right? The model-- he had to build a model that was actually taking into account the currents and the winds. And what you can have as a guarantee is that, when you talk about the probability that this vessel is in this location, given what you've observed in the past, it actually has some sense. Whereas, if you were to use a frequentist approach, then there's no probability. Either it's underneath this position or it's not, right? So that's actually where it starts to make sense. And so you can actually build this. And there are actually a lot of methods for search that are based on Bayesian methods. I think, for example, the search for the Higgs boson was based on a lot of Bayesian methods, because this is something you need to find [INAUDIBLE], right? I mean, there was a lot of prior knowledge that had to be built in. OK. So now, you build this confidence interval. And the nicest way to do it is to use level sets. But again, just like for Gaussians-- I mean, even in the Gaussian case, I decided to go with x bar plus or minus something, but I could go with something that's completely asymmetric. So what's happening is that here, this level-set method guarantees that you're going to have the narrowest possible confidence intervals. That's essentially what it's telling you, OK? Because every time I'm choosing a point, starting from here, I'm actually putting as much area under the curve as I can. All right. So those are called Bayesian confidence intervals. Oh yeah, and I promised you that we're going to work on some example that actually gives a meaning to what I just told you, with actual numbers. So this is something that's taken from Wasserman's book. And also, it's coming from a stats paper, from [? Wolpert ?] and I don't know who, from the '80s. And essentially, this is how it works. So assume that you have n equals 2 observations. So those observations are-- let's call them x1, which is theta plus epsilon 1, and x2, which is theta plus epsilon 2, where epsilon 1 and epsilon 2 are iid. And the probability that epsilon i is equal to plus 1 is equal to the probability that epsilon i is equal to minus 1, which is equal to 1/2. OK, so it's just the uniform sign, plus or minus 1, OK? Now, let's think about-- so you're trying to do some inference on theta, based only on x1 and x2. OK? So I'm going to actually build a confidence interval. But let's start thinking about how I would find an estimator from those two things. Well, what values am I going to be getting, right? So each observation is going to be either theta plus 1 or theta minus 1. And actually, I can get basically four different pairs of observations-- plus plus, plus minus, minus plus, and minus minus, for the signs of the epsilons. Agreed? Those are the four possible observations that I can get. Agreed?
Either they're both equal to plus 1, both equal to minus 1, or one of the two is equal to plus 1 and the other one to minus 1-- the epsilons, that is. OK. So those are the four observations I can get. So in particular, if they take the same value, I know it's either theta plus 1 or theta minus 1. And if they take different values, I know one of them is theta plus 1, and one is actually theta minus 1. So in particular, if I take the average of those two guys when they take different values, I know I'm actually getting theta right. So let's build a confidence region. OK, so I'm actually going to take a confidence region which is just a singleton. And I'm going to say the following. Well, if x1 is equal to x2, I'm just going to take x1 minus 1, OK? So I'm just saying, well, I'm never going to be able to resolve whether it's plus 1 or minus 1, so I'm just going to take a stance and say, well, it's just plus 1. OK? And then, if they're different, then here, I can do much better. I'm going to actually just take the average. OK? Now, what I claim is that this is a confidence region-- and by default, when I don't mention it, this is a frequentist confidence region-- at level 75%. OK? So let's just check that. To check that this is correct, I need to check that the probability, under the realization of x1 and x2, that theta is one of those two guys is actually equal to 0.75. Yes? AUDIENCE: What are the [INAUDIBLE] PHILIPPE RIGOLLET: Well, it's just a frequentist confidence interval that does not need to be an interval. Actually, in this case, it's going to be an interval. But that's just what it means. Yeah, "region" for Bayesian was just because-- I mean, the confidence intervals, when we're frequentist, we tend to make them intervals. But when you're Bayesian, and you're doing this level set thing, you cannot really guarantee, unless its [INAUDIBLE], that it's going to be an interval. So region is just a way to not have to say interval, in case it's not. OK. So I have this thing. So what I need to check is the probability that theta is in one of those two things, right? So what I need to find is the probability that theta is equal to x1 minus 1 and x1 is equal to x2-- and those are disjoint events-- plus the probability that theta is equal to x1 plus x2 over 2 and x1 is different from x2. OK. And OK, just before we actually finish the computation, why do I have 75%? 75% is 3/4. So it means that we have four cases. And essentially, I did not account for one case. And it's true. I did not account for this case, when both of the epsilon i's are equal to minus 1. Right? So this is essentially the one I'm not going to be able to account for. And we'll see that in a second. So in this case, we know that everything goes great. Right? OK. Well, let's just start from the first line. So the first line is the probability that theta is equal to x1 minus 1 and those two are equal. So this is the probability that theta is equal to-- well, this is theta plus epsilon 1 minus 1-- and epsilon 1 is equal to epsilon 2, right? Because I can remove the theta from here, and I can actually remove the theta from here, so that this guy here is just: epsilon 1 is equal to 1. So when I intersect with this guy, it's actually the same thing as epsilon 1 is equal to 1 and epsilon 2 is equal to 1, as well, OK?
So this first thing is actually equal to the probability that epsilon 1 is equal to 1 and epsilon 2 is equal to 1, which is equal to what? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, 1/4, right? So that's just the first case over there. They're independent. Now, I still need to do the second one. So this case is what? Well, x1 plus x2 over 2 is what? I get theta plus epsilon 1 plus epsilon 2 over 2. So the second term is just equal to the probability that epsilon 1 plus epsilon 2 over 2 is equal to 0 and epsilon 1 is different from epsilon 2. Agreed? I just removed the thetas from these equations, because I can. They're just on both sides every time. OK. And so that means what? That means that the second part-- so this thing is actually equal to 1/4 plus the probability that epsilon 1 plus epsilon 2 over 2 is equal to 0. I can remove the 2. So this is just the probability that one is 1 and the other one is minus 1, right? So that's equal to the probability that epsilon 1 is equal to 1 and epsilon 2 is equal to minus 1, plus the probability that epsilon 1 is equal to minus 1 and epsilon 2 is equal to plus 1, OK? Because they're disjoint events, so I can break them into the sum of the two. And each of those guys is also one of the atomic parts-- one of the basic things. And so each of those guys has probability 1/4. And so here, we can really see that we accounted for everything except for the case when epsilon 1 was equal to minus 1 and epsilon 2 was equal to minus 1. So this is 1/4. This is 1/4. So the whole thing is equal to 3/4. So now, what we have is that the probability that theta belongs to this confidence region is equal to 3/4. And that's very nice. But the thing is, some people are sort of-- I mean, it's not super nice, because, in a way, I know that, if I observe x1 and x2 that are different, I know for sure that I actually got the right theta, right? That this confidence interval is actually happening with probability 1. And the problem is that I cannot make this precise with the notion of frequentist confidence intervals. OK? Because frequentist confidence intervals have to account for the fact that, in the future, it might not be the case that x1 and x2 are different. So Bayesian confidence regions, by definition-- well, they're all gone-- but they are conditioned on the data that I have. And so that's what I want. I want to be able to make this statement conditionally on the data that I have. OK. So if I want to be able to make this statement, if I want to build a Bayesian confidence region, I'm going to have to put a prior on theta. So without loss of generality-- I mean, maybe with some-- but let's assume that pi is a prior on theta. And let's assume that pi of j is strictly positive for all integers j-- for all j in the integers, positive or negative. OK. So that's a pretty weak assumption on my prior. I'm just assuming that theta is some integer. And now, let's build our Bayesian confidence region. Well, if I want to build a Bayesian confidence region, I need to understand what my posterior is going to be. OK? And if I want to understand what my posterior is going to be, I actually need to build a likelihood, right? So we know that the posterior is the product of the likelihood and of the prior, divided by-- OK. So what is my likelihood? So my likelihood is the probability of x1, x2, given theta. Right? That's what the likelihood should be.
And now let's say that, just to make things a little simpler, x1 is equal to, I don't know, 5, and x2 is equal to 7. OK? So I'm not going to take the case where they're actually equal to each other, because I know that, in this case, x1 and x2 are different, and I'm going to actually nail exactly what theta is by looking at the average of those guys, right? Here, it must be that theta is equal to 6. So what I want is to compute the likelihood at 5 and 7, OK? And what is this likelihood? Well, if theta is equal to 6, that's just the probability that I observe 5 and 7, right? So what is the probability I observe 5 and 7? Yeah? AUDIENCE: 1/4. PHILIPPE RIGOLLET: That's 1/4, right? It's the probability that I have minus 1 for epsilon 1 and plus 1 for epsilon 2. So if theta is 6, this is the probability that epsilon 1 is equal to minus 1 and epsilon 2 is equal to plus 1, which is equal to 1/4. So this probability is 1/4. If theta is different from 6, what is this probability? So if theta is different from 6, since we know that we've only loaded the integers-- so if theta has to be another integer, what is the probability that I see 5 and 7? AUDIENCE: 0. PHILIPPE RIGOLLET: 0. So that's my likelihood. And if I want to know what my posterior is, well, it's just pi of theta times p of 5, 7, given theta, divided by the sum over all T's, say, in Z-- so now, I just need to normalize this thing-- of pi of T times p of 5, 7, given T. Agreed? That's just the definition of the posterior. But when I sum these guys, there's only one that counts, because we know that this is actually equal to 0 for every T, except for when T is equal to 6. So this entire sum here is actually equal to pi of 6 times p of 5, 7, given that theta is equal to 6, which we know is equal to 1/4. And I did not tell you what pi of 6 was. But it's the same thing here. The posterior for any theta that's not 6 is actually going to be-- this guy's going to be equal to 0. So I really don't care what this guy is. So what it means is that my posterior becomes what? The posterior pi of theta, given 5 and 7, is equal to-- well, when theta is not equal to 6, this is actually 0. So regardless of what I do here, I get something which is 0. And if theta is equal to 6, what I get is pi of 6 times p of 5, 7, given 6-- which I've just computed here, which is 1/4-- divided by pi of 6 times 1/4. So it's the ratio of two things that are identical. So I get 1. So now, my posterior tells me that, given that I observe 5 and 7, theta has to be 6 with probability 1. So now, I say that this thing here-- well, this is not something that actually makes sense when I talk about frequentist confidence intervals. It doesn't really make sense to talk about confidence intervals, given something. But now, given that I observe 5 and 7, I know that theta is equal to 6 with probability 1. And in this sense, the Bayesian confidence interval is actually more meaningful. So one thing I want to actually say about this Bayesian confidence interval is that-- I mean, here, its probability is equal to 1, right? So it really encompasses the thing that we want. But the fact that we actually computed it using the Bayesian posterior and the Bayes rule did not really matter for this argument. All I said was that it had a prior.
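A small simulation makes both statements concrete at once (my own illustration, not from the lecture): the unconditional, frequentist coverage of this region is 3/4, while conditionally on observing x1 different from x2, it is exactly 1.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, reps = 6, 100_000
eps1 = rng.choice([-1, 1], size=reps)
eps2 = rng.choice([-1, 1], size=reps)
x1, x2 = theta + eps1, theta + eps2

# The region from the lecture: {x1 - 1} if x1 == x2, else {(x1 + x2) / 2}.
guess = np.where(x1 == x2, x1 - 1, (x1 + x2) / 2)
hit = guess == theta

print(hit.mean())             # ~0.75: unconditional (frequentist) coverage
print(hit[x1 != x2].mean())   # exactly 1.0: the average nails theta in this case
print(hit[x1 == x2].mean())   # ~0.5: in the tie case, it's a coin flip
```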
But what I want to illustrate is the fact that we can actually give a meaning to the probability that theta is equal to 6, given that I see 5 and 7, whereas we cannot really in the other case. And we don't have to be particularly precise in the prior on theta to be able to give this meaning. OK? All right. So now, as I said, I think the main power of Bayesian inference is that it spits out the posterior distribution, and not just a single number, like frequentists would give you-- a theta hat, a point estimate, that we can maybe decorate with some confidence interval, maybe a bunch of tests. But at the end of the day, we just have, essentially, one number. Then maybe we can understand where the fluctuations of this number are in a frequentist setup. But the Bayesian framework is essentially giving you a natural method. And you can interpret it in terms of the probabilities that are associated to the prior. But you can actually also try to make some-- so if you give a Bayesian any prior, they're going to actually build an estimator from this prior, maybe from the posterior. And maybe it's going to have some frequentist properties. And that's what's really nice about Bayesian methods: you can actually try to establish frequentist properties of estimators that are built using the Bayesian methodology. But you cannot really go the other way around. If I give you a frequentist methodology, how are you going to say something about the fact that there's a prior going on, et cetera? And so this is actually one of the things-- there's actually some research that's going on about this. They call it Bayesian posterior concentration. And one of the things-- so there's something called the Bernstein-von Mises theorem. Those are a class of theorems, and those are essentially results that tell you: well, if I actually run a Bayesian method, and I look at the posterior that I get-- it's going to be something like this-- but now I try to study this from a frequentist point of view. There's actually a true parameter theta somewhere, the true one. There's no prior for this guy. This is just one fixed number. Is it true that, as my sample size goes to infinity, this thing is going to concentrate around theta? And the rate of concentration of this thing-- the size of this width, the standard deviation of this thing-- is something that should decay maybe like 1 over square root of n, or something like this. And when you characterize the rate of posterior concentration, that's called a Bernstein-von Mises theorem. And so people are looking at this in some non-parametric cases. You can do it in pretty much everything we've been doing before. You can do it for non-parametric regression estimation or density estimation. You can do it, of course, for sparse estimation, if you want. OK. So you can actually compute the procedure and-- yeah. And so you can think of it as being just a method somehow. Now, the estimator I'm talking about-- so that's just general Bayesian posterior concentration. But you can also try to understand what the properties are of something that's extracted from this posterior. And one thing that we actually described was, for example, well, given this guy, maybe it's a good idea to think about what the mean of this thing is, right? So there's going to be some theta hat, which is just the integral of theta times pi of theta, given x1 xn-- so that's my posterior-- d theta. Right?
So that's the posterior mean. That's the expected value with respect to the posterior distribution. And I want to know how this thing behaves, how close it is to a true theta, if I actually am in a frequentist setup. So that's the posterior mean. But this is not the only thing I can actually spit out, right? This is definitely uniquely defined. If you give me a distribution, I can actually spit out its posterior mean. But I can also think of the posterior median. But now, if this is not continuous, you might have some uncertainty. Maybe the median is not uniquely defined, and so maybe that's not something you use as much. Maybe you can actually talk about the posterior mode. All right, so for example, if your posterior density looks like this, then maybe you just want to summarize your posterior with this number. So clearly, in this case, it's not such a good idea, because you completely forget about this mode. But maybe that's what you want to do. Maybe you want to focus on the most peaked mode. And this is actually called maximum a posteriori. As I said, maybe you want to sample from this posterior distribution. OK, and so in all these cases, these Bayesian estimators will depend on the prior distribution. And the hope is that, as the sample size grows, you won't see that dependence anymore. OK. So to conclude, let's just do a couple of experiments. So if I look at-- did we do this? Yes. So for example, let's focus on the posterior mean. And we know-- so remember, in example one, what we had was x1 xn that were iid Bernoulli p, and the prior I put on p was a beta with parameters a, a. OK? And if I go back to what we computed, you can actually compute the posterior of this thing. And we know what it's actually going to be-- sorry, that was uniform? Where is-- yeah. So what we get is that the posterior is actually going to be a beta with parameters a plus the number of 1s and a plus the number of 0s. OK? And the beta was just something whose density looked like p to the a minus 1 times 1 minus p to the b minus 1. OK? So if I want to understand the posterior mean, I need to be able to compute the expectation of a beta, and then maybe plug in a plus this guy for a, and a plus n minus this guy for b. OK. So actually, let me do this. OK. So what is the expectation? So what I want is something that looks like the integral between 0 and 1 of p times p to the a minus 1 times 1 minus p to the b minus 1. And then there's a normalizing constant. Let's call it c. OK? So this is what I need to compute. So that's c of a and b. Do we agree that this is the mean with respect to a beta with parameters a and b? Right? I just integrate p against the density. So what does this thing look like? Well, I can actually move this guy in here. And here, I'm going to have a plus 1 minus 1. OK? So the problem is that the constant is going to play a big role, right? Because this is essentially equal to c of a plus 1, b divided by c of a, b, where c of a plus 1, b is just the normalizing constant of a beta with parameters a plus 1 and b. So I need to know the ratio of those two constants. And this is just a calculus exercise. So in this case, what you get is-- sorry. In this case, you get-- well, OK, so we get essentially a divided by, I think it's a plus b. Yeah, it's a plus b. So that's this quantity. OK? And when I plug in a to be this guy and b to be this guy, what I get is a plus the sum of the xi.
And then, in the denominator, I get a plus this guy, plus a plus n minus this guy. So those two sums go away, and I'm left with 2a plus n-- which does not work. No, that actually works. And so now, I can actually divide and get this thing over there. OK. So the reason why I wrote it this way is that you can really see that, as n goes to infinity, this thing behaves like xn bar, which is our frequentist estimator. The effect of a is actually going away. The effect of the prior, which is completely captured by a, is going away as n goes to infinity. Is there any question? You guys have a question. What is it? Do you have a question? AUDIENCE: Yeah, on the board, is that divided by some [INAUDIBLE] stuff? PHILIPPE RIGOLLET: Is that divided by what? AUDIENCE: That a over a plus b, and then you just expanded-- PHILIPPE RIGOLLET: Oh yeah, yeah, then I said that this is equal to this, right. So that's for a becomes a plus the sum of the xi's, and b becomes a plus n minus the sum of the xi's. OK. So that's just for the posterior one. AUDIENCE: What's [INAUDIBLE] PHILIPPE RIGOLLET: This guy? AUDIENCE: Yeah. PHILIPPE RIGOLLET: 2a. AUDIENCE: 2a. Oh, OK. PHILIPPE RIGOLLET: Right. So I get a plus a plus n. And then those two guys cancel. OK? And that's what you have here. So for a equal to 1/2-- and I claim that this is Jeffreys prior. Because remember, Jeffreys prior was proportional to one over the square root of p(1 minus p), which I can write as p to the 1/2 minus 1 times 1 minus p to the 1/2 minus 1. So it's just the case a is equal to 1/2. OK. So if I use Jeffreys prior, I just plug in a equal to 1/2, and this is what I get. OK? So those things are going to have an impact when n is only moderately large. For large n, whether you take Jeffreys prior or you take whatever a you prefer, it's going to have no impact whatsoever. But if n is of the order of 10, maybe, then you're going to start to see some impact, depending on what a you want to pick. OK. And then in the second example, well, here we actually computed the posterior to be this guy. Well, here, I can just read off what the expectation is, right? I mean, I don't have to actually compute the expectation of a Gaussian. It's just xn bar. And so in this case, when I have a non-informative prior for a Gaussian, I have basically xn bar. As you can see, actually, this is an interesting example. When I actually look at the posterior, it's not something that costs me a lot to communicate to you, right? There's one symbol here, one symbol here, and one symbol here. I tell you the posterior is a Gaussian with mean xn bar and variance 1/n. When I actually turn that into a posterior mean, I'm dropping all this information. I'm just giving you the first parameter. So you can see there's actually much more information in the posterior than there is in the posterior mean. The posterior mean is just a point. It's not telling me how confident I am in this point. And this thing is actually very interesting. OK. So you can talk about the posterior variance that's associated to it, right? As an output, you could give the posterior mean and the posterior variance. And those things are actually interesting. All right. So I think this is it. So as I said, in general, just like in this case, the impact of the prior is washed away as the sample size goes to infinity. Well, like here, there's no impact of the prior at all. It was a non-informative one.
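Here is a short sketch of that washing-out effect (my own numbers, not from the lecture; a = 1/2 is the Jeffreys choice from above):

```python
import numpy as np

rng = np.random.default_rng(2)
p_true, a = 0.3, 0.5   # Beta(a, a) prior with a = 1/2, i.e., Jeffreys

for n in (10, 100, 10_000):
    x = rng.binomial(1, p_true, size=n)
    posterior_mean = (a + x.sum()) / (2 * a + n)   # the formula from the board
    print(n, x.mean(), round(posterior_mean, 4))   # the prior's pull fades with n
```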
But if you actually had an informative one-- cf. the homework-- yeah? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, so, cf. the homework, you would actually see an impact of the prior, which, again, would be washed away as your sample size increases. Here, it goes away. You just get xn bar. And actually, in these cases, you see that the Bayesian estimator is asymptotically normal. This is different from the distribution of the posterior, right? This is just the posterior mean, which happens to be asymptotically normal. But the posterior may not be-- I mean, here, the posterior is a beta, right? It's not normal. OK, so those things are two different things. Your question? AUDIENCE: What was the prior [INAUDIBLE] PHILIPPE RIGOLLET: All 1, right? That was the improper prior. AUDIENCE: OK. And so that would give you the same thing as [INAUDIBLE], not just the proportion. PHILIPPE RIGOLLET: Well, I mean, yeah. So we said that, when you have a non-informative prior, essentially, the maximum likelihood estimator is the maximum a posteriori, right? But in this case, there's so much symmetry that the posterior is completely symmetric around its maximum. So it means that the expectation is equal to the maximum-- to the arg max. Yeah? AUDIENCE: I read somewhere that one of the issues with Bayesian methods is that if we choose the wrong prior, it could mess up your results. PHILIPPE RIGOLLET: Yeah, but hence, do not pick the wrong prior. I mean, of course, it would mess up your results. You're putting in extra information. But you could say the same thing by saying, well, the issue with frequentist methods is that, if you mess up the choice of your likelihood, then it's going to mess up your output. So here, you just have two chances of messing it up, right? You have-- well, it's gone-- the product of the likelihood and the prior, and you have one more chance to mess up. But it's true: if you assume that the model is right, then, of course, picking the wrong prior could completely mess up things-- if your prior, for example, has no support on the true parameter. But if your prior puts positive weight on the true parameter, then as n goes to infinity-- I mean, OK, I cannot speak for all counterexamples in the world. But I'm sure, under minor technical conditions, you can guarantee that your posterior mean is going to converge to what you need it to converge to. Any other question? All right. So I think this closes the more traditional-- not mathematical, but traditional-- statistics part of this class. And from here on, we'll talk about more multivariate statistics, starting with principal component analysis. So that's more like when you have multivariate data. We started, in a way, to talk about multivariate statistics when we talked about multivariate regression. But we'll move on to principal component analysis. I'll talk a bit about multiple testing. I haven't made up my mind yet about what we'll really talk about in December. But I want to make sure that you have a taste and a flavor of what is interesting in statistics these days, especially as you go towards more machine learning types of questions, where, really, the focus is on prediction rather than on the modeling itself.
We'll talk about logistic regression as well, for example, which is a generalized linear model-- the generalization of regression to the case where y does not take values in the whole real line, but maybe in 0, 1, for example. All right. Thanks.
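As a tiny preview of that (my own sketch, not from the lecture): in logistic regression, the mean of a y in {0, 1} is linked to the linear predictor through the logistic function, so that the fitted value is always a valid probability.

```python
import numpy as np

# Logistic link: P(y = 1 | x) = 1 / (1 + exp(-x.dot(beta))) keeps the mean in (0, 1),
# the way the identity link keeps it on the whole real line for least squares.
x = np.array([1.0, 2.0, -0.5])      # one hypothetical feature vector (with intercept)
beta = np.array([0.1, -0.3, 0.8])   # hypothetical coefficients
p = 1.0 / (1.0 + np.exp(-x @ beta))
print(p)                            # a value in (0, 1)
```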
MIT_18650_Statistics_for_Applications_Fall_2016
8_Parametric_Hypothesis_Testing_cont.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: We're talking about tests. And to be fair, we spent most of our time talking about the new jargon that we're using. The main goal is to take a binary decision, yes or no. So just so that we're clear and we make sure that we all speak the same language, let me just remind you what the key words are for tests. So the first thing is that we split theta into theta 0 and theta 1. Both are included in theta, and they are disjoint. So I have my set of possible parameters. And then I have theta 0 here, theta 1 here. And there might be something that I leave out. And so what we're doing is, we have two hypotheses. So here's our hypothesis testing problem. It's h0: theta belongs to theta 0, versus h1: theta belongs to theta 1. This guy was called the null, and this guy was called the alternative. And why we give them special names is because we saw that they have asymmetric roles. The null represents the status quo, and data is here to bring evidence against this guy. And we can really never conclude that h0 is true, because all we could conclude is that h1 is not true, or may not be true. So that was the first thing: the hypotheses. The next thing is, what is a test? Well, psi-- it's a statistic, and it takes the data and maps it into 0 or 1. And I didn't really mention it, but there's something called randomized tests, which is, well, if I cannot really make a decision, I might as well flip a coin. That tends to be biased, but that's really-- I mean, think about it in practice. You probably don't want to make decisions based on flipping a coin. And so what people typically do-- this is happening, typically, at one specific value. So rather than flipping a coin for this very specific value, what people typically do is they say, OK, I'm going to side with h0, because that's the most conservative choice I can make. So in a way, they think of flipping this coin, but it always falls on heads, say. So associated to this test was something called the rejection region, r psi, which is just the set of data x1 xn such that psi of x1 xn is equal to 1. So that means we reject h0 when the test is 1. And those are the set of data points that are going to lead me to reject. And then the things that were actually a little more important, and really peculiar to tests, specific to tests, were the type I and type II errors. So the type I error arises when-- so the type I error is when you reject, whereas h0 is correct. And the type II error is the opposite, so it's failing to reject, whereas h1 is correct, yeah. So those are the two types of errors you can make. And we quantified their probabilities. So alpha psi is the probability of type I error: alpha psi of theta is the probability, under theta, that psi rejects, and that's defined for theta in theta 0-- so for different values of theta in theta 0. So h0 being correct means there exists a theta in theta 0 for which that actually is the right distribution. So for different values of theta, I might make different errors. So if you think, for example, about the coin example, I'm testing if the coin is biased towards heads or biased towards tails.
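For reference, the quantity just defined-- together with the type II error and the power, which come just below-- can be written compactly as follows (my transcription of the definitions):

```latex
\begin{aligned}
\alpha_\psi(\theta) &= \mathbb{P}_\theta(\psi = 1), \quad \theta \in \Theta_0,
  &&\text{(probability of type I error)}\\
\beta_\psi(\theta)  &= \mathbb{P}_\theta(\psi = 0), \quad \theta \in \Theta_1,
  &&\text{(probability of type II error)}\\
\pi_\psi &= \min_{\theta \in \Theta_1} \mathbb{P}_\theta(\psi = 1).
  &&\text{(power)}
\end{aligned}
```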
So if I'm testing whether p is larger than 1/2 or less than 1/2-- let's say our h0 is p larger than 1/2-- then when p is equal to 1, it's actually very difficult for me to make a mistake, because I only see heads. So when p is getting closer to 1/2, I'm going to start making more and more probability of error. And so the type II error-- the probability of type II error-- is denoted by beta psi. And it's the function, well, that does the opposite and, this time, is defined for theta in theta 1. And finally, we define something called the power, pi of psi. And this time, this is actually a number. And this number is equal to the minimum over theta in theta 1-- I mean, that could be an infimum, but think of it as being a minimum-- of p theta of psi equal to 1. Sorry, that's theta 1, right? Give me one sec. Yes, that's the min over theta 1. So this is the probability of not making a mistake: if theta is in theta 1 and I conclude 1, this is a good thing. I want this number to be large. And I'm looking at the worst case-- what is the smallest value this number can be? So what I want to show you a little bit is a picture. So now I'm going to take theta, and think of it as being a p. So I'm going to take p as the parameter of the experiment. So p can range between 0 and 1, that's for sure. And what I'm going to try to test is whether p is less than 1/2 or larger than 1/2. So this is going to be, let's say, theta 0. And this guy here is theta 1. I'm just trying to give you a picture of what those guys are. So I have my y-axis, and now I'm going to start drawing numbers. All these things-- this function, this function, and this number-- are all numbers between 0 and 1. So now I'm claiming that-- so when I move from left to right, what is my probability of rejecting going to do? So what I'm going to plot is the probability under theta-- the first thing I want to plot is the probability under theta that psi is equal to 1. And let's say psi-- think of psi as being just the indicator that square root of n times xn bar minus p, over the square root of xn bar times 1 minus xn bar, is larger than some constant c, for a properly chosen c. So what we choose is c in such a way that, at 1/2, when we're testing for 1/2, what we wanted was this number to be equal to alpha, basically. So we fix this alpha so that-- so I want alpha psi of theta less than alpha, given in advance-- think of it as being equal to, say, 5%. So I'm fixing this number, and I want this to be controlled for all theta in theta 0. So if you're going to give me this budget, well, I'm actually going to make it equal where I can. If you're telling me I can make it equal to alpha-- we know that if I increase my type I error, I'm going to decrease my type II error. If I start putting everyone in jail or if I start letting everyone go free-- that's what we were discussing last time. So since we have this trade-off and you're giving me a budget for one guy, I'm just going to max it out. And where am I going to max it out? Exactly at 1/2, at the boundary. So this is going to be 5%. So what I know is that, since alpha psi of theta is less than alpha for all theta in theta 0-- sorry, that's where alpha psi is defined-- I know that my function is going to look like this. It's going to be somewhere in this rectangle. Everybody agrees? So this function, for this guy, is going to look like this.
When I'm at 0, when p is equal to 0, which means I only observe 0's, then I know that xn bar is going to be 0, and I will certainly not conclude that p is larger than 1/2. This test will never conclude that p is larger than 1/2, just because xn bar is going to be equal to 0. Well, the statistic is actually not well-defined there, so maybe I need to do something-- put it equal to 0 if xn bar is equal to 0. So I guess, basically, I get something which is negative, and so it's never going to be larger than what I want. And so here, I'm actually starting at 0. So now, this is this function here that increases-- I mean, it should increase smoothly. This function here is alpha psi of theta-- or alpha psi of p, let's say, because we're talking about p. Then it reaches alpha here. Now, when I go on the other side, I'm actually looking at beta. When I'm on theta 1, the function that matters is the probability of type II error, which is beta psi. So beta psi is what? Well, it's the probability of not rejecting. So what I'm going to do is look at the probability of rejecting, and let me draw this function all the way. It's going to look like this. Now here, if I look at this function here or here, this is the probability under theta that psi is equal to 1. And we just said that, in this region, this function is called alpha psi. In that region, it's not called alpha psi. It's not called anything. It's just the probability of rejection. So it's not any error; it's actually what you should be doing. What we're looking at in this region is 1 minus this guy. We're looking at the probability of not rejecting. So I need to, basically, look at 1 minus this thing, which here is going to be 95%. So I'm going to do 95%. And this is my probability. And I'm just basically drawing the mirror image of this guy. So this here is the probability under theta that psi is equal to 0, which is 1 minus p theta that psi is equal to 1. So it's just 1 minus the white curve. And it's, by definition, equal to beta psi of theta. Now, where do I read pi psi? What is pi psi on this picture? Is pi psi a number or a function? AUDIENCE: Number. PHILIPPE RIGOLLET: It's a number, right? It's the minimum of a function. What is this function? It's the probability under theta that psi is equal to 1. I drew this entire function, between theta 0 and theta 1-- this is this entire white curve. This is this probability. Now I'm saying, look at the smallest value this probability can take on the set theta 1. What is this? This guy. This thing here is pi psi, and so it's equal to 5%. So that's for this particular test, because this test has a continuous curve for this psi. And so if I want to make sure that I'm at 5% when I come to the right edge of theta 0, where it touches theta 1, then I'd better have 5% on the other side if the function is continuous. So basically, if this function is increasing, which will be the case for most tests, and continuous, then what's going to happen is that the level of the test, which is alpha, is actually going to be equal to the power of the test. Now, there's something I didn't mention, and I'm just mentioning it in passing. Here, I defined the power itself. This function, this entire white curve here, is actually called the power function-- this thing. That's the entire white curve.
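That white curve is easy to trace numerically. Here is a sketch (my own simulation; n, the Monte Carlo sizes, and the grid of p's are arbitrary choices of mine) using the one-sided test with c = 1.645, the upper-5% quantile of the standard Gaussian:

```python
import numpy as np

rng = np.random.default_rng(3)
n, c, reps = 100, 1.645, 20_000   # c = standard Gaussian upper-5% quantile

for p in np.linspace(0.3, 0.7, 9):
    x = rng.binomial(1, p, size=(reps, n))
    xbar = x.mean(axis=1)
    denom = np.sqrt(np.maximum(xbar * (1 - xbar), 1e-12))  # guard: xbar in {0, 1}
    reject = np.sqrt(n) * (xbar - 0.5) / denom > c
    print(round(p, 2), reject.mean())  # ~0.05 at p = 0.5, rising toward 1 beyond
```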
And what you could have is a test whose entire curve is dominated by that of another test. So here, if I look at this test-- and let's assume I can build another test that has this curve. Let's say it's the same here, but then here, it looks like this. What is the power of this test? AUDIENCE: It's the same. PHILIPPE RIGOLLET: It's the same. It's 5%, because this point touches here exactly at the same point. However, for any other value than the worst possible one, this guy is doing better than this guy. Can you see that? Having a curve higher on the right-hand side is a good thing, because it means that you tend to reject more when you're actually in h1. So this guy is definitely better than this guy. And so what we say, in this case, is that the test with the dashed line is uniformly more powerful than the other test. But we're not going to go into those details because, basically, all the tests that we will describe are already the most powerful ones. In particular, for this guy, there's no such thing. All the other guys you can come up with are going to actually be below. So we saw a couple of tests, then we saw how to pick this threshold, and we defined those two things. AUDIENCE: Question. PHILIPPE RIGOLLET: Yes? AUDIENCE: But in that case, the dashed line, if it were also higher in the region of theta 0, would you still consider it better? PHILIPPE RIGOLLET: Yeah. AUDIENCE: OK. PHILIPPE RIGOLLET: Because you're given this budget of 5%. So in this paradigm where you're given the-- actually, if the dashed line was this dashed line, I would still be happy. I mean, I don't care what this thing does here, as long as it's below 5%. But here, I'm going to try to discover. Think about, again, the drug discovery example. You're trying to find-- let's say you're a scientist and you're trying to prove that your drug works. What do you want to see? Well, the FDA puts on you this constraint that your probability of type I error should never exceed 5%. You're going to work under this assumption. But what you're going to do is, you're going to try to find a test that will make you find something as often as possible. And so you're going to max out this constraint of 5%. And then you're going to try to make this curve as high as possible. I mean, this number here, for any point here, is the probability that you publish your paper. That's the probability that you can release your drug to market. That's the probability that it works. And so you want this curve to be as high as possible. You want to make sure that if there's evidence in the data that h1 is the truth, you squeeze as much of this evidence as possible. And the test that has the highest possible curve is the most powerful one. Now, you have to also understand that having two curves that are on top of each other completely, everywhere, is a rare phenomenon. It's not always the case that there is a test that's uniformly more powerful than any other test. It might be that you have some trade-off: it might be better here, but then you're losing power here. I mean, things like this. Well, actually, maybe it should not go down. But let's say it goes like this, and then, maybe, this guy goes like this. Then you have to, basically, make an educated guess whether you think that the theta you're going to find is here or here, and then you pick your test. Any other question? Yes? AUDIENCE: Can you explain the green curve again? That's just the type II error? PHILIPPE RIGOLLET: So the green curve is-- exactly.
So that's beta psi of theta. So it's really the type II error. And it's defined only here. So here, it's not a definition; it's really just me mapping it to this point. So it's defined only here, and it's the probability of type II error. So here, it's pretty large. I'm making it, basically, as large as I could, because I'm at the boundary. And at the boundary, since the status quo is h0, I'm always going to go for h0 if I don't have any evidence, which means that it's the type II error that's going to pay the price. Any other question? So let's move on. So did we do this? No, I think we stopped here, right? I didn't cover that part. So as I said, in this paradigm, we're going to actually fix this guy to be something. And this thing is actually called the level of the test. I'm sorry, this is, again, more words. Actually, the good news is that we split it into two lectures. So we have: what is a test? What is a hypothesis? What is the null? What is the alternative? What is the type I error? What is the type II error? And now, I'm telling you there's another thing. So we defined the power, which is some sort of a lower bound-- or it's 1 minus the upper bound on the type II error, basically. So the power is the smallest probability of rejecting when you're in the alternative, when you're in theta 1. So that's my power. I looked here, and I looked at the smallest value. And I can look at this side and say, well, what is the largest probability that I make a type I error? Again, this largest probability is the level of the test. So this is alpha equal, by definition, to the maximum for theta in theta 0 of alpha psi of theta. So here, I just put the level itself. As you can see here, it essentially says that if I'm of level 5%, I'm also of level 10%, I'm also of level 15%. So here, it's really an upper bound. Whatever you guys want to take, this is what it is. But as we said, if this number is 4.5%, you're losing in your type II error. So if this maximum here is 4.5% and the FDA told you you can go to 5%, you're losing in your type II error. So you actually want to make sure that this is the 5% that's given to you. So the way it works is that you give me the alpha, then I'm going to go back and pick c, which depends on alpha here, so that this thing is actually equal to 5%. And of course, in many instances, we do not know how to compute the probability of type I error. This is a maximum value for the probability of type I error, and we don't know how to compute it. I mean, it might be a very complicated random variable. Maybe it's a weird binomial. We could compute it, but it would be painful. But what we know how to compute is its asymptotic value. Just because of the central limit theorem, convergence in distribution tells me that the probability of type I error is basically going towards the probability that some Gaussian is in some region. And so we're going to compute, not the level itself, but the asymptotic level. And that's basically the limit as n goes to infinity of alpha psi of theta. And then I'm going to take the max here. So how am I going to compute this? Well, I take a test that has a rejection region of the form: tn-- because it depends on the data, that's tn of x1 through xn-- larger than some number c.
Of course, I can almost always write tests like that, except that sometimes, it's going to be an absolute value, which essentially means I'm going away from some value. Maybe, actually, I'm less than something, but I can always put a negative sign in front of everything. So this is without much loss of generality. So this includes something that looks like the absolute value of some qn being larger than a constant-- let me write that as qn, and then the absolute value of qn is my tn-- because the absolute value of qn being larger than c encompasses the fact that qn is larger than c or qn is less than minus c. So that includes this guy. That also includes qn less than c, because this is equivalent to minus qn being larger than minus c. And minus qn-- that's going to be my tn. So I can actually encode several types of rejection regions. So here, in this case, I have a rejection region that looks like this, or a rejection region that looks like this, or a rejection region that looks like this. And here, I don't really represent it for the whole data, but maybe for the average, for example, or the normalized average. So if I write this, then-- yeah. And in this case, this tn that shows up is called a test statistic. I mean, this is not set in stone. Here, for example, q could be the test statistic. It doesn't have to be minus q itself that's the test statistic. So what is the test statistic? Well, it's what you're going to build from your data and then compare to some fixed value. So in the example we had here, what is our test statistic? Well, it's this guy. This was our test statistic. And is this thing a statistic? What are the criteria for a statistic? What is a statistic? I know you know the answer. AUDIENCE: Measurable function. PHILIPPE RIGOLLET: Yeah, it's a measurable function of the data that does not depend on the parameter. Is this guy a statistic? AUDIENCE: It's not. PHILIPPE RIGOLLET: Let's think again. When I implemented the test, what did I do? I was able to compute my test. My test did not depend on some unknown parameter. How did we do it? We just plugged in 0.5 here, remember? That was the value for which we computed it, because under h0, that was the value we were seeing. And if theta 0 is actually an entire set, I'm just going to take the value that's the closest to h1. We'll see that in a second. I mean, I did not guarantee that to you. But taking the worst type I error and bounding it by alpha is equivalent to taking the value of p that's the closest to theta 1, which is completely intuitive. The worst type I error is going to be attained for the p that's the closest to the alternative. So even if the null is actually an entire set, it's as if it was just the point that's the closest to the alternative. So now we can compute this, because there are no unknown parameters that show up. We replace p by 0.5. And so that was our test statistic. So when you're building a test, you want to first build a test statistic, and then see what threshold you should be getting. So now, let's go back to our example where we have x1 through xn, which are IID Bernoulli p. And I want to test if p is 1/2 versus p not equal to 1/2, which, as I said, is what you want to do if you want to test if a coin is fair. And so here, I'm going to build a test statistic. And we concluded last time that-- what do we want for this statistic?
We want it to have a distribution which, under the null, does not depend on the parameters-- a distribution that I can actually compute quantiles of. So what we did is, we said, well, the central limit theorem tells me that square root of n, times xn bar minus p, divided by square root of xn bar 1 minus xn bar-- that's if I do central limit theorem plus Slutsky, for example. And we've had this discussion whether we want to use Slutsky or not here. But let's assume we're taking Slutsky wherever we can. So this thing, by the central limit theorem, as n goes to infinity, converges in distribution to some n01. Now, as we said, this guy is not something we know. But under the null, we actually know it. And we can actually replace it by 1/2. So this thing holds under h0. When I write under h0, it means when this is the truth. So now I have something that converges to something that has no dependence on anything I don't know. And in particular, if you have any statistics textbook-- which you don't, because I didn't require one, and you should be thankful, because these things cost $350-- if you look at the back, you actually have a table for a standard Gaussian. I could have anything else here. I could have an exponential distribution. I could have a-- I don't know-- well, we'll see the chi squared distribution in a minute. Any distribution for which you can actually find a table that somebody actually computed, for which you can actually draw the pdf and start computing whatever probabilities you want-- this is what you want to see on the right-hand side. This can be any distribution. It's called pivotal. I think we've mentioned that before. Pivotal means it does not depend on anything that you don't know. And maybe it's easy to compute those things; typically, though, you need a computer to simulate them for you, because computing probabilities for Gaussians is not an easy thing. We don't know how to solve those integrals exactly; we have to do it numerically. So now I want to do this test. My test statistic will be declared to be what? Well, I'm going to reject if what is larger than some number? The absolute value of this guy. So my test statistic is going to be square root of n, times xn bar minus 0.5, divided by square root of xn bar 1 minus xn bar. That's my test statistic-- the absolute value of this guy, because I want to reject either when this guy is too large or when this guy is too small. I don't know ahead of time whether I'm going to see p larger than 1/2 or less than 1/2. So now I need to compute c such that the probability that tn is larger than c-- so that's the probability under p, which is unknown-- is less than some level alpha, asymptotically. So I want the limit of this guy to be less than alpha, and that's the level of my test. So that's the given level. So I want this thing to happen. Now, what I know is that this limit-- actually, I should say given asymptotic level. So what is this thing? Well, that's the probability, under p, that tn-- which is square root of n, times xn bar minus 0.5, divided by square root of xn bar 1 minus xn bar, in absolute value-- exceeds c. Is it true that, as n goes to infinity, this probability is the same as the probability that the absolute value of a standard Gaussian exceeds c? Is this true? AUDIENCE: The absolute value of the standard Gaussian. PHILIPPE RIGOLLET: Yeah, the absolute value.
So you're saying that, as n becomes large enough, this should be the probability that the absolute value of some n01 exceeds c, right? AUDIENCE: Yes. PHILIPPE RIGOLLET: So I claim that this is not correct. Somebody tell me why. AUDIENCE: Even in the limit it's not correct? PHILIPPE RIGOLLET: Even in the limit, it's not correct. AUDIENCE: OK. PHILIPPE RIGOLLET: So what do you see? AUDIENCE: It's because, at the beginning, we picked the worst possible true parameter, 0.5. So we don't actually know that this 0.5 is the mean. PHILIPPE RIGOLLET: Exactly. So we picked this 0.5 here, but this is for any p. But what is the only p I can get? So what I want is that this is true for all p in theta 0. But the only p that's in theta 0 is actually p equal to 0.5. So yes, what you said was true, but it required specifying p to be equal to 0.5. So this, in general, is not true. But it happens to be true if p belongs to theta 0, which is strictly equivalent to p equal to 0.5, because theta 0 is really just this one point, 0.5. So now, this becomes true. And so what I need to do is to find c such that this guy is equal to what? I mean, let's just follow. So I want this to be less than alpha. But then we said that this was equal to this, which is equal to this. So all I want is that this guy is less than alpha. But we said we might as well just make it equal to alpha, if you allow me to make it as big as I want, as long as it's less than alpha. AUDIENCE: So this is a true statement. PHILIPPE RIGOLLET: So this is a true statement. But it's under this condition. AUDIENCE: Exactly. PHILIPPE RIGOLLET: So I'm going to set it equal to alpha, and then I'm going to try to solve for c. So what I'm looking for is a c such that, if I draw a standard Gaussian-- so that's the pdf of some n01-- the probability of the absolute value of my Gaussian exceeding this guy-- so that means being either here or here; that's minus c and c-- I want the sum of those two things to be equal to alpha. So I want the sum of these areas to equal alpha. So by symmetry, each of them should be equal to alpha over 2. And so what I'm looking for is c such that the probability that my n01 exceeds c, which is just this area to the right, now equals alpha over 2. That is equivalent to taking c equal to q alpha over 2, by definition of q alpha over 2. That's just what q alpha over 2 is. And that's what the tables at the back of the book give you. Who has already seen a table for Gaussian probabilities? What it does-- it's just a table. I mean, it's pretty ancient. I mean, of course, you can actually ask Google to do it for you now. I mean, it's basically standard issue. But back in the day, they actually had to look at tables. And since the values of alpha that people were requesting were pretty standard-- typically 1%, 5%, 10%-- all you could do was compute these values for the different values of alpha. That was it. So there's really not much to give you. So for the Gaussian, I can tell you that, if alpha is equal to 5%, then q alpha over 2-- q 2.5%-- is equal to 1.96, for example. So those are just fixed numbers that are functions of the Gaussian. So everybody agrees? We've done that before for our confidence intervals. And so now we know that if I actually plug in this guy to be q alpha over 2, then this limit is actually equal to alpha. And so now I've actually constrained this. So q alpha over 2 here, for alpha equals 5%, as I said, is 1.96.
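As a sketch of the whole recipe in code (assuming numpy and scipy; the degenerate case xn bar equal to 0 or 1 mentioned earlier is ignored here), you build the statistic, fetch q alpha over 2 from the Gaussian quantile function instead of a printed table, and reject when the statistic exceeds it:

    import numpy as np
    from scipy.stats import norm

    def fair_coin_test(x, alpha=0.05):
        # Tn = sqrt(n) * |Xbar - 0.5| / sqrt(Xbar * (1 - Xbar)), from CLT + Slutsky
        n, xbar = len(x), np.mean(x)
        tn = np.sqrt(n) * abs(xbar - 0.5) / np.sqrt(xbar * (1 - xbar))
        q = norm.ppf(1 - alpha / 2)   # q_{alpha/2}; norm.ppf(0.975) = 1.9599...
        return tn, q, tn > q          # True means reject H0: p = 1/2

For alpha equal to 5%, norm.ppf(0.975) returns 1.9599..., the 1.96 quoted above.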
So in example 1, the number that we found was 3.54, I think, or something like that-- 3.55 for t. So if we scroll back very quickly: 3.45-- that was example 1. Example 2: negative 0.77. So if I look at tn in example 1, tn was just the absolute value of 3.45, which-- don't pull out your calculators-- is equal to 3.45. Example 2: the absolute value of negative 0.77 is equal to 0.77. And so all I need to check is: is this number larger or smaller than 1.96? That's what my test ends up being. So in example 1, 3.45 being larger than 1.96 means that I reject fairness of my coin. In example 2, 0.77 being smaller than 1.96-- what do I do? I fail to reject. So here is a question. In example 1, for what level alpha would psi alpha-- OK, so here, what's going to happen if I start decreasing my level? When I decrease my level, I'm actually making this area smaller and smaller, which means that I push this c to the right. So now I'm asking, what is the smallest c I should pick so that I actually do not reject h0? What is the smallest c I should be taking here? What is the smallest c? So c here, in the example I gave you for 5%, was 1.96. What is the smallest c I should be taking so that this inequality is reversed? 3.45. I ask only trivial questions; don't be worried. So 3.45 is the smallest c that I'm actually willing to tolerate. So let's say this was my 5%. If here, let's say, in this picture, alpha is 5%, that means maybe I need to push here. And this number should be what? So this is going to be 1.96. And this number here is going to be 3.45-- clearly to scale. And so now, what I want to ask you is-- well, there are two ways I can understand this number 3.45. It is the number 3.45, but I can also try to understand what the area to the right of this guy is. And if I understand what the area to the right of this guy is, this is actually some alpha prime over 2. And that means that if I actually fix this level alpha prime, that would be exactly the tipping point at which I would go from accepting to rejecting. So I know, in terms of absolute thresholds, 3.45 is the trivial answer to the question. That's the tipping point, because I'm comparing a number to 3.45. But now, if I try to map this back and understand what level would have given me this particular tipping point, that's a number between 0 and 1. The smaller this number, the larger this threshold here, which means the more evidence I have in my data against h0. And so this number is actually something called the p-value. And the same for example 2: there's a tipping point alpha at which I go from failing to reject to rejecting. And that's exactly the number-- the area under the curve-- such that here, I see 0.77. And this is this alpha prime prime over 2. Alpha prime prime is clearly larger than 5%. So what's the advantage of mapping back these numbers? Well, now I'm actually going to spit out some number which is between 0 and 1. And that should be the only scale you have in mind. Remember, we discussed that last time. I was like, well, if I actually spit out a number which is 3.45, maybe you can try to think: is 3.45 a large number for a Gaussian? That's a number. But if I had another random variable that was not Gaussian-- maybe it was a double exponential-- you would have to have another scale in your mind. Is 3.45 so large that it's unlikely for it to come from a double exponential?
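Before moving on: this tipping point can be computed directly rather than read off a picture. A small sketch (assuming scipy), applied to the two statistics just quoted:

    from scipy.stats import norm

    def p_value(tn):
        # smallest (asymptotic) level at which |Tn| > q_{alpha/2} rejects:
        # P(|N(0,1)| > |tn|) = 2 * (1 - Phi(|tn|))
        return 2 * (1 - norm.cdf(abs(tn)))

    print(p_value(3.45))   # about 0.00056: reject at any usual level
    print(p_value(0.77))   # about 0.44: fail to reject at 5%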
If I had a gamma distribution-- I can think of any distribution-- then, for each distribution, you would have to have a scale in mind. So of course, you can have the Gaussian scale in mind. I mean, I have the Gaussian scale in mind. But if I map it back into this number between 0 and 1, all the distributions play the same role. So whether my limiting distribution is normal or exponential or gamma, or whatever you want, for all these guys I'm just going to map it into one number between 0 and 1. A small number means lots of evidence against h0. A large number means very little evidence against h0. And this is the only number you need to keep in mind. And the question is: am I willing to tolerate this number at 5%, 6%, or maybe 10%, 12%? And this is the only scale you have to have in mind. And this scale is the scale of p-values. So the p-value is the tipping point in terms of alpha. In words-- I can make it formal, because tipping point, as far as I know, is not a mathematical term-- the p-value of a test is the smallest, potentially asymptotic, level-- if I talk about an asymptotic p-value, and that's what we do when we talk about the central limit theorem-- at which the test rejects h0. If I were to go any smaller-- sorry, it's the smallest level-- yeah, if I were to go any smaller, I would fail to reject. The smaller the level, the less likely it is for me to reject. And if I were to go any smaller, I would start failing to reject. And so it is a random number. It depends on what I actually observe. So here, of course, I instantiated those two numbers, 3.45 and 0.77, as realizations of random variables. But if you think of those as being random numbers before I see my data, this was a random number, and therefore the area under the curve to the right of it is also a random area. If this thing fluctuates, then the area under the curve fluctuates. And that's what the p-value is. That's what-- what is his name? I forget-- John Oliver talks about when he talks about p-hacking. And we talked about this in the first lecture. So p-hacking is-- well, if I'm a scientist, do I want to see a small p-value or a large p-value? AUDIENCE: Small. PHILIPPE RIGOLLET: Small, right? Scientists want to see small p-values, because small p-values equal rejecting, which equals discovery, which equals publications, which equals promotion. So that's what people want to see. So people are tempted to see small p-values. And what's called p-hacking is, well, finding a way to cheat. Maybe look at your data, and formulate your hypothesis in such a way that you will actually have a smaller p-value than you should have. So here, for example, there's one thing I did not insist on-- because, again, this is not a course on statistical thinking in particular-- but one thing that we implicitly did was set those theta 0 and theta 1 ahead of time. I fixed them, and I'm trying to test this. This is to be contrasted with the following approach. I draw my data. So I run this experiment, which is probably going to get me a publication in Nature. I'm trying to test if a coin is fair. And I draw my data, and I see that 13 out of 30 of my observations are heads. That means that, from this data, it looks like p is less than 1/2. So if I look at this data and then decide that my alternative is not p not equal to 1/2, but rather p less than 1/2, that's p-hacking.
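How much does this buy you? Here is a quick simulation sketch (my own, assuming numpy and scipy): under the null, the honest two-sided 5% test rejects about 5% of the time, while the strategy of picking the side the data leans toward and then running a one-sided 5% test rejects about twice as often.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n, alpha, reps = 100, 0.05, 20000
    honest = hacked = 0
    for _ in range(reps):
        x = rng.binomial(1, 0.5, size=n)           # data generated under H0: p = 1/2
        tn = np.sqrt(n) * (x.mean() - 0.5) / 0.5   # using sigma = 1/2 under the null
        honest += abs(tn) > norm.ppf(1 - alpha / 2)    # alternative fixed in advance
        hacked += abs(tn) > norm.ppf(1 - alpha)        # side picked after seeing data
    print(honest / reps, hacked / reps)            # about 0.05 versus about 0.10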
I'm actually making my p-value strictly smaller by first looking at the data, and then deciding what my alternative is going to be. And that's cheating, because in all the things we did, we were assuming that this 0.5, or the alternative, was fixed-- everything was deterministic. The only randomness came from the data. But if I start looking at the data and designing my experiment or my alternative and null hypotheses based on the data, it's as if I started putting randomness all over the place. And then I cannot control it, because I don't know how it all intermingles. So that was the John Oliver moment. So the p-value is nice. So maybe I mentioned before that my wife works in market research. And maybe every two years, she seems to run into a statistician in the hallway, and she comes home and says, what is a p-value again? And for her, a p-value is just a number in an Excel spreadsheet. And actually, small equals good and large equals bad. And that's all she needs to know at this point. Actually, they do the job for her-- small is green, large is red. And so for her, a p-value is just green or red. But what she's really implicitly doing with this color code is just applying the golden rule. What the statisticians do for her in the Excel spreadsheet is take the p-values and compare them to some fixed level. So depending on the field in which she works-- she works for pharmaceutical companies-- the tests are usually performed at level 1%, rather than 5%. So 5% is maybe your gold standard if you're doing sociology or trying to-- I don't know-- release a new blueberry flavor for your toothpaste. Something that's not going to change people's lives, maybe you're going to run at 5%. It's OK to make a mistake. People are just going to feel gross, but that's about it. Whereas here, if you have this p-value which is less than 1%, it might be more important for some drug discovery, for example. And so let's say you run at 1%. And so what they do in this Excel spreadsheet is that all the numbers that are below 1% show up in green, and all the numbers that are above 1% show up in red. And that's it. That's just applying the golden rule. If the number is green, reject. If the number is red, fail to reject. Yeah? AUDIENCE: So going back to the prior example, where you want to cheat by looking at the data and then formulating, say, theta 1 to be p less than 1/2. PHILIPPE RIGOLLET: Yeah. AUDIENCE: So how would you achieve your goal by changing the theta-- PHILIPPE RIGOLLET: By achieving my goal, you mean leaving ethics aside, right? AUDIENCE: Yeah, yeah. PHILIPPE RIGOLLET: Ah, you want to be published. AUDIENCE: Yeah. PHILIPPE RIGOLLET: [LAUGHS] So let me teach you how, then. So well, here, what do you do? At the end of the day, a test is only telling you whether you found evidence in your data that h1 was more likely than h0, basically. How do you make h1 more likely? Well, you basically target h1 to be what the data is going to make it more likely to be. So if, for example, I say h1 can be on both sides, then my data is going to have to take into account fluctuations on both sides, and I'm going to lose a factor of two somewhere, because things are not symmetric. Here is the ultimate way of making this work. I'm going back to my example of flipping coins.
And now, so here, what I did is, I said, oh, this number 0.43 is actually smaller than 0.5, so I'm just going to test whether I'm at 0.5 or less than 0.5. But here is something that-- I promise you, and I did not even need to make the computation-- will reject. So here, this one actually-- yeah, this one fails to reject. So here is one that will certainly reject: h0, p is 0.5; h1, p is 0.43. Now, you can try, but I can promise you that your data will tell you that h1 is the right one. I mean, you can check very quickly that this is really extremely likely to happen. Actually, what am I-- no, actually, that's not quite true, because the test that I derived, that's based on this kind of stuff-- at some point, somewhere under some layers, I assumed that all our tests are going to have this form. But this is only when you're trying to test one region versus another region next to it, or one point versus a region around it, or something like this. Whereas for this guy, there's another test that I could come up with, which is: what is the probability that I get 0.43, and what is the probability that I get 0.5? Now, what I'm going to do is just conclude whichever has the largest probability. Then maybe I'm going to have to make some adjustments so that the level is actually 5%. But I can make this happen. I can make the level be 5% and always conclude this guy, but I would have to use a different test. Now, the tests that I described-- again, those tn larger than c-- are built to be tests that are resilient to these kinds of manipulations, because they're oblivious to what the alternative looks like. I mean, they're just saying it's either to the left or to the right, but whether it's a point or an entire half-line doesn't matter. So if you try to look at your data and just put the data itself into your hypothesis testing problem, then you're violating the statistical principles. And that's what people are doing. I mean, how can I check? I mean, of course, here, it's going to be pretty blatant if you publish a paper that looks like this. But there are ways to do it differently. For example, what people typically do is multiple hypothesis testing. They're doing 100 tests at a time. Then you have random fluctuations every time. And so they just pick the one that has the random fluctuations that go their way. I mean, sometimes it's going your way, and sometimes it's going the opposite way, so you just pick the one that works for you. We'll talk about multiple hypothesis testing soon if you want to increase your publication count. There are actually journals-- I think it was big news that some journals, I think in psychology or psychometrics, actually refuse to publish p-values now. Where were we? Here's the golden rule. So one thing that I like to show is this thing, just so you know how you apply the golden rule and how you apply the standard tests. So the standard paradigm is the following. You have a black box, which is your test. For my wife, this is the 4th floor of the building. That's where the statisticians sit. What she sends there is data-- let's say x1 through xn. And she says, well, this one is about toothpaste, so here's a level-- let's say 5%. What the 4th floor sends back is an answer-- yes, no, green, red, just an answer. So that's standard testing. You just feed it the data and the level at which you want to perform the test, maybe asymptotic, and it spits out a yes, no answer.
What the p-value approach does is, you just feed it the data itself. And what it spits out is the p-value. And now it's just up to you. I mean, hopefully your brain has the computational power of deciding whether a number is larger or smaller than 5% without having to call a statistician for this. And that's what it does. So now we're on one scale. Now, I see some of you nodding when I talk about p-hacking, so that means you've seen p-values. If you've seen more than 100 p-values in your life, you have an entire scale. A good p-value is less than 10 to the minus 4. That's the ultimate sweet spot. Actually, statistical software spits out an output which says less than 10 to the minus 4. But then maybe-- if you tell me your p-value was 4.65%, then I will say, you've been doing some p-hacking until you found a number that was below 5%. That's typically what people will do. But if you're doing the test-- if you're saying, I published my result, my test at 5% said yes-- that means that maybe your p-value was 4.99%, or your p-value was 10 to the minus 4. I will never know. I will never know how much evidence you had against the null. But if you tell me what the p-value is, I can make my own decision. You don't have to tell me whether it's a yes or a no. You tell me it's 4.99%, I'm going to say, well, maybe yes, but I'm going to take it with a grain of salt. And so that's why p-values are good numbers to have in mind. Now, I say this as if it were some old trick that you start mastering when you're 45 years old. No, it's just: how small is this number between 0 and 1? That's really what you need to know. Maybe on the log scale-- is it 10 to the minus 1, 10 to the minus 2, 10 to the minus 3, et cetera-- that's probably the extent of the mastery here. So this traditional standard paradigm that I showed is actually commonly referred to as the Neyman-Pearson paradigm. So here, it says Neyman-Pearson's theory, so there's an entire theory that comes with it. But it's really a paradigm. It's a way of thinking about hypothesis testing that says, well, if I'm not going to be able to optimize both my type I and type II errors, I'm going to lock in my type I error below some level and just minimize the type II error under this constraint. That's what the Neyman-Pearson paradigm is. And it sort of makes sense for hypothesis testing problems. Now, if you were doing some other applications with multi-objective optimization, you would maybe come up with something different. For example, machine learning is typically not performed under the Neyman-Pearson paradigm. So if you do spam filtering, you could say, well, I want to constrain, as much as I can, the probability of taking somebody's important emails and throwing them out as spam, and, under this constraint, not send too much spam to that person. That sort of makes sense for spam. Now, if you're labeling cats versus dogs, it's probably not like you want to make sure that no more than 5% of the dogs are labeled cat, because, I mean, it doesn't matter. So what you typically do is, you just sum up the two types of errors you can make, and you minimize the sum without putting any more weight on one or the other. So here's an example where you're making a binary decision, with two types of errors you can make, and you don't actually have to treat them asymmetrically. So for this example here-- the trivial test psi equal to 0-- what was it in the US trial court example? What is psi equals 0? That was concluding always to the null. What was the null?
AUDIENCE: Innocent. PHILIPPE RIGOLLET: Innocent, right? That's the status quo. So that means that this guy never rejects h0. Everybody's going away free. So you're sure you're not actually going against the constitution, because alpha is 0%, which is certainly less than 5%. But the fact that a lot of criminals go back outside into the free world is actually formulated in terms of low power, which, in this case, is actually 0. Again, the power is a number between 0 and 1. Close to 1, good. Close to 0, bad. Now, what is the definition of the p-value? That's going to be something-- it's a mouthful. The definition of the p-value is a mouthful. It's the tipping point. It is the smallest level at which blah, blah, blah, blah, blah. It's complicated to remember. Now, I think at my 6th explanation, my wife said, oh, so it's the probability of making an error, and I said, yeah, that's the probability of making an error-- because, of course, she can think: probability of making an error, small good, large bad. So that's actually a good way to remember it. I'm pretty sure that at least 50% of the people who are using p-values out there think that the p-value is the probability of making an error. Now, for all intents and purposes, if your goal is just to threshold the p-value, this is OK to have in mind. But at least until December 22, I would recommend trying to actually memorize the right definition of the p-value. So the idea, again, is: fix the level and try to optimize the power. So we're going to try to compute some p-values from now on. How do you compute the p-value? Well, you can actually see it from this picture over there. One thing I didn't show on this picture-- so here, it was my q alpha over 2 that had alpha over 2 here, alpha over 2 here. That was my q alpha over 2. And I said, if tn is to the right of this guy, I'm going to reject. If tn is to the left of this guy, I'm going to fail to reject. Pictorially, you can actually represent the p-value. It's when I replace this guy by tn itself. Sorry, that's the p-value over 2. No, actually, that's the p-value. So let me just keep it like that and put the absolute value here. So if you replace the role of q alpha over 2 by your test statistic, the area under the curve is actually the p-value itself, up to a scale, because of the symmetric thing. So that's a good way to see, pictorially, what the p-value is. It's just the probability that the absolute value of some n01 exceeds tn. That's what the p-value is. Now, this guy has nothing to do with this guy, so this is really just 2 times 1 minus the standard Gaussian cdf at tn, and that's it. So that's how I would compute p-values. Now, as I said, the p-value is a beauty, because you don't have to understand the fact that your limiting distribution is a Gaussian. It's already factored into this construction. The fact that I'm actually looking at this cumulative distribution function of a standard Gaussian makes my p-value automatically adjust to what the limiting distribution is. And if this was the cumulative distribution function of an exponential, I would just have a different function here, denoted by f, for example, and I would just compute a different value. But in the end, regardless of what the limiting distribution is, my p-value would still be a number between 0 and 1. And so to illustrate that, let's look at other weird distributions that we could get in place of the standard Gaussian. And we're not going to see many, but we'll see one.
And it's not called the chi squared distribution. It's actually called the Student's distribution, but it involves the chi squared distribution as a building block. So I don't know if my phonetics are really right there; I try to say, well, it's chi squared. Maybe it's "kee" squared up in Canada, who knows. So for a positive integer-- so there's only 1 parameter. For the Gaussian, you have 2 parameters, which are mu and sigma squared. Those are real numbers, sigma squared positive. Here, I have 1 integer parameter. Then the chi squared distribution with d degrees of freedom-- so the parameter is called the degrees of freedom, just like mu is called the expected value and sigma squared is called the variance. Here, we call it degrees of freedom. You don't have to really understand why. So that's the law that you would get-- that's the random variable you would get-- if you were to sum d squares of independent standard Gaussians. So I take the square of a standard Gaussian. I take another, independent one, square it, sum them, and that's a chi squared with 2 degrees of freedom. That's how you get it. Now, I could define it using its probability density function. I mean, after all, this is a sum of positive random variables, so it is a positive random variable. It has a density on the positive real line. And the pdf of a chi squared with d degrees of freedom is: f d of x equals x to the d/2 minus 1, times e to the minus x/2, divided by 2 to the d/2 times gamma of d/2, for x positive. That's the density. If you are very good at probability, you can make the change of variables, write your Jacobian, do all this stuff, and actually check that this is true. I do not recommend doing that. So this is the density, but it's better understood the other way: it's just something that you build from standard Gaussians. So for example, here is an example of a chi squared with 2 degrees of freedom. Let's assume I have a target like this. And I don't aim very well, and I'm trying to hit the center. And I'm going to have, maybe, a deviation which is standard Gaussian left-right and standard Gaussian north-south. So I'm throwing, and then I land here, and I'm claiming that, by Pythagoras' theorem, the square distance here is the sum of this square distance here, which is the square of a Gaussian by assumption, plus the square of this distance, which is the square of another, independent Gaussian. I assume those are independent. And so the square distance from this point to this point is a chi squared with 2 degrees of freedom. So this guy here is n01 squared. This is n01 squared. And so this distance here is a chi squared with 2 degrees of freedom-- I mean the square distance. I'm talking about square distances here. So now you can see that, actually, Pythagoras is basically why the chi squared arises. That's why it has its own name. I mean, I could just define this random variable-- it's actually a special case of something called the gamma distribution. The fact that this special case has its own name is because there are many times when we're going to take sums of squares of independent Gaussians, because for Gaussians, the sum of squares is really the Euclidean norm squared, just by Pythagoras' theorem.
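A quick simulation sketch of this definition (assuming numpy and scipy): summing d squared independent standard Gaussians should reproduce the chi squared with d degrees of freedom-- mean d, variance 2d.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    d, m = 2, 100000
    s = (rng.standard_normal((m, d)) ** 2).sum(axis=1)  # sum of d squared N(0,1)'s
    print(s.mean(), s.var())                # about d and 2d
    print(np.median(s), chi2.median(d))     # empirical vs exact chi2(d) median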
If I'm in higher dimension, I can start to sum more squared coordinates, and I'm going to measure the norm squared. So if you want to draw this picture, it looks like this. Again, it's the sum of positive numbers, so it's going to be on 0 plus infinity. That's fd. And so f1 looks like this, f2 looks like this. So the tails become heavier and heavier as d increases. And then at d equal to 3, it starts to have a different shape. It starts from 0 and it looks like this. And then, as d increases, it's basically as if you were to push this thing to the right. It's just like, psh, so it's just falling like a big blob. Everybody sees what's going on? So there's just this fat thing that's just going there. What is the expected value of a chi squared? So it's the expected value of the sum of squared Gaussian random variables. I know I said that. AUDIENCE: So it's the sum of their second moments, right? PHILIPPE RIGOLLET: Which is? Those are n01. AUDIENCE: It's like-- oh, I see, 1. PHILIPPE RIGOLLET: Yeah. AUDIENCE: So n times 1 or d times 1. PHILIPPE RIGOLLET: Yeah, which is d. So one thing you can check quickly is that the expected value of a chi squared is d. And so you see, that's why the mass is shifting to the right as d increases. It's just going there. Actually, the variance is also increasing. The variance is 2d. So this is one thing. And so why do we care about this? In basic statistics, it's not like we actually do much statistics about throwing darts at high-dimensional boards. So what's happening is that if I look at the sample variance-- the average of the squared observations centered by their mean-- then I can actually expand this as the average of the squares minus the average squared. It's just the same trick that we have for the variance: second moment minus first moment squared. And then I claim that Cochran's theorem-- and I will tell you in a second what Cochran's theorem is-- tells me what this sample variance actually is. So if I had only this-- look at those guys. Those guys are Gaussian with mean mu and variance sigma squared. Think for 1 second of mu being 0 and sigma squared being 1. Now, this part would be a chi squared with n degrees of freedom divided by n. Now I get another thing here, which is the square of something that looks like a Gaussian as well. So it looks like I have something else here, which looks also like a chi squared. Now, Cochran's theorem is essentially telling you that those things are independent, so that, in a way, you can think of those guys as being, here, n degrees of freedom minus 1 degree of freedom. Now, here, as I said, this does not have mean 0 and variance 1. The fact that it's not mean 0 is not a problem, because I can remove the mean here and remove the mean here. And so this thing has the same distribution regardless of what the actual mean is. So without loss of generality, I can assume that mu is equal to 0. Now, the variance, I'm going to have to pay for, because if I multiply all these numbers by 10, then this Sn is going to be multiplied by 100. So this thing is going to scale with the variance. And not surprisingly, it's scaling like sigma squared, the square of the scale. So if I look at Sn, it's distributed as sigma squared times a chi squared with n minus 1 degrees of freedom, divided by n. And we don't really write that, because a chi squared times sigma squared divided by n is not a named distribution, so we put everything on the left, and we say that n times Sn divided by sigma squared is actually a chi squared with n minus 1 degrees of freedom.
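Here is a simulation sketch of that last claim (my own check, assuming numpy and scipy; the Gaussian data with mu = 5 and sigma = 3 is hypothetical): n times Sn divided by sigma squared behaves like a chi squared with n minus 1, not n, degrees of freedom.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    n, mu, sigma, reps = 10, 5.0, 3.0, 100000
    x = rng.normal(mu, sigma, size=(reps, n))
    sn = ((x - x.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)  # the 1/n version
    stat = n * sn / sigma ** 2
    print(stat.mean())                          # about n - 1 = 9, not n
    print(np.median(stat), chi2.median(n - 1))  # medians roughly agree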
So here, I'm actually dropping a fact on you, but you can see the building blocks. What is the thing that's fuzzy at this point? The rest should be crystal clear to you. The thing that's fuzzy is that removing this squared guy here is actually removing 1 degree of freedom. That should be weird, but that's what Cochran's theorem tells us. It's essentially stating something about orthogonality of subspaces with the span of the constant vector, something like that. So you don't have to think about it too much, but that's what it's telling me. But the rest-- so the scaling in sigma squared and in n-- that should be completely clear to you. So in particular, if I remove that part, it should be clear to you what this thing is distributed as if the mean is 0. Well, if mu is 0, what is the distribution of this guy? So I remove that part, just this part. So I have xi, which are N 0, sigma squared. And I'm asking, what is the distribution of 1/n times the sum from i equal 1 to n of xi squared? So the xi are IID. So it's a sum of independent Gaussians, but not standard ones. So the first thing to make them standard is that I divide all of them by sigma squared. Now, this guy is of the form zi squared, where zi is n01. So now, this thing here has what distribution? AUDIENCE: Chi squared n. PHILIPPE RIGOLLET: Chi squared n. And now, this is sigma squared over n times a chi squared n. So if I take this thing and I multiply it by n divided by sigma squared, it means I remove this term, and now I am left with a chi squared with n degrees of freedom. Now, the effect of centering with the sample mean here is only to lose 1 degree of freedom. That's it. So if I want to do a test about the variance, since this is supposedly a good estimator of the variance, this could be my pivotal distribution. This could play the role of the Gaussian. If I want to know if my variance is equal to 1 or larger than 1, I could actually build a test based on this statement only, and test if the variance is larger than 1 or not. Now, this is not asymptotic, because I started with the very assumption that my data was Gaussian itself. Now, just a side remark-- you can check that the chi squared with 2 degrees of freedom is an exponential with parameter 1/2, which is certainly not clear from the fact that z1 squared plus z2 squared is a chi squared with 2 degrees of freedom. If I give you the sum of the squares of 2 independent Gaussians, this is actually an exponential. That's not super clear, right? But if you look at what was here-- I don't know if you took notes, but let me rewrite it for you. So it was x to the d/2 minus 1, times e to the minus x/2, divided by 2 to the d/2 gamma of d/2. So if I plug in d equal to 2, gamma of 2/2 is gamma of 1, which is 1-- it's factorial of 0-- so this guy goes away. 2 to the d/2 is 2 to the 1, so that's just 2. Then x to the d/2 minus 1 is x to the 0, which goes away. And so I have 1/2 times e to the minus x/2, which is indeed of the form lambda e to the minus lambda x for lambda equal to 1/2-- which is our exponential distribution. Well, next week is, well, Columbus Day? So not next Monday-- so next week, we'll talk about Student's distribution. And that was discovered by a guy who pretended his name was Student but was not Student. And I challenge you to find out why in the meantime. So I'll see you next week. Your homework is going to be outside so we can release the room.
MIT_18650_Statistics_for_Applications_Fall_2016
11_Parametric_Hypothesis_Testing_cont_and_Testing_Goodness_of_Fit.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So we've been talking about this chi square test. And the name chi square comes from the fact that we build a test statistic that has an asymptotic distribution given by the chi square distribution. Let's just give it another shot. OK. This test. Who has actually ever encountered the chi square test outside of a stats classroom? All right. So some people have. It's a fairly common test that you might encounter. And it was essentially to test, given some data with a fixed probability mass function-- so a discrete distribution-- whether the PMF was equal to a set value, p0, or different from p0. And the way the chi square arose here was by looking at Wald's test. Wald's is the one that has the chi square as the limiting distribution. And if you invert the asymptotic covariance matrix-- so you compute the Fisher information, which in this particular case does not exist for the multinomial distribution, but we found the trick on how to do this: we remove the part that prevented it from being invertible-- then we found this chi square distribution. In a way, we have this test statistic, which you might have learned as a black box, a laundry list; but going through the math, which might have been slightly unpleasant, I acknowledge, really told you why you should do this particular normalization. So since some of you requested a few more practical examples of how those things work, let me show you a couple. The first one is you want to answer the question, well, you know, when should I be born to be successful? Some people believe in the zodiac, and so Fortune magazine actually collected the signs of 256 heads of Fortune 500 companies. Those were taken randomly. And they were collected there, and you can see the count of the number of CEOs that have a particular zodiac sign. And if this was completely uniformly distributed, you should actually get a number that's around 256 divided by 12, which in this case is 21.33. And you can see that there are numbers that are probably in the vicinity, but look at this guy. Pisces, that's 29. So who's Pisces here? All right. All right, so give me your information and we'll meet again in 10 years. And so basically you might want to test if the fact that it's uniformly distributed is a valid assumption. Now this is clearly a random variable. I pick a random CEO and I measure what his zodiac sign is. So it's a probability over, I don't know, 12 zodiac signs. And I want to know if it's uniform or not. Uniform sounds like it should be the status quo, if you're reasonable. And maybe there's actually something that moves away from it. So we could ask: in view of these data, is there evidence that the distribution is different from uniform? Here is another example where you might want to apply the chi square test. So as I said, the benchmark distribution was the uniform distribution for the zodiac sign, and that's usually the one I give you: 1 over k, ..., 1 over k, because, well, that's sort of the zero, the central point for all distributions. That's the center of what we call the simplex.
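In code, the zodiac test is one call (a sketch assuming scipy; the counts below are hypothetical placeholders-- only the Pisces count of 29 and the total of 256 are quoted above):

    import numpy as np
    from scipy.stats import chisquare

    # hypothetical counts for the 12 signs (only Pisces = 29 is from the lecture);
    # under the uniform null, each expected count is 256 / 12, about 21.33
    observed = np.array([20, 24, 22, 19, 21, 18, 23, 20, 19, 21, 20, 29])
    assert observed.sum() == 256
    stat, pval = chisquare(f_obs=observed, f_exp=np.full(12, 256 / 12))
    print(stat, pval)   # compare stat to the chi2 quantile with 12 - 1 = 11 df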
So for example, this is an actual dataset where 275 jurors were identified, their racial groups were collected, and you might want to know if juries in this country are actually representative of the actual population. And so here, of course, the population is not uniformly distributed according to racial group. And the way you actually do it is you go on Wikipedia, for example, and you look at the demographics of the United States, and you find that the proportion of white is 72%, black is 7%, Hispanic is 12%, and other is about 9%. So that's a total of 1. And this is what we actually measured for some jurors. So for this guy, you can actually run the chi square test. You have the estimated proportions, which come from this first line. You have the tested proportions, p0, which come from the second line, and you might want to check if those things actually correspond to each other. OK, so I'm not going to do it for you, but I sort of invite you to do it, and see how this compares to the quantiles of the appropriate chi square distribution, and see what you can conclude from those two things. All right. So this was the multinomial case. So this is essentially what we did. We computed the MLE under the right constraint, and that was our test statistic that converges to the chi square distribution. So if you've seen it before, that's all that was given to you. Now we know why the normalization here is p0 j and not p0 j squared or square root of p0 j, or even 1. I mean, it's not clear a priori that this should be the right normalization, but we know it's what comes from taking the right normalization, which comes from the Fisher information. All right? OK. The thing I wanted to move onto-- so we've basically covered the chi square test. Are there any questions about the chi square test? And for those of you who were not here on Thursday, I'm really just recapping-- I won't pretend I just did it. That's something we did last Thursday. But are there any questions that arose when you were reading your notes, things that you didn't understand? Yes. AUDIENCE: Is there like a formal name? Before, we had talked about how what we call the Fisher information [INAUDIBLE], still has the same [INAUDIBLE] because it's the same number. PROFESSOR: So it's not the Fisher. The Fisher information does not exist in this case. And so there's no appropriate name for this. It's the pseudoinverse of the asymptotic covariance matrix, and that's what it is. I don't know if I mentioned it last time, but there's this entire field-- you know, for people who really aspire to differential geometry but are stuck in the stats department-- there's this thing called information geometry, which is essentially studying the manifolds associated to the Fisher information metric, the metric that's associated to the Fisher information. And those, of course, can be lower dimensional manifolds, which not only distorts the geometry but forces everything to live in a lower dimension-- which is what happens when your Fisher information does not exist. And so there's a bunch of things that you can study-- what this manifold looks like, et cetera. But no, there's no particular terminology here. To be fair, within the scope of this class, the multinomial case is the only case where you typically see a lack of a Fisher information matrix. And that's just because we have this extra constraint that the sum of the parameters should be 1.
And if you have an extra constraint like that, it actually removes one degree of freedom, and this will happen inevitably. And so maybe what you can do is reparameterize. So if I actually reparameterized everything as a function of p1 to p k minus 1, and then 1 minus the sum, this would not have happened, because then I would have only a k minus 1-dimensional space. So there are tricks around this to make it exist, if you want it to exist. Any other question? All right. So let's move on to Student's t-test. We mentioned it last time. So essentially you've probably done it more in the homework than you've done it in lectures, but just quickly, this is essentially the test. That's the test when we have actual data that comes from a normal distribution; there is no Central Limit Theorem involved. This is really to account for the fact that, for smaller sample sizes, it might not be exactly true that, when I look at xn bar minus mu divided by sigma, times square root of n, this thing has an N 0, 1 distribution approximately. Right? By the Central Limit Theorem. So that's for n large. But if n is small, then it's still true that, when the data is N mu, sigma squared, then it's true that square root of n-- so here it's approximately. And this is always true. But I don't know sigma in practice, right? Maybe mu comes from my mu 0, from the test statistic, where mu actually is here. But for this guy, I'm inevitably going to have to find an estimator. And now in this case, for small n, this is no longer true. And what the t statistic is doing is essentially telling you what the distribution of this guy is. So what you should say is that now this guy has a t distribution with n minus 1 degrees of freedom. That's basically the laundry-list stats that you would learn. It just says: look at a different table, that's what it is. But we actually defined what a t distribution was. And a t distribution is basically something that has the same distribution as some N 0, 1 divided by the square root of a chi square with d degrees of freedom divided by d. And that's a t distribution with d degrees of freedom. And those two have to be independent. And so what I need to check is that this guy over there is of this form. OK? So let's look at the numerator. Well, square root of n, xn bar minus mu. What is the distribution of this thing? Is it an N 0, 1? AUDIENCE: N 0, sigma squared? PROFESSOR: N 0, sigma squared, right. So I'm not going to put it here. So if I want this guy to be N 0, 1, I need to divide by sigma; that's what we have over there. So that's my N 0, 1 that's going to play the role of this guy here. So if I want to go a little further, I need to just say, OK, now I have square root of n, and I need to find something here that looks like my square root of chi square divided by-- yeah? AUDIENCE: Really quick question. The equals sign with the d on top, that's just defined as? PROFESSOR: No, that's just equality in distribution. So, I don't know. AUDIENCE: Then never mind. PROFESSOR: Let's just write it like that, if you want. I mean, that's not really appropriate to have. Usually you write only one distribution on the right-hand side of this little thing, not this complicated function of distributions. This is more just to explain.
OK, and so usually the thing you should say is that t is equal to X divided by the square root of Z divided by d, where X has a standard normal distribution and Z has a chi square distribution with d degrees of freedom. So what do we need here? Well, I need to have something which looks like my sigma hat, right? So somehow, inevitably, I'm going to need to have sigma hat. Now, of course, I need to divide this by my sigma so that my sigma goes away. And so now this thing here-- sorry, I should move on to the right, OK. And so this thing here-- so sigma hat is the square root of Sn. And now I'm almost there. So this thing is actually equal to square root of n. But this thing here follows a distribution which is actually the square root of a chi square distribution with n minus 1 degrees of freedom divided by n, because Sn is equal to 1 over n times the sum from i equal 1 to n of xi minus x bar squared. And we just said that this part here was a chi square distribution. We didn't just say it-- we said it a few lectures back-- that this thing was a chi square distribution, and that the presence of this x bar here was actually removing one degree of freedom from this sum. OK, so this guy here has the same distribution as a chi square n minus 1 divided by n. So I need to still arrange this thing a little bit to have a t distribution. I should not see n here; I should see n minus 1. The d is the same as this d here. And so let me make the correction so that this actually happens. Well, if I write square root of n minus 1, as on the slide, times xn bar minus mu, divided by-- well, let me write it as square root of Sn, which is my sigma hat-- then this thing follows an N 0, 1 divided by the square root of my chi square distribution with n minus 1 degrees of freedom, divided by n minus 1. And here, the fact that I multiply by square root of n minus 1 while I have the square root of n here is essentially the same as dividing here by n minus 1. And that's my t distribution with n minus 1 degrees of freedom, just by definition of what this thing is. OK? All right. Yes? AUDIENCE: Where'd you get the square root from? PROFESSOR: This guy? Oh sorry, that's sigma squared. Thank you. That's the estimator of the variance, not the estimator of the standard deviation. And when I want to divide, I divide by the standard deviation. Thank you. Any other question or remark? AUDIENCE: Shouldn't you divide by sigma squared? The actual. The estimator for the variance is equal to sigma squared times chi square, right? PROFESSOR: The estimator for the variance. Oh yes, you're right. So there's a sigma squared here. Is that what you're asking? AUDIENCE: Yeah. PROFESSOR: Yes, absolutely. And that's where it gets canceled here. OK? So this is really a sigma squared times chi square. OK. So the fact that it's sigma squared is just because I can pull out sigma squared and then think of those guys as N 0, 1. All right. So that's my t distribution. Now that I actually have a pivotal distribution, what I do is I form the statistic. Here I called it Tn tilde. OK. And what is this thing? I know that this has a pivotal distribution. So for example, I know that the probability that Tn tilde, in absolute value, exceeds some number that I'm going to call q alpha over 2 of the t n minus 1, is equal to alpha.
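Two quick sketches of this in code (assuming numpy and scipy; the sample sizes and parameters are hypothetical): first, a simulation check that square root of n minus 1, times xn bar minus mu, divided by square root of Sn really follows a t distribution with n minus 1 degrees of freedom, whatever sigma is; second, the resulting test, whose statistic is algebraically the same as the usual packaged t statistic computed with the 1 over n minus 1 normalized variance.

    import numpy as np
    from scipy.stats import t, ttest_1samp

    rng = np.random.default_rng(0)
    n, mu, sigma = 8, 2.0, 5.0
    # 1) distributional check: the statistic matches t(n - 1), whatever sigma is
    x = rng.normal(mu, sigma, size=(100000, n))
    xbar = x.mean(axis=1)
    sn = ((x - xbar[:, None]) ** 2).mean(axis=1)          # 1/n-normalized Sn
    stat = np.sqrt(n - 1) * (xbar - mu) / np.sqrt(sn)
    print(np.quantile(stat, 0.975), t.ppf(0.975, n - 1))  # both about 2.36

    # 2) the test itself, on one hypothetical sample: H0 mu = 0, level 5%
    y, mu0, alpha = rng.normal(0.5, 1.0, size=15), 0.0, 0.05
    tn = np.sqrt(len(y) - 1) * (y.mean() - mu0) / y.std()   # y.std() is the 1/n form
    reject = abs(tn) > t.ppf(1 - alpha / 2, df=len(y) - 1)  # Student, not Gaussian, quantile
    print(reject, tn, ttest_1samp(y, mu0).statistic)        # the two statistics agree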
So that's basically, remember the t distribution has the same shape as the Gaussian distribution. What I'm finding is, for this t distribution, some number q alpha over 2 of t n minus 1 and minus q alpha over 2 of t n minus 1. So those are different from the Gaussian ones. Such that the area under the curve here is alpha over 2 on each side so that the probability that my absolute value exceeds this number is equal to alpha. And that's what I'm going to use to reject the test. So now my test becomes, for H0, say mu is equal to some mu 0, versus H1, mu is not equal to mu 0. The rejection region is going to be equal to the set on which square root of n minus 1 times xn bar minus mu 0 this time, divided by square root of Sn exceeds, in absolute value-- sorry, that's already here-- exceeds q alpha over 2 of t n minus 1. So I reject when this thing is large. The same as the Gaussian case, except that rather than reading my quantiles from the Gaussian table I read them from the Student table. It's just the same thing. So they're just going to be a little bit farther out. So this guy here is just going to be a little bigger than the one for the Gaussian, because it's going to require a little more evidence in my data to be able to reject, because I have to account for the fluctuations of sigma hat. So of course Student's test is used everywhere. People use only t tests, right? If you look at any output, even if you had 500 observations, if you look at the statistical software output it's going to say t test. And the reason why you see t test is because somehow it's felt like it's not asymptotic. You don't need to be particularly careful. And anyway, if n is equal to 500, since the two curves are on top of each other it's basically the same thing. So it doesn't really change anything. So why not use the t test? So it's not asymptotic. It doesn't require the Central Limit Theorem to kick in. And so in particular it can be run if you have 15 observations. Of course, the drawback of the Student test is that it relies on the assumption that the sample is Gaussian, and that's something we really need to keep in mind. If you have a small sample size, there is no magic going on. It's not like the Student t test allows you to get rid of this normality requirement. It's built in. It assumes that your data has a Gaussian distribution. So if you have 15 observations, what are you going to do? You want to test if the mean is equal to 0 or not equal to 0, but you have only 15 observations. You have to somehow assume that your data is Gaussian. But if the data is given to you, this is not math, you actually have to check that it's Gaussian. And so we're going to have to find a test that, given some data, tells us whether it's Gaussian or not. If I have 15 observations, 8 of them are equal to plus 1 and 7 of them are equal to minus 1, then it's pretty unlikely that you're going to be able to conclude that your data has a Gaussian distribution. However, if you see some sort of spread around some value, you form a histogram maybe and it sort of looks like it's a Gaussian, you might want to say it's Gaussian. And so how do we make this more quantitative? Well, the sad answer to this question is that there will be some tests that make it quantitative, but here, if you think about it for one second, what is going to be your null hypothesis?
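Before turning to that question, here is a sketch of the t test just described, written out in Python (SciPy assumed; the function name and the sample data are hypothetical placeholders of mine):

```python
# Sketch of the two-sided one-sample t test described above, assuming SciPy.
import numpy as np
from scipy import stats

def t_test(x, mu0, alpha=0.05):
    n = len(x)
    xbar = x.mean()
    Sn = ((x - xbar) ** 2).mean()                 # (1/n)-normalized variance estimate
    Tn = np.sqrt(n - 1) * (xbar - mu0) / np.sqrt(Sn)
    q = stats.t.ppf(1 - alpha / 2, df=n - 1)      # q_{alpha/2} of t_{n-1}
    return abs(Tn) > q, Tn, q                     # (reject?, statistic, cutoff)

x = np.array([2.1, 1.8, 2.5, 2.0, 1.7, 2.2, 2.4, 1.9, 2.3, 2.0])  # hypothetical data
print(t_test(x, mu0=2.0))
```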
Your null hypothesis, since it's one point, it's going to be that it's Gaussian, and then the alternative is going to be that it's not Gaussian. So what it means is that, for the first time in your statistician life, you're going to want to conclude that H0 is the true one. You're definitely not going to want to say that it's not Gaussian, because then everything you know is sort of falling apart. And so it's kind of a weird thing where you're sort of going to be seeking tests that have no power, basically. You're going to want to accept, and that's the nature of the problem. The number of alternatives, the number of ways you can be not Gaussian, is so huge that all tests are sort of bound to have very low power. And so that's why people are pretty happy with the idea that things are Gaussian, because it's very hard to find a test that's going to reject this hypothesis. And so we're even going to find some tests that are visual, where you're going to be able to say, well, sort of looks Gaussian to me. It allows you to deal with the borderline cases pretty efficiently. We'll see actually a particular example. All right, so this theory of testing whether data comes from a particular distribution is called goodness of fit. Is this distribution a good fit for my data? That's the goodness of fit test. We have just seen a goodness of fit test. What was it? Yeah. The chi square test, right? The chi square test: we were given a candidate PMF and we were testing if this was a good fit for our data. That was a goodness of fit test. So of course multinomial is one example, but really what we have in the back of our mind is, I want to test if my data is Gaussian. That's basically the usual thing. And just like you always see the t test as the standard output from statistical software whether you ask for it or not, there will be a test for normality whether you ask for it or not in any statistical software. All right. So a goodness of fit test looks as follows. There's a random variable X and you're given i.i.d. copies of X, X1 to Xn; they come from the same distribution. And you're going to ask the following question: does X have a standard normal distribution? So for the t distribution that's definitely the kind of question you may want to ask. Does X have a uniform distribution on 0, 1? That's different from the discrete uniform with PMF 1 over k, 1 over k, and so on; it's the continuous notion of uniformity. And for example, you might want to test that-- so there's actually a nice exercise, which is if you look at the p-values. So we've defined what the p-values were. And the p-value's a number between 0 and 1, right? And you could actually ask yourself, what is the distribution of the p-value under the null? So the p-value is a random number. It's the probability-- so the p-value-- let's look at the following test. H0, mu is equal to 0, versus H1, mu is not equal to 0. And I know that the p-value is-- so I'm going to form what? I'm going to look at Xn bar minus mu times square root of n divided by-- let's say that we know sigma for one second. Then the p-value is the probability that this is larger than square root of n times little xn bar minus mu 0-- minus 0, actually, in this case-- divided by sigma, where this guy is the observed value. OK. So now you could say, well, how is that a random variable? It's just a number. It's just a probability of something. But then I can view this as a function of this guy here, and when I plug the random variable back in, it becomes a random variable.
So what I mean by this is that if I look at this value here, if I say that phi is the CDF of N 0, 1, so the p-value is the probability that it exceeds this. So that's the probability that I'm either here or here. AUDIENCE: [INAUDIBLE] PROFESSOR: No, it's not, right? AUDIENCE: [INAUDIBLE] PROFESSOR: This is a big X and this is a small x. This is just where you plug in your data. The p-value is the probability that you have more evidence against your null than what you already have. OK, so now I can write it in terms of cumulative distribution functions. So this is what? This is phi of this guy, which is minus this thing here. Well it's basically 2 times this guy, phi of minus square root of n, Xn bar divided by sigma. That's my p-value. If you give me data, I'm going to compute the average and plug it in there, and it can spit out the p-value. Everybody agrees? So now I can view this, if I start now looking back I say, well, where does this data come from? Well, it could be a random variable. It came from the realization of this thing. So I can try to, I can think of this value, where now this is a random variable because I just plugged in a random variable in here. So now I view my p-value as a random variable. So I keep switching from small x to large X. Everybody agrees what I'm doing here? So I just wrote it as a deterministic function of some deterministic number, and now the function stays deterministic but the number becomes random. And so I can think of this as some statistic of my data. And I could say, well, what is the distribution of this random variable? Now if my data is actually normally distributed, so I'm actually under the null, so under the null, that means that Xn bar times square root of n divided by sigma has what distribution? Normal? Well it was sigma, I assume I knew it. So it's N 0, 1, right? I divided by sigma here. OK? So now I have this random variable. And so my random variable is now 2 phi of minus absolute value of a Gaussian. And I'm actually interested in the distribution of this thing. I could ask that. Anybody has an idea of how you would want to tackle this thing? If I ask you, what is the distribution of a random variable, how do you tackle this question? There's basically two ways. One is to try to find something that looks like the expectation of h of x for all h. And you try to write this using change of variables and something that looks like integral of h of x p of x dx. And then you say, well, that's the density. If you can read this for any h, then that's the way you would do it. But there's a simpler way that does not involve changing variables, et cetera, you just try to compute the cumulative distribution function. So let's try to compute the probability that 2 phi minus N 0, 1, is less than t. And maybe we can find something we know. OK. Well that's equal to what? That's the probability that a minus N 0, well let's say that an N 0, 1-- sorry, N 0, 1 absolute value is greater than minus phi inverse of t over 2. And that's what? Well, it's just the same thing that we had before. It's equal to-- so if I look again, this is the probability that I'm actually on this side or that side of this number. And this number is what? It's minus phi of t over 2. Why do I have a minus here? That's fine, OK. So it's actually not this, it's actually the probability that my absolute value-- oh, because phi inverse. OK. Because phi inverse is-- so I'm going to look at t between 0 and-- so this number is ranging between 0 and 1. 
So it means that this number is ranging between 0-- well, the probability that something is less than t should be ranging between the numbers that this guy takes, so that's between 0 and 2. Because this thing takes values between 0 and 2. I want to see 0 and 1, though. AUDIENCE: Negative absolute value is always less than [INAUDIBLE]. PROFESSOR: Yeah. You're right, thank you. So this is always some number which is less than 0, so the probability that the Gaussian is less than this number is always less than the probability it's less than 0, which is 1/2, so t only has to be between 0 and 1. Thank you. And so now for t between 0 and 1, then this guy is actually becoming something which is positive, for the same reason as before. And so that's what? That's just basically 2 times phi of phi inverse of t over 2. That's just playing with the symmetry a little bit. You can look at the areas under the curve. And so what it means is that those two guys cancel. This is the identity. And so this is equal to t. So which distribution has a density-- sorry, which distribution has a cumulative distribution function which is equal to t for t between 0 and 1? That's the uniform distribution, right? So it means that this guy follows a uniform distribution on the interval 0, 1. And you could actually check that. For any test you're going to come up with, this is going to be the case. Your p-value under the null will have a distribution which is uniform. So now if somebody shows up and says, here's my test, it's awesome, it just works great. I'm not going to explain to you how I built it, it's a complicated statistics that involve moments of order 27. And I'm like, OK, you know, how am I going to test that your test statistic actually makes sense? Well one thing I can do is to run a bunch of data, draw a bunch of samples, compute your test statistic, compute the p-value, and check if my p-value has a uniform distribution on the interval 0, 1. But for that I need to have a test that, given a bunch of observations, can tell me whether they're actually distributed uniformly on the interval 0, 1. And again one thing I could do is build a histogram and see if it looks like that of a uniform, but I could also try to be slightly more quantitative about this. AUDIENCE: Why does the [INAUDIBLE] have to be for a [INAUDIBLE]? PROFESSOR: For two tests? AUDIENCE: For each test. Why does the p-value have to be normal? I mean, uniform. PROFESSOR: It's uniform under the null. So because my test statistic was built under the null, and so I have to be able to plug in the right value in there, otherwise it's going to shift everything for this particular test. AUDIENCE: At the beginning while your probabilities were of big Xn, that thing. That thing is the p-value. PROFESSOR: That's the p-value, right? That's the definition of the p-value. AUDIENCE: OK. PROFESSOR: So it's the probability that my test statistic exceeds what I've actually observed. AUDIENCE: So how you run the test is basically you have your observations and plug them into the cumulative distribution function for a normal, and then see if it falls under the given-- PROFESSOR: Yeah. So my p-value is just this number when I just plug in the values that I observe here. That's one number. For every dataset you're going to give me, it's going to be one number. Now what I can do is generate a bunch of datasets of size n, like 200 of them. And then I'm going to have a new sample of say 200, which is just the sample of 200 p-values. 
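That procedure can be sketched directly (NumPy/SciPy assumed; the sample sizes are arbitrary choices of mine): simulate many datasets under the null, compute the p-value 2 phi of minus square root of n times the absolute average over sigma for each, and check that the resulting sample of p-values looks uniform on the interval 0, 1.

```python
# Simulation sketch: under H0 (mu = 0, sigma known), the p-value should be U(0, 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, sigma, n_sims = 50, 1.0, 10_000
x = rng.normal(0.0, sigma, size=(n_sims, n))
pvals = 2 * stats.norm.cdf(-np.sqrt(n) * np.abs(x.mean(axis=1)) / sigma)

# The empirical quantiles of a U(0, 1) sample should be close to the identity:
print(np.quantile(pvals, [0.1, 0.5, 0.9]))   # roughly 0.1, 0.5, 0.9
```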
And I want to test if those p-values have a uniform distribution. OK? Because that's the distribution they should be having. All right? OK. This one we've already seen. Does x have a PMF with 30%, 50%, and 20%? That's something I could try to test. That looks like your grade point distribution for this class. Well not exactly, but that looks like it. So all these things are known as goodness of fit tests. A goodness of fit test is for when you want to know whether the data that you have at hand follows the hypothesized distribution. So it's not a parametric test. It's not a test that says, is my mean equal to 25 or not. Is my proportion of heads larger than 1/2 or not? It's something that says, my distribution is this particular thing. So I'm going to write them as goodness of fit, G-O-F here. You don't need to have parametric modeling to do that. So how does this work? So if I don't have any parametric modeling, I need to have something which is somewhat non-parametric, something that goes beyond computing the mean and the standard deviation, something that computes some intrinsic non-parametric aspect of my data. And just like in the computation we made here, what we did is we said, well, if I actually check that the CDF of my p-values is that of a uniform, then I know they're uniform. So it means that the cumulative distribution function is an intrinsic object that captures the entire distribution. Everything I need to know about my distribution is captured by the cumulative distribution function. Now I have a data-driven way of computing an estimate for the cumulative distribution function, which is using the old statistical trick which consists of replacing expectations by averages. So as I said, the cumulative distribution function for any distribution, for any random variable, is-- so F of t is the probability that X is less than or equal to t, which is equal to the expectation of the indicator that X is less than or equal to t. That's the definition of a probability. And so here I'm just going to replace the expectation by the average. That's my usual statistical trick. And so my estimator Fn for the distribution is going to be 1 over n times the sum from i equal 1 to n of these indicators. And this is called the empirical CDF. It's just the data version of the CDF. So I just replaced this expectation here by an average. Now when I sum indicators, I'm actually counting the number of them that satisfy something. So if you look at what this guy is, this is the number of Xi's that are less than or equal to t, right? And so if I divide by n, it's the proportion of observations I have that are less than or equal to t. That's what the empirical distribution is. That's what's written here, the number of data points that are less than or equal to t. And so this is something that's trying to estimate the true CDF. And the law of large numbers actually tells me that for any given t, if n is large enough, Fn of t should be close to F of t. Because it's an average. And this entire statistical trick, which consists of replacing expectations by averages, is justified by the law of large numbers. Every time we used it, that was because the law of large numbers sort of guaranteed to us that the average was close to the expectation. OK. So the law of large numbers tells me that Fn of t converges-- that's the strong law, it says that almost surely Fn of t goes to F of t. And that's just for any given t. Is there any question about this?
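As a small data version of the definition above, here is a sketch of the empirical CDF in Python (NumPy assumed; searchsorted is just a fast way to count the observations that are at most t):

```python
# Sketch of the empirical CDF: Fn(t) = (1/n) * #{i : Xi <= t}, assuming NumPy.
import numpy as np

def empirical_cdf(sample):
    sorted_sample = np.sort(np.asarray(sample))
    n = len(sorted_sample)
    def Fn(t):
        # side="right" counts the observations <= t
        return np.searchsorted(sorted_sample, t, side="right") / n
    return Fn

Fn = empirical_cdf([0.3, -1.2, 0.8, 0.1, 2.0])   # hypothetical data
print(Fn(0.0), Fn(1.0))                          # 0.2 and 0.8 here
```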
That averages converge to expectation, that's the law of large number. And almost surely we could say in probability it's the same, that would be the weak law of large number. Now this is fine. For any given t, the average converges to the true. It just happens that this random variable is indexed by t, and I could do it for t equals 1 or 2 or 25, and just check it again. But I might want to check it for all t's at once. And that's actually a different result. That's called a uniform result. I want this to hold for all t at the same time. And it may be the case that it works for each t individually but not for all t's at the same time. What could happen is that for t equals 1 it converges at a certain rate, and for t equals 2 it converges at a bit of a slower rate, and for t equals 3 at a slower rate and slower rate. And so as t goes to infinity, the rate is going to vanish and nothing is going to converge. That could happen. I could make this happen at a finite point. There's many ways where it could make this happen. Let's see how that could work. I could say, well, actually no. I still need to have this at infinity for some reason. It turns out that this is still true uniformly, and this is actually a much more complicated result than the law of large number. It's called Glivenko-Cantelli Theorem. And the Glivenko-Cantelli Theorem tells me that, for all t's at once, Fn converges to F. So let me just show you quickly why this is just a little bit stronger than the one that we had. If sup is confusing you, think of max. It's just the max over an infinite set. And so what we know is that Fn of t goes to F of t as n goes to infinity. And that's almost surely. And that's the law of large numbers. Which is equivalent to saying that Fn of t minus F of t as n goes to infinity converges almost surely to 0, right? This is the same thing. Now I want this to happen for all t's at once. So what I'm going to do-- oh, and this is actually equivalent to this. And so what I'm going to do is I'm going to make it a little stronger. So here the arrow only goes one way. And this is where the sup for t in R of Fn of t. And you could actually show that this happens also almost surely. Now maybe almost surely is a bit more difficult to get a grasp on. Does anybody want to see, like why this statement for this sup is strictly stronger than the one that holds individually for all t's? You want to see that? OK, so let's do that. So forget about it almost surely for one second. Let's just do it in probability. The fact that Fn of t converges to F of t for all t, in probability means that this goes to 0 as n goes to infinity for any epsilon. For any epsilon in t we know we have this. That's the convergence in probability. Now what I want is to put a sup here. The probability that the sup is lower than epsilon, might be actually always larger than, never go to 0 in some cases. It could be the case that for each given t, I can make n large enough so that this probability becomes small. But then maybe it's an n of t. So this here means that for any-- maybe I shouldn't put, let me put a delta here. So for any epsilon, for any t and for any epsilon, there exists n, which could depend on both epsilon and t, such that the probability that Fn t minus F of t exceeding delta is less than epsilon t. There exists an n and a delta. No, that's for all delta, sorry. So this is true. That's what this limit statement actually means. 
But it could be the case that now when I take the sup over t, maybe that n of t is something that looks like t. Or maybe, well, the integer part of t. It could be, right? I don't say anything. It's just an n that depends on t. So if this n is just t, maybe t over epsilon, because I want epsilon. Something like this. Well that means that if I want this to hold for all t's at once, I'm going to have to go for the n that works for all t's at once. But there's no such n that works for all t's at once. The only n that works is infinity. And so I cannot make this happen for all of them. What Glivenko-Cantelli tells you is that this situation does not happen here. The n does not have to depend on t: there's actually one n large enough that works for all the t's at once, and that's it. OK. So just so you know why this is actually a stronger statement, and that's basically how it works. Any other question? Yeah. AUDIENCE: So what's the condition for this to hold? Does the random variable have to have a finite mean, finite variance? PROFESSOR: No. Well the random variable does have finite mean and finite variance, because the random variable is an indicator. So it has everything you want. This is one of the nicest random variables, this is a Bernoulli random variable. So here when I say law of large numbers, that this holds. Where did I write this? I think I erased it. Yeah, the one over there. This is actually the law of large numbers for Bernoulli random variables. They have everything you want. They're bounded. Yes. AUDIENCE: So I'm having trouble understanding the first statement. So it says, for all epsilon and all t, the probability of that-- PROFESSOR: So you mean this one? AUDIENCE: Yeah. PROFESSOR: For all epsilon and all t. So you fix them now. Then the probability that, sorry, that was delta. I changed this epsilon to delta at some point. AUDIENCE: And then what's the second line? PROFESSOR: Oh, so then the second line says that, so I'm just rewriting in terms of epsilon delta what this n goes to infinity means. So it means that for any t and delta, so that's the same as this guy here, then here I'm just going back to rewriting this. It says that for any epsilon there exists an n large enough such that, well, n larger than this thing basically, such that this thing is less than epsilon. So Glivenko-Cantelli tells us that not only is this thing a good idea pointwise, but it's also a good idea uniformly. And all it's saying is if you actually were happy with just this result, you should be even happier with that result. And both of those results only tell you one thing. They're just telling you that the empirical CDF is a good estimator of the CDF. Now since those indicators are Bernoulli random variables, I can actually do even more. So let me get this guy here. OK so, those guys, the indicators inside Fn of t, each one is a Bernoulli random variable. What is the parameter of this Bernoulli distribution? What is the probability that it takes value 1? AUDIENCE: F of t. PROFESSOR: F of t, right? It's just the probability that this thing happens, which is F of t. So in particular the variance of this guy is the variance of this Bernoulli. So it's F of t times 1 minus F of t. And I can use that in my Central Limit Theorem. And the Central Limit Theorem is just going to tell me that if I look at the average of random variables, I remove their mean, so I look at square root of n times Fn of t, which I could really write as xn bar, right? That's really just an xn bar. Minus the expectation, which is F of t, that comes from this guy.
Now if I divide by the square root of the variance, that's my square root of p times 1 minus p. Then this guy, by the Central Limit Theorem, goes to some N 0, 1. Which is the same thing as you see there, except that the variance was put on the other side. OK. Do I have the same thing uniformly in t? Can I write something that holds uniformly in t? Well, if you think about it for one second it's unlikely it's going to go too well. In the sense that it's unlikely that the supremum of those random variables over t is going to also be a Gaussian. And the reason is that, well actually the reason is that this thing is actually a stochastic process indexed by t. A stochastic process is just a collection of random variables that's indexed by, let's say, time. The one that's the most famous is Brownian motion, and it's basically a bunch of Gaussian increments. So when you go from t to just t a little after that, you add some Gaussian into the thing. And here it's basically the same thing that's happening. And you would sort of expect, since each of these guys is Gaussian, you would expect to see something that looks like a Brownian motion at the end. But it's not exactly a Brownian motion, it's something that's called the Brownian bridge. So if you've seen the Brownian motion, if I make it start at 0 for example, so this is the value of my Brownian motion. Let's write it. So this is one path, one realization of Brownian motion. Let's call it w of t as t increases. So let's say it starts at 0 and looks like something like this. So that's what Brownian motion looks like. It's just something that's pretty nasty. I mean it looks pretty nasty, it's nowhere differentiable, et cetera, but it's actually very benign in some average way. So Brownian motion is just something, you should view this as if I sum some random variables that are Gaussian, and then I look at this from farther and farther away, it's going to look like this. And so here I cannot have a Brownian motion in the end, because what is the variance of Fn of t minus F of t at t equal to 1? Sorry, at t equal to infinity. AUDIENCE: 0. PROFESSOR: It's 0, right? The variance is 0 at t equals negative infinity, because at negative infinity F of t is going to 0. And as t goes to plus infinity, F of t is going to 1, which means that the variance of this guy as t goes from negative infinity to plus infinity is pinned to be 0 on each side. And so my Brownian motion cannot-- when I describe a Brownian motion I'm just adding more and more entropy to the thing and it's going all over the place, but here what I want is that as I go back it should go back to essentially 0. It should be pinned down to a specific value at the end. And that's actually called the Brownian bridge. It's a Brownian motion that's conditioned to come back to where it started, essentially. Now you don't need to understand Brownian bridges to understand what I'm going to be telling you. The only thing I want to communicate to you is that this guy here, when I say a Brownian bridge, I can go to any probabilist and they can tell you all the probability properties of this stochastic process. They can tell me the probability that it takes any value at any point. In particular, they can tell me-- the supremum between 0 and 1 of this guy, they can tell me what the cumulative distribution function of this thing is, can tell me what the density of this thing is, can tell me everything.
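For intuition, here is a quick simulation sketch of that pinning (NumPy assumed): a standard construction takes a discretized Brownian motion W on [0, 1] and sets B(t) = W(t) - t W(1), which starts and ends at 0.

```python
# Sketch of a discretized Brownian bridge on [0, 1], assuming NumPy.
import numpy as np

rng = np.random.default_rng(2)
m = 1000                                  # number of grid steps, arbitrary
t = np.linspace(0.0, 1.0, m + 1)
increments = rng.normal(scale=np.sqrt(1.0 / m), size=m)
W = np.concatenate(([0.0], np.cumsum(increments)))   # Brownian motion path
B = W - t * W[-1]                                    # Brownian bridge: pinned at both ends
print(B[0], B[-1])                                   # both exactly 0
```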
So it means that if I want to compute probabilities on this object here, which is the maximum value that this guy can take over a certain period of time, which is basically this random variable. So if I look at the value here, it's a random variable that fluctuates. They can tell me where it is with high probability, can tell me the quantiles of this thing, which is useful because I can build a table and use it to compute my quantiles and form tests from it. So that's what actually is quite nice. It says that if I look at the sup over t of square root of n times Fn hat minus F, I get something that looks like the sup of these Gaussians, but it's not really the sup of Gaussians, it's the sup of a Brownian bridge. Now there's something you should be very careful about here. I cheated a little bit. I mean, I didn't cheat, I can do whatever I want. But my notation might be a little confusing. Everybody sees that this t here is not the same as this t here? Can somebody see that? Just because, first of all, this guy's between 0 and 1. And this guy is in all of R. What is this t here? As a function of this t here? This guy is F of this guy. So really, if I wanted it to be completely transparent and not save the keys of my keyboard, I would write this as: the sup over t of square root of n times Fn of t minus F of t converges in distribution, as n goes to infinity, to the supremum of B of something. The supremum over t, again in R, so this guy is for t in the entire real line, this guy is for t in the entire real line. But now I should write B of what? F of t, exactly. So really the t here is F of the original one. And so that's a Brownian bridge, where as t ranges over the real line, F of t goes from 0 to 1, and it looks like this. A Brownian bridge at 0 is 0, at 1 it's 0. And it does this. But it doesn't stray too far because I condition it to come back to this point. That's what a Brownian bridge is. OK. So in particular, I can find a distribution for this guy. And I can use this to build a test which is called the Kolmogorov-Smirnov test. The idea is the following. It says, if I want to test some distribution F0, some distribution that has a particular CDF F0, and I plug it in under the null, then this guy should have pretty much the same distribution as the supremum of a Brownian bridge. And so if I see this to be much larger than it should be when it's the supremum of a Brownian bridge, I'm actually going to reject my hypothesis. So here's the test. I want to test whether H0, F is equal to F0, and you will see that most of the goodness of fit tests are formulated mathematically in terms of the cumulative distribution function. I could formulate them in terms of the probability density function, or just write X follows N 0, 1, but that's the way we write it. We formulate them in terms of the cumulative distribution function because that's what we have a handle on through the empirical cumulative distribution function. And then it's versus H1, F is not equal to F0. So now I have my empirical CDF. And I hope that for all t's, Fn of t should be close to F0 of t. Let me write it like this. I put the 0 in the exponent because otherwise that would be the empirical distribution function based on zero observations. Now I form the following test statistic. So my test statistic is Tn, which is the supremum over t in the real line of square root of n times the absolute value of Fn of t minus F of t, sorry, F0 of t. So I can compute everything. I know this from the data, and this is the one that comes from my null hypothesis. So I can compute this thing.
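In standard software this computation is a single call. Here is a sketch using SciPy's kstest routine (note that it reports the supremum of the absolute value of Fn of t minus F0 of t, without the square root of n factor, together with a p-value), for a fully specified F0 equal to N(0, 1):

```python
# Sketch of the Kolmogorov-Smirnov test against a fully specified F0 = N(0, 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(size=100)                 # hypothetical data
D, pval = stats.kstest(x, "norm")        # use args=(mu, sigma) for another fixed F0
print(D, pval)                           # reject H0 at level alpha if pval < alpha
```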
And I know that if this is true, this should actually be the supremum of a Brownian bridge. Pretty much. And so the Kolmogorov-Smirnov test is simply, reject if this guy, Tn, in absolute value-- no actually not in absolute value, this is already absolute valued. Then this guy should be what? It should be larger than the q alpha over 2 quantile of the distribution that I have. But now rather than using the N 0, 1, or the t n minus 1, this is whatever notation I have for the supremum of a Brownian bridge. Just like I did for any pivotal distribution. That was the same recipe every single time. I formed the test statistic such that the asymptotic distribution did not depend on anything unknown, and then I would just reject when this pivotal statistic was larger than something. Yes? AUDIENCE: I'm not really sure why Brownian bridge appears. PROFESSOR: Do you know what a Brownian bridge is, or? AUDIENCE: Only vaguely. PROFESSOR: OK. So this thing here, think of it as being a Gaussian. So for all t you have a Gaussian distribution. Now a Brownian motion, so if I had a Brownian motion I need to tell you what the-- so basically a Brownian motion is something that looks like this. It's some random variable that's indexed by t. I want, say, the expectation of Xt to be equal to 0 for all t. And what I want is that the increments have a certain distribution. So what I want is that Xt minus Xs follows some distribution which is N 0, t minus s. So the increments are bigger as I go farther, in terms of variability. And I also want some covariance structure between the two. So what I want is that the covariance between Xs and Xt is actually equal to the minimum of s and t. Yeah, maybe. Yeah, that should be there. So this is, you open a probability book, that's what it's going to look like. So in particular, you can see, if I put 0 here and X0 is equal to 0, it has 0 variance. So in particular, it means that Xt, if I look only at the t-th one, it has some normal distribution with variance t. So this is something that just blows up. So this guy here looks like it's going to be a Brownian motion because when I look at the left-hand side it has a normal distribution. Now there's a bunch of other things you need to check. It's the fact that you have this covariance, for example, which I did not tell you. But it sure looks somewhat like that. And in particular, when I look at the normal with mean 0 and variance here, then it's clear that this guy does not have a variance that's going to go to infinity just like the variance of this guy. We know that the variance is forced to be back to 0. And so in particular we have something that has mean 0 always, whose variance has to be 0 at t equals negative infinity, and 0 again at t equals plus infinity, and so I have to basically force it to be equal to 0 at each end. So the Brownian motion here tends to just go to infinity somewhere, whereas this guy forces it to come back. Now everything I described to you is on the scale negative infinity to plus infinity, but since everything depends on F of t, I can actually just put that back into a scale which is 0 and 1 by a simple change of variable. It's called a change of time for the Brownian motion. OK? Yeah. AUDIENCE: So does a Brownian bridge have a variance at each point that's proportional? Like it starts at 0 variance and then goes to 1/4 variance in the middle and then goes back to 0 variance? Like in the same parabolic shape? PROFESSOR: Yeah.
I mean, definitely. I mean by symmetry you can probably infer all the things. AUDIENCE: Well I can imagine Brownian bridge with a variance that starts at 0 and stays, like, the shape of the variance as you move along. PROFESSOR: Yeah, so I don't know if-- there is an explicit formula for this, and it's simple. That's what I can tell you, but I don't know what the explicit, off the top of my head what the explicit formula is. AUDIENCE: But would it have to match this F of t 1 minus F of t structure? Or not? PROFESSOR: Yeah. AUDIENCE: Or does the fact that we're taking the supremum-- PROFESSOR: No. Well the Brownian bridge, this is the supremum-- you're right. So this will be this form for the variance for sure, because this is only marginal distributions that don't take-- right, the process is not just what is the distribution at each instant t. It's also how do those distributions interact with each other in terms of covariance. For the marginal distributions at each instance t, you're right, the variance is F of t 1 minus F of t. We're not going to escape that. But then the covariance structure between those guys is a little more complicated. But yes, you're right. For marginal that's enough. Yeah? AUDIENCE: So the supremum of the Brownian bridge is a number between 0 and 10, let's just say. PROFESSOR: Yeah, it could be infinity. AUDIENCE: So it's not symmetrical with respect to 0, so why are we doing all over 2? PROFESSOR: OK. Did say raise it? Yeah. Because here I didn't say the supremum of the absolute value of a Brownian bridge, I just said the supremum of a Brownian bridge. But you're right, let's just do this like that. And then it's probably cleaner. So yeah, actually well it should be q alpha. So this is basically, you're right. So think of it as being one-sided. And there's actually no symmetry for the supremum. I mean the supremum is not symmetric around 0, so you're right. I should not use alpha over 2, thank you. Any other question? This should be alpha. Yeah. I mean those slides were written with 1 minus alpha and I have not replaced all instances of 1 minus alpha by alpha. I mean, except this guy, tilde. Well, depends on how you want to call it. But this is still, the probability that Z exceeds this guy should be alpha. OK? And this can be found in tables. And we can compute the p-value just like we did before. But we have to simulate it because it's not going to depend on the cumulative distribution function of a Gaussian, like it did for the usual Gaussian test. That's something that's more complicated, and typically you don't even try. You get the statistical software to do it for you. So just let me skip a few lines. This is what the table looks like for the Kolmogorov-Smirnov test. So it just tells you, what is your number of observations, n. Then you want alpha to be equal to 5%, say. Let's say you have nine observations. So if square root of n absolute value of Fn of t minus F of t exceeds this thing, you reject. Well it's pretty clear from this test is that it looks very nice, and I tell you this is how you build it. But if you think about it for one second, it's actually really an annoying thing to build because you have to take the supremum over t. This depends on computing a supremum, which in practice might be super cumbersome. I don't want to have to compute this for all values t and then to take the maximum of those guys. It turns out that that's actually quite nice that we don't have to actually do this. What does the empirical distribution function look like? 
Well, this thing, remember Fn of t by definition was-- so let me go to the slide that's relevant. So Fn of t looks like this. So what it means is that when t is between two observations, then this guy is actually keeping the same value. So if I put my observations on the real line here. So let's say I have one observation here, one observation here, one observation here, one observation here, and one observation here, for simplicity. Then this guy is basically, up to this normalization, counting how many observations they have that are less than t. So since I normalize by n, I know that the smallest number here is going to be 0, and the largest number here is going to be 1. So let's say this looks like this. This is the value 1. At the value, since I take it less than or equal to, when I'm at Xi, I'm actually counting it. So the jump happens at Xi. So that's the first observation, and then I jump. By how much do I jump? Yeah? One over n, right? And then this value belongs to the right. And then I do it again. I know it's not going to work out for me, but we'll see. Oh no actually, I did pretty well. This is what my cumulative distribution looks like. Now if you look on this slide, there is this weird notation where I start putting now my indices in parentheses. X parenthesis 1, X parenthesis 2, et cetera. Those are called the ordered statistic. It's just because it might be, when my data is given to me I just call the first observation, the one that's on top of the table, but it doesn't have to be the smallest value. So it might be that this is X1 and that this is X2, and then this is X3, X4, and X5. These might be my observations. So what I do is that I call them in such a way that this is actually, I recall this guy X1, which is just really X3. This is X2, X3, X4, and X5. These are my reordered observations in such a way that the smallest one is indexed by one and the largest one is indexed by n. So now this is actually quite nice, because what I'm trying to do is to find the largest deviation from this guy to the true cumulative distribution function. The true cumulative distribution function, let's say it's Gaussian, looks like this. It's something continuous, for a symmetric distribution it crosses this axis at 1/2, and that's what it looks like. And the Kolmogorov-Smirnov test is just telling me how far do those two curves get in the worst possible case? So in particular here, where are they the farthest? Clearly that's this point. And so up to rescaling, this is the value I'm going to be interested in. That's how they get as far as possible from each other. Here, something just happened, right? The farthest distance that I got was exactly at one of those dots. It turns out this is enough to look at those dots. And the reason is, well because after this dot and until the next jump, this guy does not change, but this guy increases. And so the only point where they can be the farthest apart is either to the left of a jump or to the right of a jump. That's the only place where they can be far from each other. And that means that only one observation. Everybody sees that? The farthest points, the points at which those two curves are the farthest from each other, has to be at one of the observations. And so rather than looking at a sup over all possible t's, really all I need to do is to look at a maximum only at my observations. I just need to check at each of those points whether they're far. Now here, notice that you did not, this is not written Fn of Xi. 
The reason is because I actually know what Fn of Xi is. Fn of the i-th order observation is just the number of jumps I've had until this observation. So here, I know that the value of Fn is 1 over n, here it's 2 over n, 3 over n, 4 over n, 5 over n. So I knew that the values of Fn at my observations, and those are actually the only values that Fn can take, are an integer divided by n. And that's why you see i minus 1 over n, or i over n. This is the difference just before the jump, and this is the difference at the jump. So here the key message is that this is no longer a supremum over all t's, but it's just the maximum from 1 to n. So I really have only two n values to compute. This value and this value for each observation, that's 2n total. I look at the maximum and that's actually the value. And it's actually equal to tn. It's not an approximation. Those things are equal. That's just the only places where those guys can be maximum. Yes? AUDIENCE: It seems like since the null hypothesis [INAUDIBLE] the entire distribution of theta, this is like strictly more powerful than just doing it [INAUDIBLE]. PROFESSOR: It's strictly less powerful. AUDIENCE: Strictly less powerful. But is there, is that like a big trade-off that we're making when we do that? Obviously we're not certain in the first place that we want to assume normality. Does it make sense to [INAUDIBLE],, the Gaussian [INAUDIBLE]. PROFESSOR: So can you, I'm not sure what question you're asking. AUDIENCE: So when we're doing a normal test, we're just asking questions about the mus, the means of our distribution. [INAUDIBLE] This one, it seems like it would be both at the same time. [INAUDIBLE] Is this decreasing power [INAUDIBLE]?? PROFESSOR: So remember, here in this test we want to conclude to H0, in the other test we typically want to conclude to H1. So here we actually don't want power, in a way. And you have to also assume that doing a test on the mean is probably not the only thing you're going to end up doing on your data after you actually establish that it's normally distributed. Then you have the dataset, you've sort of established it's normally distributed, and then you can just run the arsenal of statistical studies. And we're going to see regression and all sorts of predictive things, which are not just tests if the mean is equal to something. Maybe you want to build a confidence interval for the mean. Then this is not, confidence interval is not a test. So you're going to have to first test if it's normal, and then see if you can actually use the quantiles of a Gaussian distribution or a t distribution to build this confidence interval. So in a way you should do this as like, the flat fee to enter the Gaussian world, and then you can do whatever you want to do in the Gaussian world. We'll see actually that your question goes back to something that's a little important, is here I said F0 is fully specified. It's like an N 1, 5. But I didn't say, is it normally distributed, which is the question that everybody asks. You're not asking, is it this particular normal distribution with this particular mean and this particular variance. So how would you do it in practice? Well you would say, I'm just going to replace the mean by the empirical mean and the variance by the empirical variance. But by doing that you're making a huge mistake because you are sort of depriving your test of the possibility to reject the Gaussian hypothesis just based on the fact that the mean is wrong or the variance is wrong. 
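Coming back to the order-statistic computation above: it fits in a few lines (NumPy/SciPy assumed, with the square root of n scaling from the slides included), checking only the 2n differences at the order statistics.

```python
# Sketch: the KS statistic via order statistics, checking only 2n values.
import numpy as np
from scipy import stats

def ks_statistic(x, F0=stats.norm.cdf):
    n = len(x)
    xs = np.sort(x)                        # order statistics X_(1) <= ... <= X_(n)
    i = np.arange(1, n + 1)
    d_after = i / n - F0(xs)               # gap at/after each jump
    d_before = F0(xs) - (i - 1) / n        # gap just before each jump
    return np.sqrt(n) * np.max(np.maximum(d_after, d_before))

rng = np.random.default_rng(4)
print(ks_statistic(rng.normal(size=10)))   # compare to the tabulated threshold
```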
You've already stuck to your data pretty well. And so you're sort of already tilting the game in favor of H0 big time. So there's actually a way to correct for this. OK, so this is about pivotal statistics. We've used this word many times. And so that's how it goes. I'm not going to go into this. It's really a recipe for how you would actually build the table that I showed you, this table. This is basically the recipe on how to build it. There's another recipe to build it, which is just open a book at this page. That's a little faster. Or use software. I just wanted to show you. So let's just keep in mind-- does anybody have a good memory? Let's just keep in mind this number. This is the threshold for the Kolmogorov-Smirnov statistic. If I have 10 observations and I want to do it at 5%, it's about 41%. So that's the number that it should be larger than. So it turns out that if you want to test if it's normal, and not just a specific normal, this number is going to be different. Do you think the number I'm going to read in a table that's appropriate for this is going to be larger or smaller? Who says larger? AUDIENCE: Sorry, what was the question? PROFESSOR: So the question is, this is the number I should see if my test was, is X, say, N 0, 5. Right? That's a specific distribution with a specific F0. So that's the number, I would build the Kolmogorov-Smirnov statistic from this. I would perform a test and check if my Kolmogorov-Smirnov statistic Tn is larger than this number or not. If it's larger I'm going to reject. Now I say, actually, I don't want to test if H0 is N 0, 5, but is it just an N mu, sigma squared for some mu and sigma squared. And in particular I'm just going to plug in mu hat and sigma hat into my F0, run the same statistic, but compare it to a different number. So the larger the number, the more or less likely am I to reject? The less likely I am to reject, right? So if I just use that number, let's say this is a large number, I would be more tempted to say it's Gaussian. And if you look at the table you would get that if you make the appropriate correction at the same number of observations, 10, and the same level, you get 25% as opposed to 41%. That means that you're actually much more likely, if you use the appropriate test, to reject the hypothesis that it's normal, which is bad news, because that means you don't have access to the Gaussian arsenal, and nobody wants to do this. So actually this is a mistake that people make a lot. They use the Kolmogorov-Smirnov test to test for normality without adjusting for the fact that they've plugged in the estimated mean and the estimated variance. This leads to rejecting less often, right? I mean this is almost half of the number that we had. And then they can be happy and walk home and say, well, I did the test and it was normal. So this is actually a mistake that I believe genuinely at least a quarter of people make on purpose. They just say, well I want it to be Gaussian so I'm just going to make my life easier. So this is the so-called Kolmogorov-Lilliefors test. We'll talk about it, well not today for sure. There are other statistics that you can use. And the idea is to say, well, we want to know if the empirical distribution function, the empirical CDF, is close to the true CDF. The way we did it is by forming the difference and looking at the worst possible distance they can be apart. That's called a sup norm, or L infinity norm, in functional analysis. So here, this is what it looked like.
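As a practical note on the correction just described: statsmodels ships a lilliefors routine (assuming statsmodels is installed) that plugs in mu hat and sigma hat and then uses the corrected, smaller critical values; a sketch:

```python
# Sketch of the Kolmogorov-Lilliefors test via statsmodels (assumed installed):
# it estimates mu and sigma from the data and applies the corrected critical values.
import numpy as np
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng(5)
x = rng.standard_exponential(size=50)     # clearly non-Gaussian data
stat, pval = lilliefors(x, dist="norm")
print(stat, pval)                         # a small p-value rejects normality
```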
The distance between Fn and F that we measured was just the supremum distance over all t's. That's one way to measure distance between two functions. But there are infinitely many ways to measure distance between functions. One is something we're much more familiar with, which is the squared L2-norm. This is nice because this has like an inner product, it has some nice properties. And you could actually just, rather than taking the sup, you could just integrate the squared distance. And this is what leads to the Cramér-von Mises test. And then there's another one that says, well, maybe I don't want to integrate without weights. Maybe I want to put weights that account for the variance. And this guy is called Anderson-Darling. For each of these tests you can check that the asymptotic distribution is going to be pivotal, which means that there will be a table at the back of some book that tells you what the quantiles of square root of n times this guy are asymptotically, basically. Yeah? AUDIENCE: For the Kolmogorov-Smirnov test, for the table that shows the value it has, it has the value for different n. But I thought we [INAUDIBLE]-- PROFESSOR: Yeah. So that's just to show you that asymptotically it's pivotal, and I can point you to one specific thing. But it turns out that this thing is actually pivotal for each n. And that's why you have this recipe to construct the entire thing, because it's not the same distribution for all possible n's. Also there's the n that shows up here. So no actually, this is something you should have in mind. So basically, let me strike what I just said. This distribution will not depend on F0 for any particular n. It's just not going to be a Brownian bridge but a finite sample approximation of a Brownian bridge, and you can simulate that by just drawing samples from it, building a histogram, and constructing the quantiles for this guy. AUDIENCE: No one has actually developed a table for Brownian-- PROFESSOR: Oh, there is one. That's the table, maybe. Let's see if we see it at the bottom of the other table. Yeah. See? Over 40, over 30. So this is not the Kolmogorov-Smirnov, but that's the Kolmogorov-Lilliefors. Those numbers that you see here, they are the numbers for the asymptotic thing which is some sort of Brownian bridge. Yeah? AUDIENCE: Two questions. If I want to build the Kolmogorov-Smirnov test, it says that F0 is required to be continuous. PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] If we have, like, probability mass at a particular value. Like some sort of discrete data. PROFESSOR: So then you won't have this nice picture, right? This can happen at any point because you're going to have discontinuities in F and those things can happen everywhere. And then-- AUDIENCE: Would the supremum still work? PROFESSOR: You mean the Brownian bridge? AUDIENCE: Yeah. The Kolmogorov test doesn't say that you have to be able to easily calculate the supremum. PROFESSOR: No, no, no, but you still need it. You still need it for-- so there are some finite sample versions of it that you can use that are slightly more conservative, which is in a way good news because you're going to conclude more to H0. And there's the Dvoretzky-Kiefer-Wolfowitz inequality, which is basically like Hoeffding's inequality. So it's basically, up to bad constants, telling you the same result as the Brownian bridge result, and those are true all the time. But for the exact asymptotic distribution, you need continuity. Yes.
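Both of the alternatives above are available as ready-made routines in recent SciPy versions (an assumption about the installed version); a sketch:

```python
# Sketch: Cramer-von Mises and Anderson-Darling, assuming a recent SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(size=50)
print(stats.cramervonmises(x, "norm"))   # integrates the squared distance to N(0, 1)
print(stats.anderson(x, dist="norm"))    # variance-weighted; estimates mu and sigma
```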
AUDIENCE: So just a clarification. So when we are testing the Kolmogorov, we shouldn't test a particular mu and sigma squared? PROFESSOR: Well if you know what they are you can use Kolmogorov-Smirnov, but if you don't know what they are you're going to plug in-- as soon as you're going to estimate the mean and the variance from the data, you should use the one we'll see next time, which is called Kolmogorov Lilliefors. You don't have to think about it too much. We'll talk about it on Thursday. Any other question? So we're out of time. So I think we should stop here, and we'll resume on Thursday.
MIT_18650_Statistics_for_Applications_Fall_2016
23_Generalized_Linear_Models_cont.txt
PHILIPPE RIGOLLET: All right. So let's continue talking about maximum likelihood estimation in the context of generalized linear models, all right? So in those generalized linear models, what we spent most of the past lectures working on is the conditional distribution of Y given X. And we're going to assume that this follows some distribution in the exponential family. OK. And so what it means is that if we look at the density, say-- or the PMF, but let's talk about density to make things clearer-- we're going to assume that Y given X has this distribution. So X is now fixed, because we're conditioning on it. And it has a density, which is of this form, with this c of Yi, phi. OK. So this c, again, we don't really need to think about it. This is something that's going to come up naturally as soon as you need a normalization factor. And so what it means, if this is the distribution of Y given Xi, so that's the density of Yi given Xi equals little xi. So if it's the conditional distribution of Yi given Xi, it should depend on xi somehow. And it does not appear to depend on Xi. And here, the model is going to be on theta i, which is just a function, theta i of Xi. And we're going to take a very specific one. It's going to be a function of a linear form of the Xi. So really we're going to take something which is of the form theta i, which is really just a function-- and the function itself does not depend on i-- of Xi transpose beta. OK? So all these parts here, these are really some modeling assumptions that we're making once we've agreed on what distribution we want. OK. So to do that, our goal, of course, is going to be to try to understand what this beta is. There's one beta here. What's important is that this beta does not depend on i. So if I observe pairs Xi, Yi-- let's say I observe n of them, i equals 1 to n-- the hope is that as I accumulate more and more pairs of this form, where it's always the same parameter that links Xi to Yi, this parameter beta, I should have a better and better estimation of this beta. Because it's always the same. And that's essentially what couples all of our distributions. If I did not assume this, then I could have a different distribution for each pair Yi given Xi. And I would not be able to do any statistics. Nothing would average in the end. But here I have the same beta, which means that I can hope to do statistics and average the errors out. OK. So I'm going to collect, so I'll come back to this. But as usual in the linear regression model, we're going to collect all our observations Yi. So I'm going to assume that they're real valued and that my Xi's take values in Rp, just like in the regression model. And I'm going to collect all my Yi's into one big vector Y in Rn and all my X's into one big matrix in Rn times p, just like for the linear regression model. All right, so, again, what I'm interested in here is the conditional distribution of Yi given Xi. OK. I said this is this distribution. When we're talking about regression, I defined last time what the regression function was. It's just one particular aspect of this conditional distribution. It's the conditional expectation of Yi given Xi. OK.
And so this conditional expectation, I will denote it by-- so I talk about the conditional, I'm going to call it, say, mu i, which is the conditional expectation of Yi given Xi equals some little xi, say. You can forget about this part if you find it confusing. It really doesn't matter. It's just that this means that this is a function of little xi. But if I only had the expectation of Yi given big Xi, this would be just a function of big Xi. So it really doesn't change anything. It's just a matter of notation. OK. So just forget about this part. But I'll just do it like that here. OK. So this is just the conditional expectation of Yi given Xi. It just depends on Xi, so I think it depends on i, and so I will call it mu i. But I know that since in a canonical exponential family, then I know that mu i is actually B prime of theta i. OK. So there's a 1 to 1 link between the canonical parameter of my exponential family and the mean mu i, the conditional expectation. And the modeling assumption we're going to make is not directly-- remember, that was the second aspect of the generalized linear model. We're not going to assume that theta i itself directly depends on Xi. We're going to assume that mu i has a particular dependence on Xi through the link function. So, again, we're back to modeling. So we have a link function g. And we assume that mu i depends on Xi as follows. g of mu i-- and remember, all g does for us is really map the space in which mu i lives, which could be just the interval 0, 1 to the entire real line, all right? And we're going to assume that this thing that lives in the real line is just Xi transpose beta. I should maybe put a small one, Xi transpose beta. OK? So we're making, indeed, some modeling assumption. But compared to in the linear regression model, we only assume that mu i was Xi transpose beta. So if you want to make a parallel between generalized linear models and linear model is the only difference is that g is not the identity necessarily in this case. And all the g does for us is to just make this thing compatible, that those two things on the left and the right of equality live in the same space. So in a way, we're not making a much bigger leap of faith by assuming a linear model. The linear link is already here. We're just making things compatible, all right? And so it's always the same link function. So now if I want to go back to beta-- right, because I'm going to want to express my likelihood-- if I were to express my likelihood from this, it would just be a function of theta, right? And so if I want to maximize my likelihood, I don't want to maximize it in theta. I want to maximize it in beta. So if I can write my density as a function of beta, then I will be able to write my likelihood as a function of beta, and then talk about my maximum likelihood estimator. And so all they need to do is to just say, OK, how do I replace theta by-- I know that theta is a function of beta, right? I wrote it here. So the question is, what is this function? And I actually have access to all of this. So what I know is that theta-- right, so mu is b prime of theta, which means that theta i is b prime inverse of mu i. OK. So that's what we've got from this derivative of the log likelihood equal to 0. That give us this guy inverted. And now I know that mu i is g inverse of Xi beta. So this composition of b prime inverse and g inverse is actually just the composition of g with b prime. Everybody's comfortable with this notation, the little circle? Any question about this? 
It just means that I first apply b prime-- well, actually, it's the inverse. If I look at the function g composed with b prime, applied to x, that's just g of b prime of x. OK. And then I take the inverse of this function, which is: first take g inverse, and then take b prime inverse. OK. So now, everywhere I saw theta, I see this function of beta. So I could technically plug that in. Of course, it's a little painful to have to write g circle b prime, inverse, all the time. So I'm going to give this guy a name. And so we're just going to define h, which is g composed with b prime, inverse, so that theta i is simply h of Xi transpose beta. OK. I could give it a fancier name, you know, but let's just call that the h function. And something which is nice about this h function is what happens if g is the canonical link. What is the canonical link? So what is it canonical to? A canonical link is canonical to a particular distribution in the canonical exponential family, right? A canonical exponential family is completely characterized by the function b. Which means that if I want to talk about the canonical link, all I need to tell you is how it depends on b. So what is g as a function of b? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: b prime inverse, right? So g is equal to b prime inverse, which means that g composed with b prime is just the identity. So h is the identity. So h of Xi transpose beta is simply Xi transpose beta. And it's true that the way we introduced the canonical link was just as the function for which we model theta i directly as Xi transpose beta, which we can read off here, right? So theta i is simply Xi transpose beta. So now, for example, if I go back to my log-likelihood-- so if I look at the log-likelihood, the log-likelihood is the sum of the logs of the densities. So it's the sum from i equal 1 to n of log of exponential of Yi theta i minus b of theta i, divided by phi, plus c of Yi, phi. So this last term does not depend on theta. So I have two things. First of all, the log and the exponential are going to cancel each other. And second, I actually know that theta is just a function of beta, and it has this form: theta i is h of Xi transpose beta. That's my modeling assumption. So this is actually equal to the sum from i equal 1 to n of Yi times h of Xi transpose beta, minus b of h of Xi transpose beta, divided by phi. And then I have, again, this function c of Yi, phi, which again won't matter. Because when I try to maximize this thing, this is just playing the role of a constant that shifts the entire function. In particular, your max is going to be exactly where it was. OK? So this thing is really not going to matter for me. I'm keeping track of it. And actually, if you look here, it's gone, right? It's gone, because it does not matter. So let's just pretend it's not here, because it won't matter when I'm trying to maximize the likelihood. OK? That's why the slide says "up to a constant term." That's the constant term. All right, any questions? All I'm doing here is rewriting my likelihood, which was a function of the theta i's. If I had one free theta i per observation, again, this would not help me very much. But if I assume that they are all linked together, by saying that theta i is of the form Xi transpose beta-- or h of Xi transpose beta if I'm not using the canonical link-- then I can hope to do some estimation. And so, again, if I have the canonical link, h is the identity, so I'm left only with Yi times Xi transpose beta.
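So, up to the constant c(Yi, phi) term, the two forms of the log-likelihood just derived read (my transcription of the board):

```latex
\ell_n(\beta) = \sum_{i=1}^n \frac{Y_i\, h(X_i^\top\beta) - b\big(h(X_i^\top\beta)\big)}{\phi} + \text{const},
\qquad\text{and for the canonical link } (h = \mathrm{id}):\quad
\ell_n(\beta) = \sum_{i=1}^n \frac{Y_i\, X_i^\top\beta - b(X_i^\top\beta)}{\phi} + \text{const}.
```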
And then I have b of Xi transpose beta, and not b composed with h, because h is the identity-- which is fairly simple, right? Why is it simple? Well, let's actually focus on this guy for one second. So let me write it down, so we know what we're talking about. So we just showed that the log-likelihood, when I use the canonical link-- so that h is equal to the identity-- actually takes this form. And it depends on a bunch of stuff, but let's just make it depend only on the parameter that we care about, which is beta, all right? So this is of the form ln of beta, and that's equal to what? It's the sum from i equal 1 to n of Yi Xi transpose beta minus-- let me put the phi here-- minus b of Xi transpose beta. OK. And phi, we know, is some known positive term. So again, optimizing a function plus a constant, or optimizing a function times a constant, is not going to change much either. So it won't really matter whether this phi is here or not. But let's just think about what this function looks like. I'm trying to maximize a function. I'm trying to maximize a log-likelihood. If it looked like this, that would be a serious problem. But we can do a basic, back-of-the-envelope guess of what the variations of this function are. This first term here-- as a function of beta, what kind of function is it? AUDIENCE: Linear. PHILIPPE RIGOLLET: It's linear, right? This is just Xi transpose beta. If I multiply beta by 2, I get twice the value. If I add something to beta, it just gets added. So it's a linear function of beta. And so this thing is both convex and concave. In the one-dimensional case-- so think about p as being one-dimensional, so beta is just one number-- those are the functions that look like this, right? Those are linear functions. They are both convex and concave. So this term is not going to matter when it comes to the convexity of my overall function, because I'm just adding something which is a line. And so if I started with convex, it's going to stay convex. If I started with concave, it's going to stay concave. And if I started with something which is both-- or, for that matter, neither-- adding this linear term is not going to change that either. So if I want to understand what my function looks like, I need to understand what b of Xi transpose beta does. Again, the Xi transpose beta inside-- no impact. It's a linear function. In terms of convexity, it's not going to play any role. So I really need to understand what my function b looks like. What do we know about b again? So we know that b prime of theta is equal to mu, right? Well, the mean of a random variable in a canonical exponential family can be a positive or negative number. This really does not tell me anything. That can be really anything. However, if I look at the second derivative of b, I know that this is what? This is the variance of Y divided by phi. Phi was my dispersion parameter. The variance was equal to phi times b prime prime. So we know that if the distribution is not degenerate-- meaning that it does not put all its mass at a single point-- this thing is actually positive. And clearly, unless you have some crazy stuff happening, with phi being equal to 0 or anything that's not normal, you will not be degenerate. So this thing is strictly positive.
And we've said several times that if b prime prime is positive, then, since that's the derivative of b prime, it means that b prime is increasing. And b prime increasing is just the same thing as saying that b is convex, all right? And since the inequality here is strict, that actually implies that b is strictly convex. Strictly convex means there's always some curvature everywhere-- which a linear function, for example, does not have. So now I have this thing that's linear, minus something that's strictly convex. The negative of something convex is concave. So the whole thing is linear plus concave, so it is concave. So I know, just by looking at this, that ln of beta-- which, of course, is a function on Rp, but if I saw it living on R1, it would look like this, and if I saw it living on R2, it would look like a dome, like this. And the fact that the concavity is strict is also telling me that there is actually a unique maximizer-- a unique maximizer in Xi transpose beta, but not in beta necessarily. We're going to need extra assumptions for that. OK. So this is what I say here. The log-likelihood is strictly concave-- and strictly concave in beta only under extra assumptions on the Xi's. Because, of course, think of what happens if the entries of Xi are all the same. So if Xi is equal to 1, 1, 1, 1, 1, then Xi transpose beta is just the sum of the entries of beta. The log-likelihood will be strictly concave in that sum, but certainly not in the individual entries. OK. So I need something extra on my Xi's for this to happen, just like we needed the matrix capital X in the linear regression case to be of full rank, so we could actually identify what beta was. OK. It's going to be exactly the same thing. So here, this is when we have this very specific parametrization. But it may not be the case if we change the parameter beta into something else. OK. So here, the fact that we use the canonical link, et cetera, everything actually works to our advantage, so that everything becomes strictly concave, and we know exactly what's happening. All right, so I understand I went a bit fast on playing with convex and concave functions. This is not the purpose of this class. You know, I could spend a lecture telling you: if I add two concave functions, the result remains concave; if I add a concave and a strictly concave function, the result remains strictly concave. And we could spend time proving this. This was just for you to get an intuition as to why this is correct, but we don't really have time to go into too much detail. One thing you can do-- for a strictly concave function in one dimension, all I need is that the second derivative is strictly negative, right? That was the analytic definition we had for strict concavity. So if this were in one dimension, the log-likelihood would look like this: Yi times Xi times beta-- now beta is just one number-- and then minus b of Xi times beta, and this is all over phi. You take second derivatives. The first part is linear in beta, so it's going to go away. And I'm just going to be left with: the second derivative with respect to beta is minus b prime prime of Xi beta, times Xi squared, divided by phi. And b prime prime of Xi beta-- the second derivative of b, which I know is positive because of the variance computation we have here-- times Xi squared, divided by phi, is clearly positive, unless Xi is 0, which is the degenerate case. So the second derivative of the log-likelihood is indeed strictly negative. And so that would all be fine. That's for one dimension.
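If you want to convince yourself numerically, here is a quick check of this one-dimensional concavity argument in the Bernoulli case (an illustrative toy of my own, not code from the course):

```python
import numpy as np

# Quick numerical check of the concavity argument in the one-dimensional
# Bernoulli case, where b(theta) = log(1 + e^theta) and phi = 1.
# Everything here is an illustrative toy, not code from the course.

rng = np.random.default_rng(0)
x = rng.normal(size=20)              # scalar covariates x_i
y = rng.binomial(1, 0.5, size=20)    # toy Bernoulli responses y_i

def log_lik(beta):                   # l_n(beta) = sum_i [y_i x_i beta - b(x_i beta)]
    t = x * beta
    return np.sum(y * t - np.log1p(np.exp(t)))

# Second derivative by finite differences at a few points: it should be
# strictly negative everywhere (strict concavity), since not all x_i are 0.
for b in [-2.0, 0.0, 3.0]:
    eps = 1e-4
    d2 = (log_lik(b + eps) - 2 * log_lik(b) + log_lik(b - eps)) / eps**2
    print(b, d2)                     # each printed second derivative is < 0
```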
If I wanted to do this in higher dimensions, I would have to say that the Hessian is a negative definite matrix. And that's maybe a bit beyond what this course is about. So in the rest of this chapter, I will do what I did not do when we talked about maximum likelihood estimation: we're going to actually show how to do this maximization, right? So here, we know that the function is concave, but what it looks like specifically depends on what b is. And for different b's, I'm going to have different things to do. Just like when I was talking about maximum likelihood estimation: if I have a concave log-likelihood function, I can optimize it, but depending on what the function is, some algorithms may work better on some functions than on others. Now, here I don't have arbitrary functions. I know that b is the cumulant generating function of a canonical exponential family, and there is a way for me to leverage that. So not only is there the b part, but there's also the linear part. And if I start trying to use that, I'm actually going to be able to devise very specific optimization algorithms. And the way I'm going to do this is by starting from simple black-box optimization, to which I can technically feed any function; but it's going to turn out that the iterations of these iterative algorithms look very familiar once we plug in the particular form of b, of the log-likelihood that we have for this problem. And so the three methods we're going to talk about, going from most black-box-- meaning you can basically stuff in any concave function and it's going to work-- all the way to "this works specifically for generalized linear models," are as follows. First, the Newton-Raphson method. Who's already heard about the Newton-Raphson method? So there's probably some people who actually learned this algorithm without even knowing the word algorithm, right? Typically, it's presented as a way of finding roots of functions. But finding the root of the derivative of a function is the same as finding a critical point-- here, the maximum-- of the function. So that's the first, black-box, method. I mean, it's pretty old. Then there's something that's more specific to what we're doing: this Newton-Raphson method is going to involve the Hessian of our log-likelihood, and since we know something about the Hessian for our particular problem, we're going to be able to move on to Fisher-scoring. And the word Fisher here is coming exactly from Fisher information: the Hessian is going to be replaced by the Fisher information. And finally, we will talk about iteratively re-weighted least squares. And that's not for any function. That's really for when we use the fact that there is this linear dependence on the Xi. And this is essentially going to tell us: well, you can use least squares for linear regression; here, you can use least squares too, but locally, and you have to iterate. OK. And this last part is essentially a trick by statisticians to be able to solve the Newton-Raphson updates without having dedicated software for them, just by being able to reuse some least squares software. OK. So you know, we've talked about this many times. I just want to make sure that we're all on the same page here.
We have a function f. We're going to assume that it has two derivatives, and it's a function from Rm to R. Its first derivative is called the gradient. That's the vector that collects all the partial derivatives with respect to each of the coordinates. It's of dimension m, of course. And the second derivative is an m by m matrix. It's called the Hessian. In the ith row and jth column, you see the second partial derivative with respect to the ith component and the jth component. OK. We've seen that several times. This is just multivariable calculus. But the notation here is maybe slightly different, because I want to keep track of f. So when I write the gradient, I write nabla sub f. And when I write the Hessian, I write H sub f. And as I said, if f is strictly concave, then Hf of x is negative definite. What that means is: if I take any x0 in Rm and any nonzero x in Rm, then x transpose, times Hf of x0, times x, is strictly negative. That's what it means to be negative definite. OK? So this is like a quadratic form, and I want it to be negative for all values of x0 and x, both of them. That's very strong, clearly. But for us, this is what happens, just because of the properties of b-- well, at least the fact that it's negative, less than or equal to; if I want it to be strictly less, I need some properties on X. And then I will call the Hessian map the function that maps x to this matrix Hf of x. So that's just the second derivative at x. Yeah. AUDIENCE: When you what are [INAUDIBLE]?? PHILIPPE RIGOLLET: Where do [INAUDIBLE]?? Oh, yeah. I mean, you know, you need to be able to apply Schwarz's lemma. Let's say two continuous derivatives-- that's smooth enough. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: No, that's fine. OK.
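As a quick computational aside, here is one way to check the negative definiteness condition just stated numerically (a toy, made-up matrix, purely for illustration):

```python
import numpy as np

# One way to check the negative definiteness condition numerically:
# H is negative definite exactly when all its eigenvalues are strictly
# negative, equivalently x^T H x < 0 for every nonzero x. Toy matrix only.

H = -np.array([[2.0, 0.5],
               [0.5, 1.0]])              # a made-up candidate Hessian
eigvals = np.linalg.eigvalsh(H)           # eigenvalues of a symmetric matrix
print(eigvals, np.all(eigvals < 0))       # True => negative definite

x = np.array([0.3, -1.2])                 # an arbitrary nonzero test vector
print(x @ H @ x)                          # strictly negative, as expected
```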
So how does the Newton-Raphson method work? Well, what it does is that it forms a quadratic approximation to your function, and that's what it optimizes at every single step. OK. And the reason is that we have a closed-form solution for finding the maximum of a quadratic function. So if I give you a function that's of the form ax squared plus bx plus c, you know exactly a closed form for its extremum. But if I give you any function-- yeah, so here, it's all about maxima. I'm sorry; if you're confused by me using the word minimum, just assume that it was the word maximum. So this is how it works. OK. If I give you a function which is concave and quadratic-- OK, so it's going to look like this, so that's of the form ax squared, where a is negative, of course, plus bx plus c-- then you can solve it. You can take the derivative of this guy, set it equal to 0, and you have an exact equation for the value of x that realizes the maximum. Now, if I give you an arbitrary concave function, is that still clear? I mean, if I tell you the function that we have here is of the form ax minus b of x, then finding the maximizer means inverting b prime. But how do I do that exactly? It's not clear. And so what we do is we make a quadratic approximation, which should be approximately true locally, right? So if I'm at this point here, I'm going to say, oh, I'm close to being that quadratic function. And if I'm at this point here, I'm going to be close to being that quadratic function. And a quadratic function I can actually optimize. And so if I'm not moving too far from one point to the next, I should actually get somewhere. So here's how the quadratic approximation works. I'm going to write the second order Taylor expansion. OK. And that's just going to be my quadratic approximation. It says: f of x, when x is close to some point x0, is close to f of x0, plus the gradient of f at x0, transpose, times x minus x0, plus one half, times x minus x0, transpose, times the Hessian Hf at x0, times x minus x0. So that's just my second order Taylor expansion, the multivariate one. And let's say x0 is this guy. Now, if I wanted to set the derivative of f itself equal to 0, I would just have to solve f prime of x equals 0, meaning that x has to be f prime inverse of 0. And apart from being some notation manipulation, this is really not helping me, because I don't know what f prime inverse of 0 is in many instances. However, if f has a very specific form-- something that depends on x in a very specific way, just a linear term and then a quadratic term-- then I can actually do something. So let's forget about the exact approach, and rather than maximizing f, let's just maximize the right-hand side. And how do I do this? Well, I just set the gradient equal to 0. So what is the gradient? The first term does not depend on x, so it contributes 0. Then, what is the gradient of the next term, the gradient of f at x0, transpose, times x minus x0? So I have a function of the form b transpose x. What is the gradient of this thing? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: I'm sorry? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: I'm writing everything in column form, right? So it's just b. OK. And here, what is b? Well, it's the gradient of f at x0. And the term gradient of f at x0, transpose, times x0, is just a constant, so it's going away as well. And then I'm looking at the derivative of the quadratic term, which is like H times x minus x0 squared. So when I take the derivative, a factor 2 pops out and cancels the one half, and I'm left with plus Hf times x minus x0. OK. So that's the gradient, and I want it to be equal to 0. So that means that the maximizer is the x* that satisfies this, which is equivalent to: Hf times x* is equal to Hf times x0, minus the gradient of f at x0. Now, this is a much easier thing to solve. What is this? This is just a system of linear equations, right? I just need to find the x* such that, when I pre-multiply it by a matrix, I get this vector on the right-hand side. This is just something of the form Ax equals b. And I have many ways I can do this. I could do Gaussian elimination, or I could use Spielman's fast Laplacian solvers if H had some particular properties. I mean, there's huge activity in terms of how to solve those systems. But let's say I have some time and it's not a huge problem-- I can just use linear algebra. And linear algebra tells me that x* is equal to Hf inverse times this right-hand side, where the two Hf's cancel, so this is actually equal to x0 minus Hf inverse times the gradient of f at x0. And that's what's called a Newton iteration. I started at some x0, where I made my approximation.
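In code, a single Newton step looks something like this, written exactly the way it is derived on the board (a generic sketch, not tied to any particular model):

```python
import numpy as np

# One Newton step: solve the linear system H_f(x0) d = grad_f(x0),
# then set x* = x0 - d. grad_f and hess_f stand for whatever concave
# function is being maximized.

def newton_step(x0, grad_f, hess_f):
    g = grad_f(x0)
    H = hess_f(x0)
    d = np.linalg.solve(H, g)      # e.g. Gaussian elimination under the hood
    return x0 - d                  # x* = x0 - H_f(x0)^{-1} grad_f(x0)

# Example: maximizing the concave quadratic f(x) = -||x - a||^2.
# A single Newton step lands exactly on the maximizer a.
a = np.array([1.0, 2.0])
grad = lambda x: -2.0 * (x - a)
hess = lambda x: -2.0 * np.eye(2)
print(newton_step(np.zeros(2), grad, hess))   # -> [1. 2.]
```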
So the Newton iteration is telling me: starting from this x0, if I wanted to fully optimize the quadratic approximation, I would just take this x*. That's this guy. And then I can use this x* as my new x0 and do it again, and again, and again. And those are called Newton iterations. They're basically the workhorse of interior point methods, for example, and of a lot of optimization algorithms. And that's what you can see here: x* is equal to x0 minus the inverse Hessian times the gradient. We briefly mentioned gradient descent at some point, to optimize a convex function, right? And if I wanted to use gradient descent-- again, H is a matrix, but if I wanted to think of H as being a scalar, would it be a positive or a negative number? Yeah. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Why? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah. So that would be this. So I want to move against the gradient to do what? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: To minimize. But I'm maximizing here, right? Everything is maximized, right? So I know that H is actually negative definite. As a scalar, it's a negative number. So you have the same confusions as I do. We're maximizing a concave function here. So H is negative. So this is something of the form x0 plus something positive times the gradient. And this is what gradient ascent, rather than descent, would look like. And all Newton is saying is: don't take the gradient for granted as the direction in which you want to go. It says, do a slight change of coordinates before you move, according to what your Hessian looks like, all right? And those are called second order methods. They require knowing what the Hessian is. But they're actually much more powerful than gradient descent, because they're using all of the local geometry of the problem. All of the local geometry of your function is completely encoded in this Hessian. And in particular, it tells you where to go slower in some places and faster in other places. Now, in practice, for, say, modern large scale machine learning problems, inverting this matrix H is extremely painful. It takes too much time. The matrix is too big, and computers cannot do it. And people resort to what are called quasi-Newton methods, which essentially try to emulate what this inverse Hessian is. And there are many ways you can do this. Some of them use gradients that you've collected in the past. Some of them just say, well, let's just pretend H is diagonal. There's a lot of things you can do to play around this and not actually have to invert this matrix. OK? So once you have this: you started from x0, and it tells you which x* you get as the maximizer of the local quadratic approximation to your function. You can actually just iterate that, all right? So you start at some x0 somewhere. And then once you get to some xk, you just do the iteration which is described, which is: find xk plus 1, the maximizer of the local quadratic approximation to your function at xk, and repeat until convergence. OK. So if this were an optimization class, we would prove that convergence actually, eventually, happens for a strictly concave function. This is a stats class, so you're just going to have to trust me that this is the case. And it's globally convergent, meaning that you can start wherever you want, and it's going to work under minor conditions on f.
And in particular, those conditions are satisfied for the log-likelihood functions we have in mind. OK. And it converges at an extremely fast rate. Usually it's quadratic convergence, which means that every time you make one step, you improve the accuracy of your solution by two digits. If that's something you're vaguely interested in, I highly recommend that you take a class on numerical optimization. It's a fascinating topic. Unfortunately, we don't have much time, but it's becoming more and more intertwined with high dimensional statistics and machine learning. I mean, it's an algorithms class, typically. But it's much more principled. It's not a bunch of algorithms that solve a bunch of problems. There's basically one basic idea, which is: if I have a convex function, I can actually minimize it; if I have a concave function, I can maximize it. And everything evolves around that same theme. So let's stare at this iterative step for a second and pause. And let me know if you have any questions. OK. So, of course, in a second we will plug in the log-likelihood. This is just a general recipe for a general function f. But in a second, f is going to be ln. OK. So if I wanted to implement this for real, I would have to compute the gradient of ln at a point xk. And I would have to compute the Hessian at that point and invert it. OK. So this is just the basic algorithm. And this, as you can tell, used in no place the fact that ln was the log-likelihood associated to some canonical exponential family in a generalized linear model. This never showed up. So can we use that somehow? Optimization, for the longest time, was about making your methods as general as possible, culminating maybe in interior point method theory and conic programming in the mid-'90s. And now what optimization is doing is that it's [INAUDIBLE] very general. It says, OK, if I want to go fast, I need to exploit as much structure about my problem as I can. And the beauty is that we, as statisticians or machine learning people, do have a bunch of very specific problems that we want optimizers to solve. And they can make things run much faster. But this did not require waiting until the 21st century. Problems with very specific structure arose already in this generalized linear model. So what do we know? Well, we know that this log-likelihood is really something that comes up when we're trying to replace an expectation by an average, and then doing something fancy, right? That was our statistical hammer. And remember, when we introduced likelihood maximization, we said: what we really want to do is to minimize the KL, right? That's the thing we wanted to minimize-- the KL divergence between two distributions, the true one and the one that's parameterized by some unknown theta. And we're trying to minimize that over theta. And we said, well, I don't know what this is, because it's an expectation with respect to some unknown distribution. So let me just replace the expectation with respect to my unknown distribution by an average over my data points. And that's how we justified the log-likelihood maximization problem. But here, I might actually be able to compute this expectation, at least partially, where I need it. And what we're going to do is to say, OK-- since at a given point xk, say, let me call it here theta, I'm trying to find the inverse of the Hessian of my log-likelihood, right?
So if you look at the previous slide, as I said, we're going to have to compute the Hessian H sub ln at xk, and then invert it. But let's forget about the inversion step for a second. We have to compute the Hessian. This is the Hessian of the function we're trying to maximize, the log-likelihood. But what if I could replace it by the Hessian of the function I wish I were actually minimizing, which is the KL, right? That would be really nice. And what happens is that, since I'm trying to find this at a given xk, I can always pretend that this xk that I have at my current iteration is the true parameter, and compute my expectation with respect to that guy. And what happens is that I know that when I compute the expectation of the Hessian of the log-likelihood at a given theta-- taking the expectation with respect to that same theta-- what I get out is negative Fisher information. The Fisher information was defined in two ways: as the expectation of the square of the derivative, or as negative the expectation of the second derivative of the log-likelihood. And so now, I'm doing some sort of a leap of faith here. Because the current xk-- the current theta at which I'm doing this optimization-- is of course not the true one; I'm just pretending that it is. But what's going to change by doing this is that it's going to make my life easier. Because when I take expectations-- we'll see that when we look at the Hessian, the Hessian is essentially the derivative of, say, a product, so it's going to be the sum of two terms, right? The derivative of u times v is u prime v plus u v prime. One of those two terms is actually going to have expectation 0. And that's going to make my life very easy when I take expectations: basically one term is going to go away. And so in particular, my formula, just by virtue of taking this expectation before inverting the Hessian, is going to shrink in size by half. OK. So let's see how this works. You don't have to believe me. Is there any question about this slide? You guys remember when we were doing maximum likelihood estimation and Fisher information and the KL divergence, et cetera? Yeah. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Because that's what we're really trying to minimize. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah. So there's something you need to trust me with, which is that the expectation of H of ln is actually H of the expectation of ln, all right? Yeah, it's true, right? Because taking derivatives is a linear operation. And we actually used that several times, when we said the expectation of the partial of l with respect to theta is equal to 0. Remember we did that? That's basically what we used, right? AUDIENCE: ln is the likelihood. PHILIPPE RIGOLLET: It's the log-likelihood. AUDIENCE: Log-likelihood, [INAUDIBLE] OK. When we did Fisher [INAUDIBLE], we did the likelihood of [INAUDIBLE] observation. PHILIPPE RIGOLLET: Yeah. AUDIENCE: Why is it ln in this case? PHILIPPE RIGOLLET: So actually, ln is typically not normalized. So I really should talk about ln over n. OK. But let's see that, OK? So if I have IID observations, that should be pretty obvious. OK. So if I have IID X1 through Xn with density f theta, and if I look at the sum from i equal 1 to n of log f theta of Xi-- as I said, I need to actually have a 1 over n here-- when I look at the expectation, they all have the same expectation, right?
So this is actually, indeed, equal to negative KL plus a constant. OK? And the negative KL appears because-- sorry, if I look at the expectation-- the expectation of this average is just the expectation of one of the terms, all right? So I just take the expectation under theta. OK? Agree? Remember, the KL between the true distribution-- call its density f-- and p theta was the expectation, under the truth, of log of f over f theta. So what shows up here is, indeed, minus the expectation of log f theta, plus the expectation of log f, which is just a constant with respect to theta. The constant doesn't matter. OK? So this is what shows up here. And the fact that I have this 1 over n doesn't change anything, because they're IID. Now, when I have things that are not IID-- because what I really have is Y1 through Yn, where Yi has density f theta i, which is just the conditional density given Xi-- then I can still write this. And when I look at the expectation of this guy, what I'm going to be left with is just 1 over n times the sum from i equal 1 to n of the expectation of log f theta i of Yi. And it's basically the same thing, except that I have this averaged expectation in front. And I didn't tell you this, because I only showed you what the KL divergence was between two distributions. But here, I'm telling you what the KL is between two products of distributions that are independent but not necessarily identically distributed. That's what shows up, just because it's a product of things, so when you take the log, it's just going to be a sum. Other questions? All right, so what do we do here? Well, as I said, now we know that the expectation of the Hessian is negative Fisher information. So rather than putting the inverse Hessian in my iterates for Newton-Raphson, I'm just going to put the inverse Fisher information. And remember, it had a minus sign in front. So I'm just going to pick up a plus sign now, just because I, the Fisher information, is negative the expectation of the Hessian. And this has, essentially, the same convergence properties. And it just happens that it's often easier to compute I than the Hessian of ln. And that's it. That's really why you want to do this. Now, you might say that, well, if I use more information, I should do better, right? But it's actually not necessarily true, for several reasons. Let's say that one is probably the fact that I did not really use more information. At every step, when I was computing this thing at xk, I pretended that the true distribution was the one with parameter theta k. And that was not true. This is only approximately right when theta k becomes close to the true theta. And so, in a way, what I gained I lost again by making this approximation. It's really just a matter of simpler computation. So let's just see it on a particular example. Actually, in this example, it's not going to look much simpler. It's actually going to be the same. All right, so the Bernoulli example. We know that Bernoulli belongs to the canonical exponential family, so essentially all I need to tell you is what b is. And b of theta for Bernoulli is log of 1 plus e to the theta, right? We computed that. OK. And so when I look at my log-likelihood, it is going to look like the sum from i equal 1 to n of Yi times-- OK, so here I'm going to use the canonical link-- so it's Yi times Xi transpose beta, minus log of 1 plus exponential of Xi transpose beta. And phi for this guy is equal to 1. Is it clear for everyone what I did? OK.
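Before going through the example, let me record the update just described. Writing I(theta) for the Fisher information of the whole sample, so that the expected Hessian of the log-likelihood is minus I(theta), the Fisher-scoring iteration is (my transcription of the slide):

```latex
% Newton-Raphson: theta^{(k+1)} = theta^{(k)} - H_{\ell_n}(\theta^{(k)})^{-1} \nabla\ell_n(\theta^{(k)})
% Fisher-scoring: replace H_{\ell_n} by its expectation, E_\theta[H_{\ell_n}(\theta)] = -I(\theta)
\theta^{(k+1)} \;=\; \theta^{(k)} \;+\; I\big(\theta^{(k)}\big)^{-1}\, \nabla \ell_n\big(\theta^{(k)}\big).
```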
So remember the density-- the PMF was exponential of Y theta minus log of 1 plus e to the theta. There was actually no normalization term. That's just the density of a Bernoulli. And the theta is actually log of p over 1 minus p. And since p is the expectation, this is actually also giving me my canonical link, which is the logit link. We saw that last time. And so if I take the log of this guy, sum over n, and replace theta by Xi transpose beta, which is what the canonical link tells me to do, I get this guy. Is that clear for everyone? If it's not, please redo this step on your own. OK. So I want to maximize this function-- the one over there on the first line-- as a function of beta. And to do this, I want to use either Newton-Raphson or what I called Fisher-scoring. So Fisher-scoring is the second one, where you replace the Hessian by negative Fisher information. So I need these two things. And so I first take the gradient. OK. So let's take the gradient of ln. So the gradient of ln is going to be, well, a sum-- and here, the first term is of the form Yi, a scalar, times Xi transpose beta. The gradient of b transpose x with respect to x is just b, so the gradient of Xi transpose beta is Xi, and here I have just Yi times Xi. So that's of the form Yi, which is a scalar, times Xi, which is a vector. Now, what about the second term? Well, here I have a composition of functions, so I'm going to use the usual rule, the chain rule, right? The derivative of the log is 1 over this guy, and then I need the gradient of what's inside. So the 1 goes away, and I apply the chain rule again: I get e to the Xi transpose beta, times the gradient of Xi transpose beta, which is Xi. So in my gradient, I can actually factor out all the Xi's, and it's going to look like this. My gradient is a weighted sum of the Xi's. This will always happen when you have a generalized linear model. And that's pretty clear: where did the Xi show up? Whether it's from this term or that term, the Xi came from the fact that when I take the gradient of Xi transpose beta, this vector Xi comes out. It's always going to be the thing that comes out. So I will always have something that looks like a sum, with some weights, of the Xi's. Now, when I look at the second derivative-- same thing, I'm just going to take the derivative of this guy. Since nothing else depends on beta here or here, I just have to take the derivative of this thing. So if I look now at the Hessian of ln as a function of beta, I'm going to have the sum from i equal 1 to n of, well, Yi times-- what is the derivative of Yi with respect to beta? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: What? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah. 0. OK? It doesn't depend on beta. I mean, its distribution does, but Yi itself is just a number, right? So this is 0. So I'm going to get the minus, and then I have, again, the chain rule that shows up. So I need to find the derivative of x over 1 plus x. What is the derivative of x over 1 plus x? Actually, I don't even remember-- OK, so that's 1 over, 1 plus x, squared. So that gives me minus-- sorry, 1 divided by 1 plus e to the Xi transpose beta, squared, times the derivative of the exponential, which is e to the Xi transpose beta, and again an Xi. And then I have this other Xi that shows up. But since I'm looking for a matrix, I'm going to have Xi Xi transpose, right? OK?
AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: So I know I'm going to need something that looks like a matrix in the end. And so one way you want to think about it is: this is going to spit out an Xi, there's already an Xi here, so I'm going to have something that looks like Xi, and I'm going to have to multiply it by another vector Xi. And I want it to form a matrix. And so what you need to do is to take an outer product. And that's it. So now, as a result, the updating rule is this. Honestly, this is not the result of anything. I actually just rewrote everything that I had before with theta replaced by beta, because it's just painful to rewrite this entire thing, put some big parentheses and a minus 1 here, and then the gradient, which is this thing here. So as you can imagine, this is not super nice. Actually, what's interesting is that at some point I mentioned quasi-Newton methods. Some of them are doing almost exactly this. They're saying, oh, at each iteration, I'm actually going to just take some of those terms. If I'm at iteration k, I'm just going to sum those guys up to k, rather than going all the way to n and looking at every one. So you're looking at your observations one at a time, based on where you were before. OK. So you have a matrix, and you need to invert it. So if you want to be able to invert it, you need to make sure that the sum, with those weights, of Xi outer Xi-- or Xi Xi transpose-- is invertible. So that's a condition that you need to have. Well, you don't strictly have to, because technically you don't need to invert; you just need to solve the linear system. But invertibility is actually guaranteed in most cases if n is large enough. All right, so everybody sees what we're doing here? OK. So that's Newton-Raphson. If I wanted to actually do Fisher-scoring, all I would need to do is replace the Hessian here by its expectation, where I pretend that the beta I have at iteration k is the true one. So what is the expectation of this thing? And when I say expectation here, I'm always talking about the conditional expectation given X. The only distributions that matter-- that have mattered in this entire chapter-- are the conditional distributions of Y given X. And the conditional expectation of this thing given X is what? It's itself. It does not depend on Y. It only depends on the X's. So conditionally on X, this thing, as far as we're concerned, is completely deterministic. So it's actually equal to its expectation. And so, in this particular example, there's no difference between Fisher-scoring and Newton-Raphson. And the reason is that the Hessian no longer depends on the Yi's. OK? This slide is just repeating some stuff that I've said. OK. So I think this is probably-- OK, let's go through this, actually. At some point, I said that Newton-Raphson-- do you have a question? AUDIENCE: Yeah. When would the gradient-- sorry, the Hessian ever depend on Yi? Because it seems like Yi is just-- or at least when you have a canonical link, the log-likelihood is just [INAUDIBLE] to Yi Xi [INAUDIBLE] theta, and that's the only place Y shows up. So [INAUDIBLE] derivative [INAUDIBLE] never depend on Y? PHILIPPE RIGOLLET: Not when you have a canonical link. AUDIENCE: So if it's not a [INAUDIBLE] there is no difference between-- PHILIPPE RIGOLLET: No. AUDIENCE: OK. PHILIPPE RIGOLLET: Yeah. So yeah, maybe I wanted you to figure that out for yourself. OK. So, Yi times Xi transpose beta.
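Putting the gradient and Hessian just computed into the Newton-Raphson update, a minimal sketch for the Bernoulli case with the canonical link might look like this (illustrative code of my own, not the course's implementation):

```python
import numpy as np

# Newton-Raphson for the Bernoulli GLM with the canonical (logit) link,
# plugging in the gradient and Hessian computed above (phi = 1).

def newton_logistic(X, y, n_iter=20):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))    # e^t / (1 + e^t)
        grad = X.T @ (y - mu)                     # sum_i (y_i - mu_i) x_i
        w = mu * (1.0 - mu)                       # e^t / (1 + e^t)^2
        H = -(X.T * w) @ X                        # -sum_i w_i x_i x_i^T
        beta = beta - np.linalg.solve(H, grad)    # Newton update
    return beta

# Tiny usage example on simulated data:
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
true_beta = np.array([1.0, -0.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta))))
print(newton_logistic(X, y))                      # close to true_beta
```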
So essentially, when I have a general family, what he's referring to is that this term is just b of Xi transpose beta. So I'm going to take some derivatives, and there's going to be something complicated coming out of this, but I'm certainly not going to have any Yi showing up. The only place where Yi shows up is here. Now, if I take two derivatives of this term, it's gone, because it's linear: the first derivative keeps the Yi around, and the second derivative makes it go away. The only way Yi actually survives into the Hessian is when I have an h here. Because if I have an h, then when I take second derivatives, this term still depends on beta, which means that this Yi term is not going to disappear. I believe we'll see an example of that, or maybe I removed it. I'm not sure, actually. I think we will see an example. So let us do Iteratively Re-weighted Least Squares, or IRLS-- which, I've actually recently learned, is a term that, even though it was defined in the '50s, people still feel free to use as a name for their new algorithms, which have nothing to do with this. Here, it is really something where you actually do iteratively re-weighted least squares. OK. Let's go through quickly what iteratively re-weighted least squares is going to be. The way the steps that we had here showed up-- let's say those guys, x*-- is when we were actually solving this linear system, right? That was the linear system we were trying to solve. But solving a linear system can be done by minimizing, right? If I have A and b, solving Ax equals b is the same as minimizing the norm of Ax minus b squared over x. If I can find an x for which it's 0, it means that I've actually solved my problem. And so that means that I can solve linear systems by solving least squares problems. And least squares problems are things that statisticians are comfortable solving. So all I have to do is rephrase this as a least squares problem. OK? And you know, I could just write it directly like this. But there's a way to streamline it a little bit, and that's actually by using weights. OK. So I'll come to the weights-- well, not today, actually, but very soon, all right? So this is just a reminder of what we had. We have that Yi given Xi has a distribution in the canonical exponential family. So that means that the log-likelihood looks like this. Again, this part does not matter to us; this is the form that matters. And we have a bunch of relationships that we actually spent some time computing. The first one is that mu i is b prime of theta i. The second one is that if I take g of mu i, I get this systematic component, Xi transpose beta-- that's the modeling. Now, if I look at the derivative of mu i with respect to theta i, this is the derivative of b prime of theta i with respect to theta i. So that's the second derivative of b, and I'm going to call it Vi. If phi is equal to 1, this is actually the variance. And then we have this function h, which allows me to bypass altogether the existence of this parameter mu. It says: if I want to go from Xi transpose beta all the way to theta i, I have to first apply g inverse, and then b prime inverse. If I stopped midway, I would just have mu. OK? OK. So now, what I'm going to do is apply the chain rule. And I'm going to try to compute the derivative of my log-likelihood with respect to beta.
So, again, the log-likelihood is much nicer when I read it as a function of theta than as a function of beta, but this is basically what we've been doing by hand. You can write it as a derivative with respect to theta first, and then multiply by the derivative of theta with respect to beta. OK. And we know that theta depends on beta as h of Xi transpose beta. OK? I mean, that's basically what we've been doing in the Bernoulli case. We used the chain rule without actually saying it. But here, it's going to be convenient to make it show up explicitly. OK. So when I first take the derivative of my log-likelihood with respect to theta, I'm going to use the fact that my canonical family is super simple. OK. So what I have is that my log-likelihood ln is the sum from i equal 1 to n of Yi theta i minus b of theta i, divided by phi, plus some constant, which will go away as soon as I take my first derivative. So if I take the derivative with respect to theta i of this guy, this is actually going to be equal to Yi minus b prime of theta i, divided by phi. And then I need to multiply by the derivative of theta i with respect to beta. Remember, theta i is h of Xi transpose beta. So the derivative of theta i with respect to beta j is equal to h prime of Xi transpose beta, times the jth coordinate of Xi. Actually, let me just do the gradient of theta i with respect to beta, right? I'm just thinking of theta i as being a function of beta. So what should I get? H prime of Xi transpose beta, times the vector Xi-- which is just the chain rule again. That's the h prime, right? You don't see it well, but there's a prime there-- that's a derivative. OK. We've done that without saying it explicitly. So now, if I multiply those two things, I have this Yi minus b prime of theta i-- which I call by its good name, which is mu i; b prime of theta i is the expectation of Yi conditionally on Xi-- and then I multiply by this thing here. So here, this is written coordinate by coordinate, but I can write it as a big vector when I stack them together. And so what I claim is that this thing here is of the form Y minus mu-- but with some tildes. Because what I did is that I first multiplied everything by g prime of mu i, for each i. OK. So why not? OK. Actually, on this slide, it will make no sense why I do this. I basically multiply by g prime on one side and divide by g prime on the other side. So what I have written so far is that the gradient of ln with respect to beta is the sum from i equal 1 to n of Yi minus mu i, divided by phi, times h prime of Xi transpose beta, times Xi. OK. So I just stacked everything that's here. And now I'm going to start naming things. The first thing I'm going to do is push this phi in here. Now, this part here I'm going to multiply by g prime of mu i, and this part I'm going to divide by g prime of mu i. So there's really nothing that happened here; I just multiplied and divided by g prime. Why do I do this? Well, that's actually going to become clear when we talk about iteratively re-weighted least squares. But now, essentially, I have a new Y and a new mu, so this thing is going to be Y tilde i minus mu tilde i. And this remaining factor here I'm going to call Wi. And I still have the Xi that's there, which means that now the thing that I have here I can write as follows. The gradient of ln at beta is equal to what? Well, I'm going to write it in matrix form. I have the sum over i of something multiplied by Xi.
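In display form, my reconstruction of the slide's notation for the computation so far is:

```latex
\nabla \ell_n(\beta)
  = \sum_{i=1}^n \frac{Y_i - \mu_i}{\phi}\, h'(X_i^\top \beta)\, X_i
  = \mathbb{X}^\top W\,(\tilde Y - \tilde \mu),
\quad
\tilde Y_i = g'(\mu_i)\,Y_i,\ \
\tilde \mu_i = g'(\mu_i)\,\mu_i,\ \
W_i = \frac{h'(X_i^\top \beta)}{\phi\, g'(\mu_i)} .
```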
So I'm going to write it as X transpose, then this diagonal matrix with W1 through Wn on the diagonal and 0 elsewhere, and then my Y tilde minus mu tilde. And remember, X is the matrix-- sorry, it should be a bit [INAUDIBLE]-- with n rows and p columns, with Xi j in row i and column j. And the middle factor is just the diagonal matrix with the Wi's on the diagonal. And then I have Y tilde minus mu tilde. So this is just the matrix rewriting of this formula. All right. It's just saying that if I look at a weighted sum of my Xi's, it's the same thing. When I multiply this vector by my matrix, I get exactly those terms, right? Yi tilde minus mu i tilde, times Wi. And when I take X transpose times this guy, I'm really just getting the sum of the columns of X transpose with those weights. Agree? If I look at this thing here, this is a vector that has n coordinates: Wi times, Yi tilde minus mu i tilde. And I have n of them. So when I multiply X transpose by this guy, I'm just looking at a weighted sum of the columns of X transpose, which is a weighted sum of the rows of X, which are exactly my Xi's. All right, and that's this weighted sum of the Xi's. OK. So here, as I said, the fact that we decided to put this g prime of mu i here and this g prime of mu i there-- we could have not done this, right? We could have just forgotten about the tildes and called it Yi minus mu i, and put everything we don't know into some Wi. So why do I do this? Well, it's because of what happens when I start looking at the Hessian. AUDIENCE: [INAUDIBLE]. PHILIPPE RIGOLLET: Yeah. We'll do that next time. But let's just look quickly at the outcome of the computation of my Hessian. So I compute a bunch of second derivatives. And here, I have two terms, right? Well, he's gone. So I have two terms. And when I take the expectation now, things are actually going to change, right? This thing is actually going to depend on Yi, because I have an h which is not the identity. Oh, no, you're here, sorry. So when I start looking at the expectation-- so I look at the conditional expectation given Xi. The first term here has a Yi minus its expectation. So when I take the conditional expectation, this is going to be 0. The first term goes away when I take the conditional expectation. But this term was actually gone already if we had the canonical link, because the second derivative of h, when h is the identity, is 0. If h is not the identity, h prime prime may not be 0, and so I need the expectation to remove that term. And so now, you know, I work a little bit, and I get this term. That's not very surprising. In the second derivative, I have terms in b prime prime, I have a term in h prime, but squared, and then I have my Xi outer Xi-- Xi Xi transpose-- which we knew we would see. OK. So we'll go through those things next time. But what I want to show you is that once I compute this, I can actually show that in this product that showed up-- I had b prime prime times h prime squared-- one of those factors is actually 1 over g prime. And so I can rewrite it as one of the h primes-- because I had a square-- divided by g prime. And now, I have this Xi Xi transpose. So if I had not put the g prime in the W that I defined completely artificially, I would not be able to call this guy Wi, which is exactly what it is from this board. And now that this guy is Wi, I can actually write this whole thing as X transpose W X. OK?
And that's why I really wanted my W to have this g prime of mu i in the denominator: because now I can actually write a term that depends on W. Now, you might say, how do I reconcile those two things? What the hell are you doing? And what the hell I'm doing is essentially saying that if you write beta k plus 1 according to the Fisher-scoring iterations, you can actually write it as just this term here, which is of the form X transpose X, inverse, X transpose Y-- except that I actually squeezed in these W's. And that's exactly weighted least squares, applied to this particular guy. So we'll talk about those weighted least squares. But remember, least squares gives you something of the form: beta hat is X transpose X, inverse, X transpose Y. And here it's basically the same thing, except that I squeeze in some W after each X transpose. OK. So that's how we're going to solve it. I don't want to go into the details now, mostly because we're running out of time. Are there any questions?
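To preview where this is going, here is a minimal sketch of the resulting IRLS loop for logistic regression (my own illustrative code; the "working response" z is a standard construction, and the names and safeguards are mine):

```python
import numpy as np

# IRLS for logistic regression: each iteration is a weighted least squares
# solve, beta <- (X^T W X)^{-1} X^T W z, applied to a working response z.

def irls_logistic(X, y, n_iter=20):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)               # W_i; here 1/g'(mu_i), since phi = 1
        w = np.clip(w, 1e-10, None)       # guard against vanishing weights
        z = eta + (y - mu) / w            # working response: eta + g'(mu)(y - mu)
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X, XtW @ z)   # weighted least squares
    return beta
```

With the canonical link, each weighted least squares solve reproduces exactly the Newton-Raphson update from before; the point is just that it can be carried out with ordinary least squares software.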
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: It's because if I was not, this would be basically the last topic we would ever see. And this is arguably the most important topic in statistics-- or at least that's probably the reason why most of you are taking this class. Because regression implies prediction, and prediction is what people are after now, right? You don't need to understand what the model for the financial market is if you have a formula to predict what the stock prices are going to be tomorrow. And regression, in a way, allows us to do that. And we'll start with a very simple version of regression, which is linear regression, the most standard one. And then we'll move on to slightly more advanced notions, such as nonparametric regression-- at least, we're going to see the principles behind it. And I'll touch a little bit upon high dimensional regression, which is what people are doing today. So the goal of regression is to try to predict one variable based on another variable. All right, so here the notation is very important. It's extremely standard. It shows up everywhere, essentially. You're trying to explain y as a function of x, which is the usual y equals f of x question-- except that, you know, if you look at a calculus class, people tell you y equals f of x, and they give you a specific form for f, and then you do something. Here, we're just going to try to estimate what this function f is. And this is why we often call y the explained variable and x the explanatory variable. All right, so we're statisticians, so we start with data. And what does our data look like? Well, it looks like a bunch of inputs and outputs of this relationship. All right, so we have a bunch of pairs xi, yi, and I can do a scatterplot of those guys. So each point here has an x-coordinate, which is xi, and a y-coordinate, which is yi, and here, I have a bunch of n points. And I just draw them like that. Now, the functions we're going to be interested in are often functions of the form y equals a plus b times x, OK. And that means that this function looks like a line. So if I draw x and y, this function looks exactly like a line, and clearly those points are not on the line. And it will basically never happen that those points are on a line. There's a famous T-shirt from, I think, U.C. Berkeley's stat department, that shows this picture, with a line through it like the one we're going to see, and it says: oh, statisticians-- so many points, and you still managed to miss all of them. And so essentially, we don't believe that this relationship, y is equal to a plus bx, is exactly true-- but maybe up to some noise. And that's where the statistics is going to come into play. There's going to be some random noise, and hopefully the noise is going to be spread out evenly, so that we can average it out if we have enough points. OK. And so this epsilon here is not necessarily due to randomness. But just like when we did modeling in the first place, it essentially accounts for everything we don't understand about this relationship. All right, so for example-- so here, I'm not going to be-- give me one second, we'll see an example in a second.
But the idea here is that if you have data, and if you believe that it's of the form, a plus b x plus some noise, you're trying to find the line that will explain your data the best, right? In the terminology we've been using before, this would be the most likely line that explains the data. So we can see that it's slightly-- we've just added another dimension to our statistical problem. We don't have just x's, but we have y's, and we're trying to find the most likely explanation of the relationship between y and x. All right, and so in practice, the way it's going to look is that we're going to have basically two parameters to find, the slope b and the intercept a, and given data, the goal is going to be to try to find the best possible line. All right? So what we're going to find is not exactly a and b, the ones that actually generated the data, but some estimators of those parameters, a hat and b hat, constructed from the data. All right, so we'll see that more generally, but we're not going to go too much into the details of this. There's actually quite a bit that you can understand if you do what's called univariate regression, when x is actually a real valued random variable. So when this happens, this is called univariate regression. And when x is in rp for p larger than or equal to 2, this is called multivariate regression. OK, and so here we're just trying to explain y is a plus bx plus epsilon. And here we're going to have something more complicated. We're going to have y, which is equal to a plus b1 x1, plus b2 x2, all the way to plus bp xp, plus epsilon-- where the coordinates of x are given by x1 through xp in rp. OK, so it's still linear. Right, I still add all the coordinates of x with a coefficient in front of them, but it's a bit more complicated than just one coefficient for one coordinate of x, OK? So we'll come back to multivariate regression. Of course, you can write this as x transpose b, right? So this entire thing here, this linear combination is of the form x transpose b, where b is the vector that has coordinates b1 to bp. OK? Sorry, here on the slide it's in rd; p is the natural notation. All right, so our goal here, in the univariate one, is to try to write the model, make sense of this little twiddle here-- essentially, from a statistical modeling question, the question is going to be, what distributional assumptions do you want to put on epsilon? Are you going to say they're Gaussian? Are you going to say they're binomial? Are you going to say they're Bernoulli? So that's what we're going to make sense of, and then we're going to try to find a method to estimate a and b. And then maybe we're going to try to do some inference about a and b-- maybe test if a and b take certain values, if they're less than something, maybe find some confidence regions for a and b, all right? So why would you want to do this? Well, I'm sure all of you have an application: if I give you some x, you're trying to predict what y is. Machine learning is all about doing this, right? Without maybe trying to even understand the physics behind this, they're saying, well, you give me a bag of words, I want to understand whether it's going to be a spam or not. You give me a bunch of economic indicators, I want you to tell me how much I should be selling my car for. You give me a bunch of measurements on some patient, I want you to predict how this person is going to respond to my drug-- and things like this.
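To pin down the notation on the slide in one clean place (a restatement in LaTeX, nothing new):

$$\text{univariate: } Y = a + bX + \varepsilon, \quad X \in \mathbb{R};$$
$$\text{multivariate: } Y = a + b_1 X^{(1)} + \cdots + b_p X^{(p)} + \varepsilon = a + X^\top b, \quad b = (b_1, \dots, b_p)^\top \in \mathbb{R}^p.$$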
All right, and often we actually don't have much modeling intuition about what the relationship between x and y is, and this linear thing is basically the simplest function we can think of. Arguably, linear functions are the simplest functions that are not trivial. Otherwise, we would just say, well, let's just predict y to be a constant, meaning it does not depend on x. But if you want it to depend on x, then linear functions are basically as simple as it gets. It turns out, amazingly, this does the trick quite often. So for example, if you look at economics, you might want to assume that the demand is a linear function of the price. So if your price is zero, there's going to be a certain demand. And as the price increases, the demand is going to move. Do you think b is going to be positive or negative here? What? Typically, it's negative, unless we're talking about maybe luxury goods, where, you know, the more expensive, the more people actually want it. I mean, if we're talking about actual economic demand, that's probably definitely negative. The relationship doesn't have to be, you know, clearly linear for this to be useful, because you can sometimes make it linear, transform it into something linear. So for example, you have this multiplicative relationship, PV equals nRT, which is the ideal gas law. If you want to actually write this relationship, if you want to predict what the pressure is going to be as a function of the volume and the temperature-- well, let's assume that n is fixed, and the gas constant R is of course fixed. Then you take the log on each side. So we have PV equals nRT, and what that means is that log PV is equal to log nRT. So that means log P plus log V is equal to log nR plus log T. We said that nR is constant, so this is actually your constant; I'm going to call it a. And then that means that log P is equal to a minus log V plus log T. OK? And so in particular, if I write b equal to negative 1 and c equal to plus 1, this gives me the formula that I have here. Now again, it might be the case that this is the ideal gas law, but in practice, if I start recording pressure, and temperature, and volume, I might make measurement errors, there might be slightly different conditions, in such a way that I'm not going to get exactly those. And I'm just going to put this little twiddle to account for the fact that the points that I'm going to be recording for log pressure, log volume, and log temperature are not going to be exactly on one line. OK, they're going to be close. Actually, in those physics experiments, usually, they're very close, because the conditions are controlled under lab experiments. So it means that the noise is very small. But for other cases, like demand and prices, it's not a law of physics, and so this may change. Even the linear structure is probably not clear, right. At some points, there's probably going to be some weird curvature happening. All right, so this slide is just to tell you maybe you don't have, obviously, a linear relationship, but maybe you do if you start taking logs, exponentials, squares. You can sometimes take the product of two variables, things like this, right. So this is variable transformation, and it's mostly domain-specific, so we're not going to go into more details of this. Any questions?
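As a hedged numeric sketch of that log transformation (assuming NumPy; the constant c, the ranges, and the noise level are invented for illustration, not lab data):

```python
# A sketch of the variable transformation just described: generate data from an
# ideal-gas-like relation P = c * T / V with small multiplicative noise, then a
# linear regression of log P on log V and log T should recover coefficients
# close to -1 and +1.
import numpy as np

rng = np.random.default_rng(1)
n, c = 200, 8.314
V = rng.uniform(1.0, 5.0, size=n)
T = rng.uniform(250.0, 350.0, size=n)
P = c * T / V * np.exp(rng.normal(0, 0.01, size=n))  # small, lab-style noise

# Regress log P on [1, log V, log T] by least squares.
X = np.column_stack([np.ones(n), np.log(V), np.log(T)])
beta, *_ = np.linalg.lstsq(X, np.log(P), rcond=None)
print(beta)  # roughly [log c, -1.0, +1.0]
```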
All right, so now, if we start thinking a little more about what these coefficients should be-- well, remember-- so, everybody's clear why I don't put the little i here? Right, I don't put the little i because I'm just talking about a generic x and a generic y, but the observations are x1, y1, right. So typically, on the blackboard I'm often going to write only x y, but the data really is x1, y1, all the way to xn, yn. So those are those points in this two dimensional plot. But I think of those as being independent copies of the pair x y. They have to contain the relationship. And so when I talk about the distribution of those random variables, I talk about the distribution of x y, and that's the same. All right, so the first thing you might want to ask is, well, if I have an infinite amount of data, what can I hope to get for a and b? If my sample size goes to infinity, then I should actually know exactly what the distribution of x y is. And so there should be an a and a b that capture this linear relationship between y and x. And so in particular, we're going to try to find the population, or theoretical, values of a and b, and you can see that you can actually compute them explicitly. So let's just try to find how. So as I said, we have a bunch of points close to a line, and I'm trying to find the best fit. All right, so this guy is not a good fit. This guy is not a good fit. And we know that this guy is a good fit somehow. So we need to mathematically formulate the fact that this line here is better than this line here or better than this line here. So what we're trying to do is to create a function that has values that are smaller for this curve and larger for these two curves. And the way we do it is by measuring the fit, and the fit is essentially the aggregate distance of all the points to the curve. And there's many ways I can measure the distance to a curve. So-- let's just open a parenthesis. If I have a point here-- so we're going to do it for one point at a time. So if I have a point, there's many ways I can measure its distance to the curve, right? I can measure it like that. That is one distance to the curve. I can measure it like that, by having a right angle here; that is one distance to the curve. Or I can measure it like that. That is another distance to the curve, right. There's many ways I can go for it. It turns out that one is actually going to be fairly convenient for us, and that's the one that says, let's look at the square of the difference between y and the value of the line at x. So if this is the curve, y is equal to a plus bx. Now, I'm going to think of this point as a random point, capital X, capital Y, so that means that it's going to be x1, y1 or x2, y2, et cetera. Now, I want to measure the distance. Can somebody tell me which of the three-- the first one, the second one, or the third one-- this formula, expectation of y minus a minus bx squared is-- which of the three is it representing? AUDIENCE: The second one. PHILIPPE RIGOLLET: The second one where I have the right angle? OK, everybody agrees with this? Anybody wants to vote for something else? Yeah? AUDIENCE: The third one? PHILIPPE RIGOLLET: The third one? Everybody agrees with the third one? So by default, everybody's on the first one? Yeah, it is the vertical distance actually. And the reason is, if it was the one with the right angle, it would actually be a very complicated mathematical formula, so let's just use the vertical one, right? OK, so this means that this is my x, and this is my y. All right, so that means that this point is x y. So what I'm measuring is the difference between y and a plus b times x.
This is the thing I'm going to take the expectation of-- the square and then the expectation-- so a plus b times x, if this is this line, this is this point. So that's this value here. This value here is a plus bx, right? So what I'm really measuring is the difference between y and a plus bx, which is this distance here. And since I like things like Pythagoras' theorem, I'm actually going to put a square here before I take the expectation. So now this is a random variable. This is this random variable. And so I want a number, so I'm going to turn it into a deterministic number. And the way I do this is by taking the expectation. And if you think expectations should be close to averages, this is the same thing as saying, I want that in average, the y's are close to the a plus bx, right? So we're doing it in expectation, but that's going to translate into doing it in average for all the points. All right, so this is the thing I want to measure. So that's this vertical distance. Yeah? OK. This is my fault actually. Maybe we should close those shades. OK, I cannot do just one at a time, sorry. All right, so now that I have those vertical distances, I can ask-- well, now, I have this function, right-- I have a function that takes two parameters a and b, and maps them to the expectation of y minus a plus bx, squared. Sorry, the square is here. And I could ask, well, this is a function that measures the fit of the parameters a and b, right? This function should be small. The value of this function here is a function of a and b that measures how close the point x y is to the line y equals a plus b times x, in expectation. OK, agreed? This is what we just said. Again, if you're not comfortable with the reason why you get expectations, just think about having data points and taking the average value for this guy. So it's basically an aggregate distance of the points to the line. OK, everybody agrees this is a legitimate measure? If all my points were on the line-- if my distribution-- if y was actually equal to a plus bx for some a and b, then this function would be equal to 0 for the correct a and b, right? If they are far-- well, it's going to depend on how much noise I'm getting, but it's still going to be minimized for the best one. So let's minimize this thing. So here, I don't make any-- again, sorry. I don't make an assumption on the distribution of x or y. Here, I assume, somehow, that the variance of x is not equal to 0. Can somebody tell me why? Yeah? AUDIENCE: Not really a question-- on the slides, you have y minus a minus bx, quantity squared, expectation of that, and here you've written square of the expectation. PHILIPPE RIGOLLET: No, here I'm actually taking the expectation of the square. If I wanted to write the square of the expectation, I would just do this. So let's just make it clear. Right? Do you want me to put an extra set of parentheses? That's what you want me to do? AUDIENCE: Yeah, it's just confusing with the [INAUDIBLE] PHILIPPE RIGOLLET: OK, that's the one that makes sense, so the square of the expectation? AUDIENCE: Yeah. PHILIPPE RIGOLLET: Oh, the expectation of the square, sorry. Yeah, dyslexia. All right, any question? Yeah? AUDIENCE: Does this assume that the error is Gaussian? PHILIPPE RIGOLLET: No. AUDIENCE: I mean, in the sense that, like, if we knew that the error followed, say, an e to the minus x to the fourth distribution, would we want to minimize the expectation of the fourth power of y minus a minus bx in order to get
the best fit? PHILIPPE RIGOLLET: Why? So you know the answer to your question, so I just want you to use the words-- right, so why would you want to use the fourth power? AUDIENCE: Well, because, like, we want to more strongly penalize deviations, because we'd expect very large deviations to be very rare, or more rare, than it would with the Gaussian [INAUDIBLE]. PHILIPPE RIGOLLET: Yeah, so that would be the maximum likelihood estimator that you're describing to me, right? I can actually write the likelihood of a pair of numbers a b. And if I know this, that's actually what's going to come into it, because I know that the density is going to come into play when I write the likelihood. But here, I'm just talking about-- this is a mechanical tool. I'm just saying, let's minimize the distance to the curve. Another thing I could have done is take the absolute value of this thing, for example. I just decided to take the square instead. OK, so regardless of what I'm doing, I'm just taking the squares, because that's just going to be convenient for me to do my computations for now. But we don't have any statistical model at this point. I didn't say anything-- that y follows this, x follows this. I'm just doing minimal assumptions as we go, all right? So the variance of x is not equal to 0? Could somebody tell me why? What would my cloud of points look like if the variance of x was equal to 0? Yeah, they would all be at the same point. So it's going to be hard for me to start fitting in a line, right? I mean, best case scenario, I have this x. It has variance zero, so this is the expectation of x. And all my points have the same expectation, and so, yes, I could probably fit that line. But that wouldn't help very much for other x's. So I need a bit of variance so that things spread out a little bit. OK, I'm going to have to do this. I think it's just my-- All right, so I'm going to put a little bit of variance. And the other thing is here, I don't want to do much more, but I'm actually going to think of x as having mean zero. And the way I do this is as follows. Let's define x tilde, which is x minus the expectation of x. OK, so definitely the expectation of x tilde is what? Zero, OK. And so now I want to minimize in a, b the expectation of y minus, a plus b x, squared. And the way I'm going to do this is by turning x into x tilde and stuffing the extra-- and putting the extra expectation of x into the a. So I'm going to write this as the expectation of y minus, a plus b expectation of x-- which I'm going to call a tilde-- plus b x tilde. OK? And everybody agrees with this? So now I have two parameters, a tilde and b, and now the role of x is played by x tilde, which is a centered random variable. OK, so I'm going to call this guy a tilde, but for my computations I'm going to call it a. So how do I find the minimum of this thing? Derivative equal to zero, right? So here it's a quadratic thing. It's going to be like that. I take the derivative, set it to zero. So I'm first going to take the derivative with respect to a and set it equal to zero, so that's equivalent to saying that the expectation of-- well, here, I'm going to pick up a 2-- y minus, a plus bx tilde, is equal to zero. And then I also have that the derivative with respect to b is equal to zero, which is equivalent to the expectation of-- well, I have a negative sign somewhere, so let me put it here-- minus 2 x tilde, y minus, a plus bx tilde.
OK, see, that's why I don't want to put too many parentheses. OK. So I just took the derivative with respect to a, which is just basically the square, and then I have a negative 1 that comes out from inside. And then I take the derivative with respect to b, and since b has x tilde as a factor, it comes out as well. All right, so the minus 2's really won't matter for me. And so now I have two equations. The first equation, well, it's pretty simple. It's just telling me that the expectation of y minus a is equal to zero. So what I know is that a is equal to the expectation of y. And really that was a tilde, which implies that the a I want is actually equal to the expectation of y minus b times the expectation of x. OK? Just because a tilde is a plus b times the expectation of x. So that's for my a. And then for my b, I use the second one. So the second one tells me that the expectation of x tilde times y is equal to a plus b times the expectation of x tilde, which is zero, right? OK? But this a is actually a tilde in this problem, so it's actually a plus b expectation of x. Now, this is the expectation of the product of two random variables, but x tilde is centered, right? It's x minus expectation of x, so this thing is actually equal to the covariance between x and y, by definition of covariance. So now I have everything I need, right. How do I just-- I'm sorry about that. So I have everything I need. Now, I have two equations with two unknowns, and all I have to do is to basically plug it in. So it's essentially telling me that the covariance of xy-- so the first equation tells me that the covariance of xy is equal to a plus b expectation of x, but a is expectation of y minus b expectation of x. So it's-- well, actually, maybe I should start with b. Oh, sorry. OK, I forgot one thing. This is not true, right. I forgot this term. x tilde multiplies x tilde here, so what I'm left with is-- it's minus b times the expectation of x tilde squared. So that's actually minus b times the variance of x tilde, because x tilde is already centered, which is actually the variance of x. So now I have that this thing is actually a plus b expectation of x, minus b variance of x. And I also have that a is equal to expectation of y minus b expectation of x. So if I sum the two, those guys are going to cancel. Those guys are going to cancel. And so what I'm going to be left with is: covariance of xy is equal to expectation of x, expectation of y, and then I'm left with this term here, minus b times the variance of x. And so that tells me that b-- why do I still have the variance there? AUDIENCE: So is the covariance really the expectation of x tilde times y minus expectation of y? Because y is not centered, correct? PHILIPPE RIGOLLET: Yeah. AUDIENCE: OK, but x is still centered. PHILIPPE RIGOLLET: But x is still centered, right. So you just need to have one that's centered for this to work. Right, I mean, you can check it. But basically, when you're going to have the product of the expectations, you only need one of the two in the product to be zero, so the product is zero. OK, why do I keep my-- so I get a, a, and then the b expectation. OK, so it's probably earlier that I made a mistake. So I get-- so this was a tilde. Let's just be clear about the-- So that tells me that a tilde-- maybe it's not super fair of me to-- yeah, OK, I think I know where I made a mistake. I should not have centered. I wanted to make my life easier, and I should not have done that.
And the reason is a tilde depends on b, so when I take the derivative with respect to b, what I'm left with here-- since a tilde depends on b, when I take the derivative of this guy, I actually don't get a tilde here, but I really get-- so again, this was not-- so that's the first one. This is actually x here-- because when I take the derivative with respect to b. And so now, what I'm left with is-- so yeah, I'm basically left with nothing that helps. So I'm sorry about that. Let's start from the beginning, because this is not getting us anywhere, and a quick fix is not going to help. So let's just do it again. Sorry about that. So let's not center anything and just do brute force: minimize over a and b the expectation of y minus, a plus bx, squared. All right. The partial with respect to a, set equal to zero-- so my minus 2 is going to cancel, right, so I'm going to actually forget about this. So it's actually telling me that the expectation of y minus, a plus bx, is equal to zero, which is equivalent to: a plus b expectation of x is equal to the expectation of y. Now, if I take the derivative with respect to b and set it equal to zero, this is telling me that the expectation of-- well, it's the same thing, except that this time I'm going to pull out an x. This guy is equal to zero-- this guy is not here-- and so that implies that the expectation of xy is equal to a times the expectation of x, plus b times the expectation of x squared. OK? All right, so the first one alone is actually not giving me much, so I need to actually work with the two of those guys. So let me rewrite the two equalities that I have. I have a plus b, e of x, is equal to e of y. And then I have e of xy. OK, and now what I do is that I multiply this guy. So I want to cancel one of those things, right? So what I'm going to-- so I'm going to take this guy, and I'm going to multiply it by e of x, and take the difference. So I do times e of x, and then I take the difference of those two, and then those two terms are going to cancel. So then that tells me that b times e of x, squared, plus the expectation of xy is equal to-- so this guy is the one that cancelled. Then I get this guy here, expectation of x times the expectation of y, plus the guy that remains here-- which is b times the expectation of x squared. So here I have b times the expectation of x, the whole thing squared. And here I have b times the expectation of x squared. So if I pull this guy here, what do I get? b times the variance of x, OK? So I'm going to move here. And this guy here, when I move this guy here, I get the expectation of x times y, minus the expectation of x times the expectation of y. So this is actually telling me that the covariance of x and y is equal to b times the variance of x. And so then that tells me that b is equal to the covariance of xy divided by the variance of x. And that's why I actually need the variance of x to be non-zero, because I couldn't do that otherwise. And if it were zero, it would mean that b should be plus infinity-- which is what the limit of this guy is when the variance goes to zero-- or negative infinity; I cannot sort them out. All right, so I'm sorry about the mess, but that should be more clear. Then a, of course, you can write by plugging in the value of b, so you know it's only a function of your distribution, right? So what are the characteristics of the distribution-- so a distribution can have a bunch of things. It can have moments of order 4, of order 26. It can have heavy tails or light tails.
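Collecting the restarted blackboard computation in one place (just the result, in LaTeX):

$$a + b\,\mathbb{E}[X] = \mathbb{E}[Y], \qquad a\,\mathbb{E}[X] + b\,\mathbb{E}[X^2] = \mathbb{E}[XY],$$

which solve to

$$b = \frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(X)}, \qquad a = \mathbb{E}[Y] - b\,\mathbb{E}[X].$$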
But when you compute least squares, the only things that matter are the variance of x, the expectations of the individual ones-- and really, what captures how y changes when you change x is captured in the covariance. The rest is really just normalization. It's just telling you, I want things to cross the y-axis at the right place. I want things to cross the x-axis at the right place. But the slope is really captured by how much covariance you have relative to the variance of x. So this is essentially setting the scale for the x-axis, and this is telling you, for a unit scale, this is the unit of y that you're changing. OK, so we have explicit forms. And what I could do, if I wanted to estimate those things, is just say, well again, we have expectations, right? The expectation of xy minus the product of the expectations-- I could replace expectations by averages and get an empirical covariance, just like we can replace the expectations for the variance and get a sample variance. And this is basically what we're going to be doing. All right, this is essentially what you want. The problem is that if you view it that way, you sort of prevent yourself from being able to solve the multivariate problem. Because it's only in the univariate problem that you have closed form solutions for your problem. But if you actually go to multivariate, this is not where you want to replace expectations by averages. You actually want to replace expectations by averages here. And once you do it here, then you can actually just solve the minimization problem. OK, so one thing that arises from this guy is that this is an interesting formula. All right, think about it. If I have that y is a plus bx plus some noise, things are no longer exactly on the line. I have that y is equal to a plus bx plus some noise, which is usually denoted by epsilon. So that's the distribution, right? If I tell you the distribution of x, and I say y is a plus bx plus epsilon-- I tell you the distribution of epsilon, and if I assume that those two are independent, you have a distribution on y. So what happens is that I can actually always say-- well, you know, this is equivalent to saying that epsilon is equal to y minus, a plus bx, right? I can always write this-- I mean, as a tautology. But here, for those guys-- this is not for any guys, right. This is really for the best fit a and b, those ones that satisfy this gradient-is-equal-to-zero thing. Then what we had is that the expectation of epsilon was equal to the expectation of y minus, a plus b expectation of x, by linearity of the expectation, which was equal to zero. So for this best fit we have zero. Now, the covariance between x and y-- between, sorry, x and epsilon, is what? Well, it's the covariance between x and-- well, epsilon was y minus, a plus bx. Now, the covariance is bilinear, and the constant a contributes nothing, so what I have is that this is the covariance of x and y, minus b times the covariance of x and x, which is the variance of x. So: covariance of xy minus b variance of x. OK, I didn't write it. So here I have covariance of xy is equal to b variance of x, right? Covariance of xy. Yeah, that's because you cannot do that with the covariance. Yeah, I have those averages again. No, because this is centered, right? Sorry, this is centered, so this is actually equal to the expectation of x times, y minus a plus bx. The covariance is equal to the expectation of the product, just because this inside term is actually centered.
So this is the expectation of x times y, minus a times the expectation of x, minus b times the expectation of x squared. Well, actually, maybe I should not go too far. So this is actually the one that I need. But if I stop here, this is actually equal to zero, right. Those are the same equations. OK? Yeah? AUDIENCE: What are we doing right now? PHILIPPE RIGOLLET: So we're just saying that if I actually believe that this best fit was the one that gave me the right parameters, what would that imply about the noise itself, about this epsilon? So here we're actually just trying to find some necessary conditions for the noise-- for the noise to satisfy. And so those conditions are, first, that the expectation is zero. That's what we've got here. And then, that the covariance between the noise and x has to be zero as well. OK, so those are actually conditions that the noise must satisfy. But the noise was just not really defined as noise itself. We were just saying, OK, if we're going to put some assumptions on the epsilon, what had we better have? So the first one is that it's centered, which is good, because otherwise, the noise would shift everything. So now, when you look at a linear regression model-- typically, if you open a book, it doesn't start by saying, let the noise be the difference between y and what I actually want y to be. It says, let y be a plus bx plus epsilon. So conversely, if we assume that this is the model that we have, then we're going to have to assume that epsilon-- we're going to assume that epsilon is centered, and that the covariance between x and epsilon is zero. Actually, often, we're going to assume much more. And one way to ensure that those two things are satisfied is to assume that x is independent of epsilon, for example. If you assume that x is independent of epsilon, of course the covariance is going to be zero. Or we might assume that the conditional expectation of epsilon given x is equal to zero; that implies it too. OK, now, the fact that it's centered is one thing. So if we make this assumption, the only thing it's telling us is that those a, b's that come-- right, we started from there. y is equal to a plus bx plus some epsilon, for some a, for some b. What it turns out is that those a's and b's are actually the ones that you would get by solving this expectation-of-square thing. All right, so when you asked-- back when you were following-- when you asked, you know, why don't we take the square, for example, or the power 4, or something like this-- here, I'm saying, well, if I have y is equal to a plus bx, I don't actually need to put too many assumptions on epsilon. If epsilon actually satisfies those two things, expectation is equal to zero and the covariance with x is equal to zero, then the right a and b that I'm looking for are actually the ones that come with the square-- not with power 4 or power 25. So those are actually pretty weak assumptions. If we want to do inference, we're going to have to assume slightly more. If we want to use t-distributions at some point, for example, and we will, we're going to have to assume that epsilon has a Gaussian distribution. So if you want to start doing more statistics beyond just doing this least squares thing, which is minimizing the squared criterion, you're actually going to have to put in more assumptions. But right now, we did not need them. We only need that epsilon has mean zero and covariance zero with x. OK, so that was basically probabilistic, right.
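In symbols, the necessary conditions just derived, for the least squares a and b:

$$\varepsilon = Y - (a + bX), \qquad \mathbb{E}[\varepsilon] = 0, \qquad \operatorname{Cov}(X, \varepsilon) = 0.$$

The stronger assumptions mentioned above (X independent of $\varepsilon$, or $\mathbb{E}[\varepsilon \mid X] = 0$) each imply these two conditions.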
If I were to do probability, and I were trying to model the relationship between two random variables x and y in the form y is a plus bx plus some noise, this is what would come out. Everything was expectations. There was no data involved. So now let's go to the data problem, which is: now, I do not know what those expectations are. In particular, I don't know what the covariance of x and y is, and I don't know what the expectation of x and the expectation of y are. So I have data for that. So how am I going to do this? Well, I'm going to say, I have x1, y1 through xn, yn, and I'm going to assume that they're i.i.d. And I'm actually going to assume that they follow some model, right. So I'm going to assume that Yi follows the same model: Yi is a plus b Xi plus epsilon i, and I will say that the expectation of epsilon i is zero and the covariance of Xi, epsilon i is equal to zero. So I'm going to put the same model on all the data. So you can see that a is not ai, and b is not bi. It's the same. So as my data increases, I should be able to recover the correct things-- as the size of my data increases. OK, so this is what the statistical problem looks like. You're given the points. There is a true line from which these points were generated, right. There was this line. There was a true a, b that I used to draw this plot, and that was the line. So first I picked an x, say uniformly at random on this interval, 0 to 2. I said that was this one. Then I said, well, I want y to be a plus bx, so it should be here, but then I'm going to add some noise epsilon to go away from this line. And actually here, we got two points basically on this line. So there's basically two epsilons that were small enough that the dots actually look like they're on the line. Everybody's clear about what I'm drawing? So now of course, if you're a statistician, you don't see this. You only see this. And you have to recover this guy, and it's going to look like this. You're going to have an estimated line, which is the red one. And the blue line, which is the true one, the one that actually generated the data. And your question is, well, this red line corresponds to some parameters a hat and b hat; how far apart are those two lines? And one way to address this question is to say, how far is a from a hat, and how far is b from b hat? OK? Another question, of course, that you may ask is, how do you find a hat and b hat? And as you can see, it's basically the same thing. Remember, what was a-- so b was the covariance between x and y divided by the variance of x, right? We can rewrite this: the expectation of xy, minus the expectation of x times the expectation of y, divided by the expectation of x squared, minus the expectation of x, the whole thing squared. OK? If you look at the expression for b hat, I basically replaced all the expectations by bars. So I said, well, this guy I'm going to estimate by an average. So that's the xy bar, which is 1 over n times the sum from i equals 1 to n of Xi times Yi. x bar, of course, is just the one that we're used to. And same for y bar. x squared bar, the one that's here, is the average of the squares. And x bar squared is the square of the average. OK, so you just basically replace this guy by xy bar, this guy by x bar, this guy by y bar, this guy by x squared bar, and this guy by x bar, no square. OK, so that's basically one way to do it. Everywhere you see an expectation, you replace it by an average. That's the usual statistical hammer.
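As a sketch of that hammer in code (assuming NumPy; the generating constants are illustrative):

```python
# The plug-in estimators described above: replace every expectation
# by an average (the "bar" quantities).
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(0, 2, size=n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, size=n)   # true (a, b) = (1, 2)

xy_bar, x_bar, y_bar, x2_bar = np.mean(x * y), np.mean(x), np.mean(y), np.mean(x**2)
b_hat = (xy_bar - x_bar * y_bar) / (x2_bar - x_bar**2)
a_hat = y_bar - b_hat * x_bar
print(a_hat, b_hat)             # close to (1, 2)
print(np.polyfit(x, y, deg=1))  # same answer, as [b_hat, a_hat]
```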
You can actually be slightly more subtle about this. And as an exercise, I invite you-- just to make sure that you know how to do this computation; it's going to be exactly the same kind of computations that we've done. But as an exercise, you can check that if you actually look at, say, well, what I wanted to minimize here-- I had an expectation, right? And I said, let's minimize this thing. Well, let's replace this by an average first, and now minimize. OK, so if I do this, it turns out I'm going to actually get the same result. When I replace the expectation by the average and then minimize, it's the same thing as first minimizing and then replacing the expectations by averages, in this case. Again, this is a much more general principle, because if you don't have a closed form for the minimum, like for some, say, likelihood problems, well, you might not actually have the possibility to just look at what the formula looks like-- see where the expectations show up-- and then just plug in the averages instead. So this is the one you want to keep in mind. And again, as an exercise: here, you do expectation replaced by averages, and then that's the same answer, and I encourage you to solve the exercise. OK, everybody's clear that this is actually the same expression for a hat and b hat that we had before for a and b, when we replaced the expectations by averages? Here, by the way, I minimize the sum rather than the average. It's clear to everyone that this is the same thing, right? Yep? AUDIENCE: [INAUDIBLE] sum replacing it [INAUDIBLE] minimize the expectation, I'm assuming it's switched with the derivative on the expectation [INAUDIBLE]. PHILIPPE RIGOLLET: So we did switch the derivative and the expectation before you came, I think. All right, so indeed, the picture was the one that we said, so visually, this is what we're doing. We're looking among all the lines. For each line, we compute this distance. So if I give you another line, there would be another set of arrows. You look at their lengths. You square them. And then you sum it all, and you find the line that has the minimum sum of squared lengths of the arrows. All right, and those are the arrows that we're looking at. But again, you could actually think of other distances, and you would actually get different solutions, right. So there's something called mean absolute deviation, which, rather than minimizing this thing, is actually minimizing the sum from i equals 1 to n of the absolute value of Yi minus, a plus bXi. And that's not something for which you're going to have a closed form, as you can imagine. You might have something that's sort of implicit, but you can actually still solve it numerically. And this is something that people also like to use, but way, way less than the least squares one. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: What did I just write? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: The sum of the absolute values of Yi minus, a plus bXi. So it's the same, except I don't square here. OK?
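As a hedged sketch of that mean absolute deviation fit (assuming NumPy and SciPy; Nelder-Mead is just one reasonable choice for this non-smooth objective, and all constants are illustrative):

```python
# The mean absolute deviation fit just mentioned: no closed form,
# but it can be minimized numerically.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.uniform(0, 2, size=200)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, size=200)

def mad(params):
    a, b = params
    return np.sum(np.abs(y - (a + b * x)))

res = minimize(mad, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)  # close to (1, 2), typically a bit different from least squares
```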
So arguably, you know, predicting demand based on price is a fairly naive problem. Typically, what we have is a bunch of data that we've collected, and we're hoping that, together, they can help us do a better prediction. All right, so maybe I don't have only the price, but maybe I have a bunch of other social indicators. Maybe I know the competition, the price of the competition. And maybe I know a bunch of other things that are actually relevant. And so I'm trying to find a way to combine a bunch of measurements. There's a nice example that I like, where people were trying to measure something related to your body mass index-- basically, the density of your body. And the way you can do this is by, really, weighing someone and also putting them in a cubic meter of water and seeing how much overflows. And then you have both the volume and the mass of this person, and you can start computing density. But as you can imagine, you know, I would not personally like to go to a gym where the first thing they ask me is to just go in a bucket of water, and so people try to find ways to measure this based on other indicators that are much easier to measure. For example, I don't know, the length of my forearm, and the circumference of my head, and maybe my belly would probably be more appropriate here. And so, you know, they just try to find something that actually makes sense. And so there's actually a nice example where you can show that if you measure-- I think one of the most significant was the circumference of your wrist. This is actually a very good indicator of your body density. And it turns out that if you put a whole bunch of things together, you might actually get a very good formula that explains things. All right, so what we're going to do is, rather than saying we have only one x to explain y, let's say we have 20 x's that we're trying to combine to explain y. And again, just like assuming something of the form y is a plus b times x was the simplest thing we could do, here we're just going to assume that we have y is a plus b1 x1, plus b2 x2, plus b3 x3, and so on. And we can write it in vector form by writing that Yi is Xi transpose b, which is now a vector, plus epsilon i. OK, and here, on the board, I'm going to have a hard time doing boldface, but all these things are vectors, except for y, which is a number. Yi is a number. It's always the value on my y-axis. So even if my x lives in higher dimension-- this is x1, and this is x2-- y is really just real valued. And so I'm going to get a bunch of points, x1, y1, and I'm going to see how much they respond. So for example, my body density is y, and then all the x's are a bunch of other things. Agreed with that? So this is an equation that holds on the real line, but this guy here is in rp, and this guy's in rp. It's actually common to call b beta when it's a vector, and that's the usual linear regression notation: y is x beta plus epsilon. So the x's are called explanatory variables. y is called the explained variable, or dependent variable, or response variable. It has a bunch of names. You can use whatever you feel more comfortable with. The name should be explicit, right; that's all you care about. Now, what we typically do-- so you notice here that there's actually no intercept. If I actually fold that back down to one dimension, there's actually a is equal to zero, right? If I go back to p is equal to 1, that would imply that Yi is, well, say, beta times x plus epsilon i. And that's not good; I want to have an intercept. And the way I do this, rather than writing a plus this, and, you know, just having like an overload of notation, what I am actually doing is that I fold back. I fold my intercept back into my x. And so if I measure 20 variables, I'm going to create a 21st variable, which is always equal to 1.
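A small sketch of that folding trick (assuming NumPy; the column names are made up for illustration):

```python
# Fold the intercept into x: given measured columns, prepend a column of ones,
# so the intercept becomes just another coefficient of beta.
import numpy as np

rng = np.random.default_rng(5)
n = 6
height = rng.normal(170, 10, size=n)  # one measured column
wrist = rng.normal(17, 1, size=n)     # another measured column

# One row per observation X_i; the first column is the constant 1.
X = np.column_stack([np.ones(n), height, wrist])
print(X.shape)  # (6, 3): n rows, p columns
print(X[0])     # [1.0, height_1, wrist_1]
```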
OK, so you should think of x as being 1, and then x1 through xp. And sorry, x p minus 1, I guess. OK, and now this is in rp. I'm always going to assume that the first coordinate is 1. I can always do that. If I have a table of data-- if my data is given to me in an Excel spreadsheet-- and here I have the density that I measured on my data, and then maybe here I have the height, and here I have the wrist circumference. And I have all these things. All I have to do is to create another column here of ones, and I just put 1-1-1-1-1. OK, that's all I have to do to create this guy. Agreed? And now my x is going to be just one of those rows. So this is Xi, this entire row. And this entry here is Yi. So now, for my noise terms, I'm still going to ask for the same thing, except that here, the covariance is not between one random variable and another random variable. It's between a random vector and a random variable. OK, how do I measure the covariance between a vector and a random variable? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, so basically-- AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, I mean, saying the covariance vector is equal to 0 is the same thing as [INAUDIBLE] equal to zero, but yeah, this is basically thought of entry-wise. For each coordinate of x, I want the covariance between epsilon and this coordinate of x to be equal to 0. So I'm just asking this for all coordinates. Again, in most instances, we're going to think that epsilon is independent of x, and that's something we can understand without thinking about coordinates. Yep? AUDIENCE: [INAUDIBLE] like what if beta equals alpha [INAUDIBLE]? PHILIPPE RIGOLLET: I'm sorry, can you repeat the question? I didn't hear. AUDIENCE: Is beta a parameter? PHILIPPE RIGOLLET: Yeah, beta is the parameter we're looking for, right. Just like the pair a, b has become the whole vector beta now. AUDIENCE: And what's [INAUDIBLE]? PHILIPPE RIGOLLET: Well, can you think of an intercept for a function like that? I mean, there is one actually. The intercept is really the weight that I put on this guy, the coordinate of all ones. That's the beta coordinate that goes with this guy, but we don't really talk about the intercept. So if x lives in two dimensions, the way you want to think about this is: you take a sheet of paper like that, so now I have points that live in three dimensions. So let's say one direction here is x1. This direction is x2, and this direction is y. And so what's going to happen is that I'm going to have my points that live in this three dimensional space. And what I'm trying to do when I'm trying to do a linear model for those guys-- when I assume a linear model-- what I assume is that there's a plane in those three dimensions. So think of this guy as going everywhere, and there's a plane close to which all my points should be. That's what's happening in two dimensions. If you can see higher dimensions, then congratulations to you, but I can't. But, you know, you can definitely formalize that fairly easily mathematically and just talk about vectors. So now here, if I talk about the least square error estimator, or just the least squares estimator, of beta, it's simply the same thing as before. Just like we said-- so remember, you should think of beta as being the pair a, b generalized. So we said, oh, we wanted to minimize the expectation of y minus, a plus bx, squared, right? Now, so that's for p is equal to 1.
Now, for p larger than or equal to 2, we're just going to write it as y minus x transpose beta, squared. OK, so I'm just trying to minimize this quantity. Of course, I don't have access to this expectation, so what I'm going to do is replace my expectation by an average. So here I'm using the notation t, because beta is the true one, and I don't want you to confuse them-- so here, I have a variable t that's just moving around. And so now I'm going to take the square of this thing. And when I minimize this over all t in rp, the argmin, the minimum, is attained at beta hat, which is my estimator. OK? So if I want to actually compute-- yeah? AUDIENCE: I'm sorry, on the last slide, did we require the expectation of [INAUDIBLE] to be zero? PHILIPPE RIGOLLET: You mean the previous slide? AUDIENCE: Yes. [INAUDIBLE] PHILIPPE RIGOLLET: So again, I'm just defining an estimator, just like I would tell you, just take the estimator that has coordinates equal to 4 everywhere. AUDIENCE: So I'm saying, we'll say the noise terms want to satisfy the covariance condition. Do we also want them to satisfy expectation of each noise term zero? PHILIPPE RIGOLLET: And so the answer is yes. I was just trying to think if this was captured. So it is not captured in this guy, because this is just telling me something about epsilon i minus the expectation of epsilon i. OK, so yes, I need to have that epsilon has mean zero-- let's assume that the expectation of epsilon is zero for this problem. And we're going to need some sort of condition about the variance being not equal to zero, right, but this is going to come up later. So let's think for one second about doing the same approach as we did before. Take the partial derivative with respect to the first coordinate of t, with respect to the second coordinate of t, with respect to the third coordinate of t, et cetera. So that's what we did before. We had two equations, and we reconciled them because it was fairly easy to solve, right? But in general, what's going to happen is we're going to have a system of equations. We're going to have a system of p equations, one for each of the coordinates of t. And we're going to have p unknowns, each coordinate of t. And so we're going to have this system to solve-- actually, it turns out it's going to be a linear system. But it's not going to be something that we're going to be able to solve coordinate by coordinate. It's going to be annoying to solve. You know, you can guess what's going to happen, right. In the univariate case, it involved the covariance between x and epsilon, right-- sorry, the covariance between x and y; that's what it involved to understand what the solution of this problem was. In this case, there's going to be not only the covariance between x1 and y, x2 and y, x3, et cetera, all the way to xp and y. There are also going to be all the cross covariances between xj and xk. And so this is going to be a nightmare to solve as a system of individual equations. And what we do is that we go on to using matrix notation, so that when we take derivatives, we talk about gradients, and then we can invert matrices and solve linear systems in a somewhat formal manner-- by just saying that, if I want to solve the system ax equals b, rather than actually solving this for each coordinate of x individually, I just say that x is equal to a inverse times b. So that's really why we're going to the matrix equation, because we have a formalism to write that x is the solution of the system.
I'm not telling you that this is going to be easy to solve numerically, but at least I can write it. And so here's how it goes. I have a bunch of vectors. So what are my vectors, right? So I have x1-- oh, by the way, I didn't actually mention that when I put the subscript, I'm talking about the observation, and when I put the superscript, I'm talking about the coordinate, right? So I have x1, which is equal to 1, x1 superscript 1, up to x1 superscript p; x2, which is 1, x2 superscript 1, up to x2 superscript p; all the way to xn, which is 1, xn superscript 1, up to xn superscript p. All right, so those are my n observed x's, and then I have y1, y2, up to yn, that come paired with those guys. OK? So the first thing is that I'm going to stack those y's into some vector that I'm going to call y. So maybe I should put an arrow for the purpose of the blackboard, and it's just y1 to yn. OK, so this is a vector in rn. Now, if I want to stack those x's together, I could create a long vector of size n times p, but the problem is that I lose the role of who's a coordinate and who's an observation. And so it's actually nicer for me to just put those guys next to each other and create one new variable. And so the way I'm going to do this is-- rather than actually stacking those guys like that, I'm taking their transposes and stacking them as the rows of a matrix. OK, so I'm going to create a matrix, which here is denoted typically by-- I'm going to write x double bar. And since I'm taking those guys like this, the first column is going to be only ones. And then I'm going to have-- well, x1 superscript 1 up to x1 superscript p in the first row, and here, in the last row, I'm going to have xn superscript 1 up to xn superscript p. OK, so here the number of rows is n, and the number of columns is p. One row per observation, one column per coordinate. And again, I make your life miserable, because this really should be p minus 1, because I already used the first one for the column of ones. I'm sorry about that. It's a bit painful. So usually we don't even write what's in there, so we don't have to think about it. Those are just vectors of size p. OK? So now that I've created this thing, I can actually just basically stack up all my models. So Yi equals Xi transpose beta plus epsilon i, for all i equal 1 to n. This transforms into-- this is equivalent to saying that the vector y is equal to the matrix x times beta, plus a vector epsilon, where epsilon is just epsilon 1 to epsilon n, right. So I have just this system, which I write as a matrix equation, which really just consists in stacking up all these equations next to each other. So now I have this model-- this is the usual least squares model. And here, when I want to write my least squares criterion in terms of matrices, right-- my least squares criterion, remember, was the sum from i equals 1 to n of Yi minus Xi transpose beta, squared. Well, here it's really just the sum of the squares of the coefficients of the vector y minus x beta. So this is actually equal to the norm squared of y minus x beta. The norm squared is, by definition, the sum of the squares of the coordinates. And so now I can actually talk about minimizing a norm squared, and here it's going to be easier for me to take derivatives. All right, so we'll do that next time.
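As a hedged preview of that minimization in code (assuming NumPy; np.linalg.lstsq minimizes exactly this norm squared, and when x transpose x is invertible the same minimizer solves the normal equations-- the derivation itself is next lecture's topic):

```python
# Minimize ||y - X beta||^2 numerically. np.linalg.lstsq performs this
# minimization; when X^T X is invertible, the same beta_hat solves the
# normal equations X^T X beta = X^T y. Constants are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(0, 0.3, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations
print(beta_hat, beta_normal)                     # both close to beta_true
```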
MIT_18650_Statistics_for_Applications_Fall_2016
6_Maximum_Likelihood_Estimation_cont_and_the_Method_of_Moments.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: So welcome back. We're going to finish this chapter on maximum likelihood estimation. And last time, I briefly mentioned something that was called Fisher information. So Fisher information, in general, is actually a matrix when you have a multivariate parameter theta. So if theta, for example, is of dimension d, then the Fisher information matrix is going to be a d by d matrix. You can see that, because it's an outer product. So it's of the form gradient gradient transpose. So if it's gradient gradient transpose, the gradient is d dimensional. And so gradient times gradient transpose is a d by d matrix. And so this matrix actually contains-- well, it's called the Fisher information matrix because it's basically telling you how much information about theta is in your model. So for example, if your model is well-parameterized, then you will have a lot of information. You will have a higher-- so let's think of it as being a scalar number, just one number now-- so you're going to have larger information about your parameter in the same probability distribution. But if you start having a weird way to parameterize your model, then the Fisher information is actually going to drop. So as a concrete example, think of a parameter of interest in a Gaussian model, where the mean is known to be zero, but what you're interested in is the variance, sigma squared. If I'm interested in sigma squared, I could parameterize my model by sigma, sigma squared, sigma to the fourth, sigma to the 24th. I could parameterize it by whatever I want-- there would be a simple transformation between them. But you could say that some of them are actually more or less informative, and you're going to have different values for your Fisher information. So let's just review a few well-known computations. So I will focus primarily on the one dimensional case, as usual. And I claim that there's two definitions. So if theta is a real valued parameter, then there's basically two definitions that you can think of for your Fisher information. One involves the first derivative of your log likelihood. And the second one involves the second derivative. So the log likelihood here, we're actually going to define it as l of theta. And what is it? Well, it's simply the log of the likelihood function for one observation. So it's log of L1-- and I'm going to write the 1 just to make sure that we all know we're talking about one observation-- of-- what's the order again, I think it's X and theta. So that's the log likelihood, remember? So for example, if I have a density, what is it going to be? It's going to be log of f sub theta of X. So this guy is a random variable, because it's a function of a random variable. And that's why you see expectations of this thing. It's a random function of theta. If I view this as a function of theta, the function becomes random, because it depends on this random X. And so I of theta is actually defined as the variance of l prime of theta-- so the variance of the derivative of this function. And I also claim that it's equal to negative the expectation of the second derivative, l prime prime of theta. And here, the expectation and the variance can be computed, because this function, remember, is random.
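In symbols, for one observation (a restatement of the two definitions just given):

$$\ell(\theta) = \log L_1(X, \theta) = \log f_\theta(X), \qquad I(\theta) = \operatorname{Var}\big(\ell'(\theta)\big) = -\mathbb{E}\big[\ell''(\theta)\big],$$

where the variance and the expectation are computed under the distribution of X.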
So I need to tell you what is the distribution of the X with respect to which I'm computing the expectation and the variance. And it's the distribution with parameter theta itself. So there's a Fisher information for all values of the parameter, but the one typically we're interested in is the true parameter, theta star. But view this as a function of theta right now. So now, I need to prove to you-- and this is not a trivial statement-- that the variance of the derivative is equal to negative the expectation of the second derivative. I mean, there's really quite a bit that comes into this, right. And it comes from the fact that this is the log not of just anything-- it's the log of a density. So let's just prove that, without having to bother too much ourselves with some technical assumptions. And the technical assumptions are the assumptions that allow me to permute derivative and integral. Because when I compute the variances and expectations, I'm actually integrating against the density. And what I want to do is to make sure that I can always do that. So my technical assumption is: I can always permute integrals and derivatives. So let's just prove this. So what I'm going to do is I'm going to assume that X has density f theta. And I'm actually just going to write-- well, let me write it f theta right now. Let me try to not be lazy about writing. And so the thing I'm going to use is the fact that the integral of this density is equal to what? 1. And this is where I'm going to start doing weird things. That means that if I take the derivative of this guy, it's equal to 0. So that means that if I look at the derivative with respect to theta of the integral of f theta of X dX, this is equal to 0. And this is where I'm actually making this switch: I'm going to say that this is actually equal to the integral of the derivative. So that's going to be the first thing I'm going to use. And of course, if it's true for the first derivative, it's going to be true for the second derivative. So I'm going to actually do it a second time. And the second thing I'm going to use is the fact that the integral of the second derivative is equal to 0. So let's start from here. And let me start from, say, the expectation of l prime prime of theta. So what is l prime prime of theta? Well, it's the second derivative of log of f theta of X. And we know that the derivative of the log-- sorry-- yeah, so the derivative of the log is 1 over-- well, it's the derivative of f divided by f itself. Everybody's with me? Just: log of f, prime, is f prime over f. Here, it's just that f, I view this as a function of theta and not as a function of X. So now, I need to take another derivative of this thing. So that's going to be equal to-- well, we all know the formula for the derivative of a ratio. So I pick up the second derivative times f theta, minus the first derivative squared, divided by f theta squared-- basic calculus. And now, I need to check that negative the expectation of this guy is giving me back what I want. Well, what is negative the expectation of l prime prime of theta? Well, what we need to do is to take the negative integral of this guy against f theta. So it's minus the integral of-- that's just the definition of the expectation: I take an integral against f theta. But here, I have something nice. What's happening is that those guys are canceling. And now that those guys are canceling, those guys are canceling too.
So what I have is that the first term-- I'm going to break this difference here. So I'm going to say that the integral of this difference is the difference of the integrals. So the first term is going to be the integral of d over d theta squared of f theta. And the second one, the negative signs are going to cancel, and I'm going to be left with this. Everybody's following? Anybody found the mistake? How about the other mistake? I don't know if there's a mistake. I'm just trying to get you to check what I'm doing. With me so far? So this guy here is the integral of the second derivative of f of X dX. What is this? AUDIENCE: It's 0. PHILIPPE RIGOLLET: It's 0. And that's because of this guy, which I will call frowny face. So frowny face tells me this. And let's call this guy monkey that hides his eyes. No, let's just do something simpler. Let's call it star. And this guy, we will use later on. So now, I have to prove that this guy, which I have proved is equal to this, is now equal to the variance of l prime of theta. So now, let's go back to the other way. We're going to meet halfway. I'm going to have a series-- I want to prove that this guy is equal to this guy. And I'm going to have a series of equalities that I'm going to meet halfway. So let's start from the other end. We started from negative l prime prime of theta. Let's start with the variance part. Variance of l prime of theta-- so that's the expectation of l prime of theta squared minus the square of the expectation of l prime of theta. Now, what is the square of the expectation of l prime of theta? Well, l prime of theta is equal to the partial with respect to theta of log f theta of X, which we know from the first line over there-- that's what's in the bracket on the second line there-- is actually equal to the partial over theta of f theta of X divided by f theta of X. That's the derivative of the log. So when I look at the expectation of this guy, I'm going to have the integral of this against f theta. And the f thetas are going to cancel again, just like they did here. So this thing is actually equal to the integral of partial over theta of f theta of X dX. And what is this equal to? 0, by the monkey hiding his eyes. So that's star-- star tells me that this is equal to 0. So basically, when I compute the variance, this term is not going to matter. I only have to compute the first one. So what is the first one? Well, the first one is the expectation of l prime squared. And so that guy is the integral of-- well, what is l prime? Again, it's partial over partial theta of f theta of X divided by f theta of X. Now, this time, this guy is squared against the density. So one of the f thetas cancels. But now, I'm back to what I had before for this guy. So this guy is now equal to this guy. It's just the same formula. So they're the same thing. And so I've moved both ways. Starting from the expression that involves the expectation of the second derivative, I've come to this guy. And starting from the expression that tells me about the variance of the first derivative, I've come to the same guy. So that completes my proof. Are there any questions about the proof? We have also, along the way, found an explicit formula for the Fisher information as well.
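(A quick numerical sanity check, not from the lecture: a minimal Python sketch verifying the two identities, plus the explicit formula found along the way, on a Bernoulli(p) model, where the closed form is I(p) = 1/(p(1-p)). The model choice, seed, and sample size are my own.)

```python
import numpy as np

# Sketch: check the three expressions for Fisher information numerically
# on a Bernoulli(p) model, where the closed form is I(p) = 1/(p(1-p)).
rng = np.random.default_rng(0)
p = 0.3
x = rng.binomial(1, p, size=1_000_000)  # samples from P_theta at theta = p

# log f_p(x) = x log p + (1 - x) log(1 - p)
score = x / p - (1 - x) / (1 - p)               # l'(p), the score
curvature = -x / p**2 - (1 - x) / (1 - p)**2    # l''(p)

print(np.var(score))        # Var(l')      ~ 4.76
print(-np.mean(curvature))  # -E[l'']      ~ 4.76
print(np.mean(score**2))    # E[(l')^2]    ~ 4.76
print(1 / (p * (1 - p)))    # closed form  = 4.7619...
```

All three numbers agree up to Monte Carlo error, which is exactly the content of the proof above.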
So now that I have this thing, I could actually add that if X has a density, for example, this is also equal to the integral of-- well, the partial over theta of f theta of X, squared, divided by f theta of X, because I just proved that those two things were actually equal to the same thing, which was this guy. Now in practice, this is really going to be the useful one. The other two are going to be useful depending on what case you're in. So if I ask you to compute the Fisher information, you have now three ways to pick from. And basically, practice will tell you which one to choose if you want to save five minutes when you're doing your computations. Maybe you're the guy who likes to take derivatives. And then you're going to go with the second derivative one. Maybe you're the guy who likes to expand squares, so you're going to take the one that involves the square of the first derivative. And maybe you're just a normal person, and you want to use that guy. Why do I care? This is the Fisher information. And I could have defined the Hilbert information by taking the square root of this guy plus the sine of this thing and just be super happy and have my name in textbooks. But this thing has a very particular meaning. When we're doing maximum likelihood estimation-- so remember, maximum likelihood estimation is just an empirical version of trying to minimize the KL divergence. So what we're trying to do, maximum likelihood, is really trying to minimize the KL divergence. And we're trying to minimize this function, remember? So now what we're going to do is we're going to plot this function. We said, let's place ourselves in cases where this KL is convex, so that its negative is concave. So it's going to look like this-- U-shaped, that's convex. So that's the true thing I'm trying to minimize. And what I said is that I'm going to actually try to estimate this guy. So in practice, I'm going to have something that looks like this, but it's not really this. And we're not going to do this, but you can show that you can control this uniformly over the entire space, that there is no place where it just becomes huge. In particular, there is no place where it becomes super huge and the minimum of the dotted line ends up really far from this guy. So if those two functions are close to each other, then this implies that the minimum here of the dotted line is close to the minimum of the solid line. So we know that this is theta star. And this is our MLE estimator, theta hat ML. So that's basically the principle-- the more data we have, the closer the dotted line is to the solid line. And so the minimum is closer to the minimum. But now, this is just one example, where I drew a picture for you. But there could be some really nasty examples. Think of this example, where I have a function, which is convex, but it looks more like this. That's convex, it's U-shaped. It's just a professional U. Now, I'm going to put a dotted line around it that has pretty much the same fluctuations. The band around it is of this size. So do we agree that the distance between the solid line and the dotted line is pretty much the same in those two pictures? Now, here, depending on how I tilt this guy, basically, I can put the minimum theta star wherever I want. And let's say that here, I actually put it here. That's pretty much the minimum of this line. And now, the minimum of the dotted line is this guy. So they're very far.
The fact that I'm very flat at the bottom makes my requirements for being close to the U-shaped solid curve much more stringent, if I want to stay close. And so this is the canonical case. This is the annoying case. And of course, you have the awesome case-- looks like this. And then, whichever way you deviate, even by something pretty far, it doesn't matter-- the minimum is always going to stay close. Now, what is the quantity that measures how curved I am at a given point-- how curved the function is at a given point? The second derivative. And so the Fisher information is negative the second derivative. Why the negative? Well here-- Yeah, we're looking for a minimum, and this guy is really the-- you should view this as a flipped-over function. Here, we're trying to maximize the likelihood, which is basically maximizing the negative KL. So the picture I'm showing you is trying to minimize the KL. So the true picture that you should see for this guy is the same, except that it's just flipped over. But the curvature is the same, whether I flip my sheet or not. So it's the same thing. So apart from this negative sign, which is just coming from the fact that we're maximizing instead of minimizing, this is just telling me how curved my likelihood is around the maximum. And therefore, it's actually telling me how good, how robust my maximum likelihood estimator is. It's going to tell me how close, actually, my maximum likelihood estimator is going to be to the true parameter. So I should be able to see that somewhere. There should be some statement that tells me that this Fisher information will play a role when assessing the precision of this estimator. And remember, how do we characterize a good estimator? Well, we look at its bias, or we look at its variance. And we can combine the two and form the quadratic risk. So essentially, what we're going to try to say is that one of those guys-- either the bias or the variance or the quadratic risk-- is going to be worse if my function is flatter, meaning that my Fisher information is smaller. And this is exactly the point of this last theorem. So let's look at a couple of conditions. So this is your typical 1950s statistics paper that has like one page of assumptions. And this was like that in the early days, because people were trying to make theories that would be valid for as many models as possible. And now, people are sort of abusing this, and they're just making all these lists of assumptions so that their particular method works for their particular problem, because they just want to take shortcuts. But really, the maximum likelihood estimator is basically as old as modern statistics. And so these were really necessary conditions. And we'll just parse that. The model is identified. Well, it had better be, because I'm trying to estimate theta and not P theta. So this one is good. For all theta, the support of P theta does not depend on theta. So that's just something that we need to have. Otherwise, things become really messy. And in particular, I'm not going to be able to define likelihoods-- Kullback-Leibler divergences. Then why can I not do that? Well, because the Kullback-Leibler divergence has a log of the ratio of two densities. And if the support is changing with theta, it might be that you have the log of something that's 0 over something that's not 0. And the log of 0 is a slightly annoying quantity to play with. And so we're just removing that case.
So nothing depends on theta-- think of the support as being basically the entire real line, for a Gaussian, for example. Theta star is not on the boundary of theta. Can anybody tell me why this is important? We're talking about derivatives. So when I want to talk about derivatives, I'm talking about fluctuations around a certain point. And if I'm at the boundary, it's actually really annoying. I might have the derivative-- remember, I gave you this example-- where the maximum likelihood is just obtained at the boundary, because the function cannot grow anymore at the boundary. But it does not mean that the first order derivative is equal to 0. It does not mean anything. So all this picture here is valid only if I'm actually achieving the minimum inside. Because if my theta space stops here and it's just this guy, I'm going to be here. And there's no question about curvature or anything that comes into play. It's completely different. So here, it's inside. Again, think of theta as being the entire real line. Then everything is inside. I is invertible. What does it mean for a positive number, a 1 by 1 matrix, to be invertible? Yep. AUDIENCE: It'd be equal to its [INAUDIBLE]. PHILIPPE RIGOLLET: A 1 by 1 matrix, that's a number, right? What is a characteristic-- if I give you a matrix with numbers and ask you if it's invertible, what are you going to do with it? AUDIENCE: Check if the determinant is 0. PHILIPPE RIGOLLET: Check if the determinant is 0. What is the determinant of a 1 by 1 matrix? It's just the number itself. So basically, you want to check if this number is 0 or not. So we're going to think in the one dimensional case here. And in the one dimensional case, that just means that the curvature is not 0. Well, it had better not be 0, because then I'm going to have no guarantees. If I'm totally flat, if I have no curvature, I'm basically totally flat at the bottom. And then I'm going to get nasty things. Now, this is not the whole story. I could have the curvature which grows like-- so here, it's basically-- the second derivative is telling me-- if I do the Taylor expansion, it's telling me how I grow as a function of, say, x squared. It's the quadratic term that I'm controlling. It could be that this guy is 0, and then the term of order x to the fourth picks up. That could be the first one that's non-zero. But that would mean that my rate of convergence would not be square root of n. When the central limit theorem actually comes into play, it would become n to the 1/4th. And if I have a bunch of 0's until the 16th order, I would have n to the 1/16th, because that's really telling me how flat I am. So we're going to focus on the case where it's only quadratic terms, and the rates of the central limit theorem kick in. And then a few other technical conditions-- we just used a couple of them. So I permuted limit and integral. And you can check that really what you want is that the integral of a derivative is equal to 0. Well, it just means that the values at the two ends are actually the same. So those are slightly different things. So now, what we have is that the maximum likelihood estimator has the following two properties. The first one, if I were to say that in words, what would I say, that theta hat is-- Is what? Yeah, that's what I would say when I-- that's for mathematicians. But if I'm a statistician, what am I going to say? It's consistent. It's a consistent estimator of theta star. It converges in probability to theta star.
And then we have this sort of central limit theorem statement. The central limit theorem statement tells me that if this was an average and I removed the expectation of the average-- let's say it's 0, for example-- then square root of n times the average blah goes to some normal distribution. This is telling me that this is actually true, even if theta hat has nothing to do with an average. That's remarkable. Theta hat might not even have a closed form, and I still have basically the same properties as an average that would be given to me by a central limit theorem. And what is the asymptotic variance? So that's the variance after the square root of n scaling. So here, I'm thinking of having those guys being multivariate. And so I have the inverse of the Fisher information matrix that shows up as the variance-covariance matrix asymptotically. But if you think of it as just being a one dimensional parameter, it's one over this Fisher information, one over the curvature. So if the curvature is really low, the variance becomes really big. If the function is really flat, curvature is low, variance is big. If the curvature is very high, the variance becomes very low. And so that illustrates everything that's happening with the pictures that we have. And if you look, what's amazing here, there is no square root 2 pi, there's no fudge factors going on here. This is the asymptotic variance, nothing else. It's all in there, all in the curvature. Are there any questions about this? So you can see here that theta star is the true parameter. And the information matrix is evaluated at theta star. That's the point that matters. When I drew this picture, the point that was at the very bottom was always theta star. It's the one that minimizes the KL divergence, as long as the model is identified. Yes. AUDIENCE: So the higher the curvature, the higher the inverse of the Fisher information? PHILIPPE RIGOLLET: No, the higher the Fisher information itself. So the inverse is going to be smaller. So small variance is good. So now what it means, actually, if I look at what is the quadratic risk of this guy, asymptotically-- what is the asymptotic quadratic risk? Well, it's 0 actually. But if I assume that this thing is true, that this thing is pretty much Gaussian, if I look at the quadratic risk, well, it's the expectation of the square of this thing. And so it's going to scale like the variance divided by n. The bias goes to 0, just by this. And then the quadratic risk is going to scale like one over Fisher information divided by n. So here, the-- I'm not mentioning the constants. There must be constants, because everything is asymptotic. So for each finite n, I'm going to have some constants that show up. Everybody just got their mind blown by this amazing theorem? So I mean, if you think about it, the MLE can be anything. I'm sorry to say to you, in many instances, the MLE is just going to be an average, which is just going to be slightly annoying. But there are some cases where it's not. And we have to resort to this theorem rather than actually resorting to the central limit theorem to prove this thing. And more importantly, even if this was an average, you don't even have to know how to compute the covariance matrix-- sorry, the variance of this thing to plug it into the central limit theorem. I'm telling you, it's actually given by the Fisher information matrix. So if it's an average, between you and me, you probably want to go the central limit theorem route if you want to prove this kind of stuff. But if it's not, then that's your best shot.
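(A simulation sketch, not from the lecture: checking the theorem's conclusion on Bernoulli(p), where the MLE happens to be the sample average and I(p) = 1/(p(1-p)), so the variance of sqrt(n)(theta hat - theta star) should be close to p(1-p). The parameter values are mine.)

```python
import numpy as np

# Sketch: check that sqrt(n) * (theta_hat - theta_star) has variance
# close to 1 / I(theta_star) = p(1 - p) for the Bernoulli MLE (the mean).
rng = np.random.default_rng(1)
p_star, n, reps = 0.3, 500, 10_000
samples = rng.binomial(1, p_star, size=(reps, n))
mle = samples.mean(axis=1)               # the MLE for each replication
z = np.sqrt(n) * (mle - p_star)

print(z.var())                # ~ 0.21
print(p_star * (1 - p_star))  # 1 / I(p_star) = 0.21
```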
But you have to check those conditions. I will give you those for granted. Ready? Any questions? We're going to wrap up this chapter. So if you have questions, that's the time. Yes. AUDIENCE: What was the quadratic risk up there? PHILIPPE RIGOLLET: You mean the definition? AUDIENCE: No, the-- what it was for this. PHILIPPE RIGOLLET: Well, you see the quadratic risk, if I think of it as being one dimensional, the quadratic risk is the expectation of the square of the difference between theta hat and theta star. So that means that if I think of this as having a normal 0, 1, that's basically computing the expectation of the square of this Gaussian divided by n. I just divided by square root of n on both sides. So it's the expectation of the square of this Gaussian. The Gaussian is mean 0, so the expectation of the square is just the variance. And so I'm left with 1 over the Fisher information divided by n. AUDIENCE: I see. OK. PHILIPPE RIGOLLET: So let's move on to chapter four. And this is the method of moments. So the method of moments is actually maybe a bit older than maximum likelihood. And maximum likelihood is dated, say, early 20th century, I mean as a systematic thing, because as I said, many of those guys are going to be averages. So finding an average is probably a little older. The method of moments, there's some really nice uses. There's a paper by Pearson in 1904, I believe, or maybe 1894. I don't know. In this paper, he was actually studying some species of crabs on an island, and he was trying to make some measurements. That's how he came up with this model of mixtures of Gaussians, because there were actually two different populations in this population of crabs. And the way he actually fitted the parameters was by doing the method of moments, except that since there were a lot of parameters, he actually had to basically solve six equations with six unknowns. And that was a complete nightmare. And the guy did it by hand. And we don't know how he did it actually. But that is pretty impressive. So I want to start-- and this first part is a little brutal. But this is a Course 18 class, and I do not want to give you-- So let's all agree that this course might be slightly more challenging than AP statistics. And that means that it's going to be challenging just during class. I'm not going to ask you about the Weierstrass Approximation Theorem during the exams. But what I want is to give you mathematical motivations for what we're doing. And I can promise you that maybe you will have a slightly higher body temperature during the lecture, but you will come out of this class smarter. And I'm trying to motivate you to use mathematical tools and show you where interesting mathematical things that you might find dry elsewhere actually work very beautifully in the stats literature. And one that we saw was using the Kullback-Leibler divergence as a motivation for maximum likelihood estimation, for example. So the Weierstrass Approximation Theorem is something that comes from pure analysis. So maybe-- I mean, it took me a while before I saw that. And essentially, what it's telling you is that if you look at a function that is continuous on an interval a, b-- on a segment a, b-- then you can actually approximate it uniformly well by polynomials as long as you're willing to take the degree of these polynomials large enough.
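(To see the theorem in action, here is a small sketch of my own, not from the lecture: least-squares Chebyshev fits on a dense grid stand in for the best polynomial approximation, and the sup-norm error of approximating the continuous but non-smooth function |x| shrinks as the degree grows.)

```python
import numpy as np

# Sketch: uniform polynomial approximation of a continuous function.
x = np.linspace(-1.0, 1.0, 2001)
f = np.abs(x)                  # continuous on [-1, 1], not smooth at 0

for d in (2, 5, 10, 20, 50):
    poly = np.polynomial.Chebyshev.fit(x, f, d)
    print(d, np.max(np.abs(f - poly(x))))  # sup-norm error decreases in d
```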
So the formal statement is: for any epsilon, there exists a degree d-- which depends on epsilon-- and coefficients a0 to ad, such that the polynomial with these coefficients is within epsilon of the function, uniformly over the interval. So if you insist on having an accuracy which is 1/10,000, maybe you're going to need a polynomial of degree 100,000, who knows. It doesn't tell you anything about this. But it's telling you that at least you have only a finite number of parameters to approximate those functions that typically require an infinite number of parameters to be described. So that's actually quite nice. And that's the basis for many things and many polynomial methods typically. And so here, it's uniform, so there's this max over x that shows up that's actually nice as well. That's the Weierstrass Approximation Theorem. Why is that useful to us? Well, in statistics, I have a sample of X1 to Xn. I have, say, an identified statistical model-- I'm not always going to remind you that it's identified. And I'm going to assume that it has a density. You could think of it as having a PMF, but think of it as having a density for one second. Now, what I want is to find the distribution. I want to find theta. And finding theta, since the model is identified, is equivalent to finding P theta, which is equivalent to finding f theta. And knowing a density is the same as knowing its integral against any test function h. So that means that if I want to check whether two densities are the same, all I have to do is to compute their integrals against all bounded continuous functions. You already know that it would be true if I checked for all functions h. But since f is a density, I can actually look only at functions h that are bounded, say, between minus 1 and 1, and that are continuous. That's enough. Agreed? Well, just trust me on this. Yes, you have a question? AUDIENCE: Why is this-- like, why shouldn't you just say that [INAUDIBLE]? PHILIPPE RIGOLLET: Yeah, I can do that. I'm just finding a characterization that's going to be useful for me later on. I can find a bunch of them. But here, this one is going to be useful. So all I need to say is that if f theta integrated against h of x agrees with f theta star integrated against h of x for all such h, then this implies that f theta is equal to f theta star-- not everywhere, but almost everywhere. And that's only true if I guarantee to you that f theta and f theta star are densities. This is not true for any function. So now, that means that, well, if I wanted to estimate theta, all I would have to do is to compute the average, right-- so this guy here, the integral-- let me clean up my board a bit. So my goal is to find theta such that, if I look at f theta and now I integrate it against h of x, then this gives me the same thing as if I were to do it against f theta star. And I want this for any h which is continuous and bounded. So of course, I don't know what this quantity is. It depends on my unknown theta star. But I have data from this. And I'm going to do the usual-- the good old statistical trick, which is, well, this I can write as the expectation with respect to P theta star of h of X. That's just the integral of a function against something. And so what I can do is say, well, now I don't know this guy. But my good old trick from the book is: replace expectations by averages. And what I get is approximately equal, by the law of large numbers.
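(The trick, numerically: a sketch of my own, with h ranging over the monomials we're about to use and an Exponential(lambda) sample, whose population moments have the closed form E[X^k] = k!/lambda^k.)

```python
import numpy as np
from math import factorial

# Sketch: empirical averages of h(X) = X^k approximate the population
# integrals, i.e. the moments.  For Exponential(lam), E[X^k] = k!/lam^k.
rng = np.random.default_rng(2)
lam, n = 2.0, 1_000_000
x = rng.exponential(scale=1 / lam, size=n)

for k in range(1, 5):
    print(k, np.mean(x**k), factorial(k) / lam**k)  # empirical vs. true
```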
So if I can actually find a function f theta such that when I integrate it against h it gives me pretty much the average of the evaluations of h over my data points for all h, then that should be a good candidate. The problem is that's a lot of functions to try. Even if we reduced that from all possible functions to bounded and continuous ones, that's still a pretty large infinite number of them. And so what we can do is to use our Weierstrass Approximation Theorem. And it says, well, maybe I don't need to test it against all h. Maybe the polynomials are enough for me. So what I'm going to do is I'm going to look only at functions h that are of the form sum of ak-- so h of x is sum of ak X to the k-th for k equals 0 to d-- only polynomials of degree d. So when I look at the average of my h's, I'm going to get a term like the first one. So the first one here, this guy, becomes 1/n sum from i equal 1 to n, sum from k equal 0 to d, of ak Xi to the k-th. That's just the average of the values of h of Xi. And now, what I need to do is to check that it's the same thing when I integrate h of this form as well. I want this to hold for all polynomials of degree d. That's still a lot of them. There's still an infinite number of polynomials, because there's an infinite number of numbers a0 to ad that describe those polynomials. But since those guys are polynomials, it's actually enough for me to look only at the terms of the form X to the k-th-- no linear combination, no nothing. So actually, it's enough to look only at h of x which is equal to X to the k-th, for k equal 0 to d. And now, how many of those guys are there? Just d plus 1, 0 to d. So that's actually a much easier thing for me to solve. Now, this quantity, which is the integral of f theta against X to the k-th-- so that's the expectation of X to the k-th here-- is called the moment of order k, or k-th moment, of P theta. That's a moment. A moment is just the expectation of a power. The mean is which moment? The first moment. And the variance is not exactly the second moment. It's the second moment minus the first moment squared. That's the variance. It's E of X squared minus the square of E of X. So those are things that you already know. And then you can go higher. You can go to E of X cubed, E of X blah, blah. Here, I say go to E of X to the d-th. Now, as you can see, this is not something you can really put in action right now, because the Weierstrass Approximation Theorem does not tell you what d should be. Actually, we totally lost track of the epsilon I was even looking for. I just said approximately equal, approximately equal. And so all this thing is really just motivation. But it's essentially telling you that if you go to d large enough, technically you should be able to identify your distribution up to epsilon. So I should be pretty good, if I go to d large enough. Now in practice, you actually need much less than an arbitrarily large d. Typically, we are going to need d equal to 1 or 2. So there are some limitations to the Weierstrass Approximation Theorem. And there's a few. The first one is that it only works for continuous functions, which is not so much of a problem. That can be fixed. Well, we need bounded continuous functions. It works only on intervals. That's annoying, because we're going to have random variables that are defined beyond intervals. So we need something that just goes beyond the intervals.
And you can imagine that if you let your interval be huge, it's going to be very hard for you to have a polynomial approximate it [INAUDIBLE] well. Things are going to start going up and down at the boundary, and it's going to be very hard. And again, as I said several times, it doesn't tell us what d should be. And as statisticians, we're looking for methods, not just principles asserting that a method exists. So if E is discrete, I can actually get a handle on this d. If E is discrete and actually finite-- I'm going to actually look at a finite E, meaning that I have a PMF on, say, r possible values, x1 to xr. My random variable, capital X, can take only r possible values. Let's think of them as being the integer numbers 1 to r. That's the number of successes out of r trials that I get, for example. Binomial (r, p), that's exactly something like this. So now, clearly this entire distribution is defined by the PMF, which gives me exactly r numbers. So I can completely describe this distribution with r numbers. The question is, do I have an enormous amount of redundancy if I try to describe this distribution using moments? It might be that I need-- say, r is equal to 10, maybe I have only 10 numbers to describe this thing, but I actually need to compute moments up to the order of 100 before I actually recover the entire distribution. Maybe I need to go infinite. Maybe the Weierstrass Theorem is the only thing that actually saves me here. And I just cannot recover it exactly. I can go to epsilon if I'm willing to go to higher and higher polynomials. Oh, by the way, in the Weierstrass Approximation Theorem, I can promise you that as epsilon goes to 0, d goes to infinity. So now, really I don't even have r parameters. I have only r minus 1 parameters, because they sum up to 1. So the last one I can always get by doing 1 minus the sum of the first r minus 1. Agreed? So each distribution on r values is described by r minus 1 parameters. The question is, can I use only r minus 1 moments to describe this guy? This is something called Gaussian quadrature. Gaussian quadrature tells you, yes, moments are actually a good way to reparametrize your distribution, in the sense that if I give you the moments or if I give you the probability mass function, I'm basically giving you exactly the same information. You can recover all the probabilities from there. So here, I'm going to denote by-- I'm going to drop the theta in the notation. I don't have theta. Here, I'm talking about any generic distribution. And so I'm going to call mk the k-th moment. And since I have a PMF, this is really the sum for j equals 1 to r of xj to the k-th times p of xj, where p is the PMF. So that's my k-th moment. So the k-th moment is a linear combination of the numbers that I am interested in. So that's one equation. And I have as many equations as moments I'm actually willing to look at. So if I'm looking at 25 moments, I have 25 equations. m1 equals blah with this to the power of 1, m2 equals blah with this to the power of 2, et cetera. And then I also have the equation that 1 is equal to the sum of the p of xj. That's just the definition of a PMF. So these are r's. They're ugly, but those are r's. So now, this is a system of linear equations in p, and I can actually write it in its canonical form, which is of the form a matrix of those guys times my parameters of interest equals a right-hand side. The right-hand side is the moments.
That means, if I give you the moments, can you come back and find the PMF? Because we know already from probability that the PMF is all I need to know to fully describe my distribution. Given the moments, that's unclear. Now, here, I'm going to actually take exactly r minus 1 moments and this extra condition that the sum of those guys should be 1. So that gives me r equations based on r minus 1 moments. And how many unknowns do I have? Well, I have my r unknown parameters for the PMF, the r values of the PMF. Now, of course, this is going to play a huge role in whether there are many p's that give me the same moments. The goal is to find out if there are several p's that can give me the same moments. But if there's only one p that can give me a set of moments, that means that I have a one-to-one correspondence between PMF and moments. And so if you give me the moments, I can just go back to the PMF. Now, how do I go back? Well, by inverting this matrix. If I multiply this matrix by its inverse, I'm going to get the identity times the vector of p's equals the inverse of the matrix times the m's. So what we want to do is to say that p is equal to the inverse of this big matrix times the moments that you give me. And if I can actually talk about the inverse, then I have basically a one-to-one mapping between the m's, the moments, and the p's. So what I need to show is that this matrix is invertible. And we just decided that the way to check if a matrix is invertible is by computing its determinant. Who has computed a determinant before? Or who was at least supposed to compute one? OK, so you know how to compute determinants. And if you've seen any determinant in class, there's one that shows up in the exercises that professors love. And it's called the Vandermonde determinant. And it's the determinant of a matrix that has a very specific form. It looks like-- so there's basically only r parameters to this r by r matrix. The first row, or the first column-- sometimes, it's presented like that-- is this vector where each entry is to the power of 1. And the second one is each entry to the power of 2, and to the power of 3, and to the power of 4, et cetera. So that's exactly what we have-- x1 to the first, x2 to the first, all the way to xr to the first, and then the same thing to the power of 2, all the way to the last one. And I also need to add the row of all 1's, which you can think of as those guys to the power of 0, if you want. So I should really put it on top, if I wanted to have a nice ordering. So that was the matrix that I had. And I'm not asking you to check it. You can prove that by induction actually, typically by doing the usual let's-eliminate-some-rows-and-columns type of tricks that you do for matrices. So you basically start from the whole matrix. And then you move onto a matrix that has only a 1 and then 0's here. And then you have a Vandermonde that's just slightly smaller. And then you just iterate. Yeah. AUDIENCE: I feel like there's a loss somewhere-- either the superscript or the subscript should have a k somewhere [INAUDIBLE]. [INAUDIBLE] the one I'm talking about? PHILIPPE RIGOLLET: Yeah, I know, but I don't think the answer to your question is yes. So k is the general index, right? So there's no k. k does not exist. k is just here for me to say, for k equals 1 to r. So this is an r by r matrix. And so there is no k there.
So if you wanted the generic term, if I wanted to put 1 in the middle on the j-th row and k-th column, that would be x-- so the j-th row would be x sub k to the power of j. That would be the-- And so now, this is basically the sum-- well, that should not be strictly-- So that would be for j and k between 1 and r. So this is the formula that you get when you try to expand this Vandermonde determinant. You have to do it only once, when you're a sophomore typically. And then you can just go on Wikipedia to do it. That's what I did. I actually made a mistake copying it. The first one should be 1 less than or equal to j. And the last one should be k less than or equal to r. And now what you have is the product of the differences of xj and xk. And for this thing to be non-zero, you need all the terms to be non-zero. And for all the terms to be non-zero, you need to have no xj and xk that are identical. If all those are different numbers, then this product is going to be different from 0. And those are different numbers, because those are the r possible values that your random variable takes. You're not going to say that it takes two with probability 1.5-- sorry, two with probability 0.5 and two with probability 0.25. You're going to say it takes two with probability 0.75 directly. So those xj's are different. These are the different values that your random variable can take. Remember, xj, xk-- sorry, x1 to xr-- were the different values that your random variable can take. Nobody in their right mind would write the same value twice in this list. So my Vandermonde is non-zero. So I can invert it. And I have a one-to-one correspondence between my entire PMF and the first r minus 1 moments, to which I append the number 1, which is really the moment of order 0 again. It's E of X to the 0-th, which is 1. So good news, I only need r minus 1 moments to describe my r minus 1 parameters. And I can choose either the values of my PMF, or I can choose the first r minus 1 moments. So the moments tell me something. Here, it tells me that if I have a discrete distribution with r possible values, I only need to compute r minus 1 moments. So this is better than the Weierstrass Approximation Theorem. This tells me exactly how many moments I need to consider. And this is for any distribution. This is not a distribution that's parametrized by one parameter, like the Poisson or the binomial or all this stuff. This is for any distribution on a finite number of values. So hopefully, if I reduce the family of PMFs that I'm looking at to a one-parameter family, I'm actually going to need to compute far fewer than r minus 1 moments. But this is actually hopeful. It tells you that the method of moments is going to work for any distribution. You just have to invert a Vandermonde matrix.
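(Here is that inversion as a concrete sketch, with toy numbers of my own: a PMF on r = 4 known support points is recovered exactly from 1 and its first r - 1 moments by solving the Vandermonde system.)

```python
import numpy as np

# Sketch: with the support {x_1,...,x_r} known, the map from the PMF
# (p_1,...,p_r) to (1, m_1,...,m_{r-1}) is linear with a Vandermonde
# matrix, so we can invert it to recover the PMF from the moments.
support = np.array([1.0, 2.0, 3.0, 4.0])   # r = 4 known values
p_true = np.array([0.1, 0.2, 0.3, 0.4])

# Row j of V is (x_1^j, ..., x_r^j) for j = 0,...,r-1.
V = np.vander(support, increasing=True).T  # 4 x 4 Vandermonde matrix
m = V @ p_true                             # (1, m_1, m_2, m_3)

p_recovered = np.linalg.solve(V, m)        # invert the linear system
print(p_recovered)                         # [0.1 0.2 0.3 0.4]
```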
Typically, there is actually-- we will see that the method of moments just says, if you have a d dimensional parameter, just compute d moments, and that's it. But this is only on a case-by-case basis. I mean, maybe your model totally screws up its parametrization and you actually need to go get more of them. I mean, think about it, if the function is parameterized just by its 27th moment-- like, that's the only thing that matters in this distribution, I just describe the function, it's just a density, and the only thing that can change from one distribution to another is this 27th moment-- well, then you're going to have to go get the 27th moment. And that probably means that your modeling step was actually pretty bad. So the rule of thumb: if theta is in Rd, we need d moments. So what is the method of moments? That's just the good old trick. Replace expectations by averages. That's the beauty. The moments are expectations. So let's just replace the expectations by averages and then work with the average version, as if it was the true one. So for example, I'm going to talk about population moments when I'm computing them with the true distribution, and I'm going to talk about empirical moments when I compute them with averages. So those are the two quantities that I have. And now, what I hope is that there's a way back. This is basically where everything is-- that's where all the money is. I'm going to assume there's a function psi that maps my parameters-- let's say they're in Rd-- to the set of the first d moments. Well, what I want to do is to come from this guy back to theta. So it had better be that this function is-- invertible. I want this function to be invertible. In the Vandermonde case, this function was just a linear function-- multiply a matrix by theta. And inverting a linear function is inverting the matrix. So this is the same thing. So now what I'm going to assume-- and that's key for this method to work-- is that this function psi is one to one. There's only one theta that gives any one set of moments. And so if it's one to one, I can talk about its inverse. And so now, I'm going to be able to define theta as the inverse of the moments-- the reciprocal of the moments. And so now, what I get is that the moments estimator is just the thing where, rather than taking the true guys in there, I'm actually going to take the empirical moments in there. Before we go any further, I'd like to just go back and tell you that this is not completely free. How well-behaved your function psi is is going to play a huge role. Can somebody tell me what the typical distance-- if I have a sample of size n, what is the typical distance between an average and the expectation? What is the typical distance? What is the order of magnitude as a function of n between Xn bar and its expectation? AUDIENCE: 1 over square root of n. PHILIPPE RIGOLLET: 1 over square root of n. That's what the central limit theorem tells us, right? The central limit theorem tells us that those things are basically a Gaussian, which is of order 1 divided by square root of n. And so basically, I start with something which is 1 over square root of n away from the true thing. Now, if my function psi inverse is super steep like this-- that's psi inverse-- then just small fluctuations, even if they're of order 1 over square root of n, can translate into giant fluctuations in the y-axis. And that's going to be controlled by how steep psi inverse is, which is the same as saying how flat psi is-- how flat is psi.
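(The recipe in its simplest one-dimensional form, as a sketch with a model I picked, not the lecture's: for Exponential(lambda), psi(lambda) = E[X] = 1/lambda is one-to-one on lambda > 0, so psi inverse of m is 1/m and the moments estimator is 1/Xbar. For this particular family it happens to coincide with the MLE.)

```python
import numpy as np

# Sketch: method of moments for Exponential(lam).
# psi(lam) = E[X] = 1/lam is one-to-one, so psi^{-1}(m) = 1/m.
rng = np.random.default_rng(3)
lam_star, n = 2.0, 100_000
x = rng.exponential(scale=1 / lam_star, size=n)

m1_hat = x.mean()        # empirical first moment
lam_hat = 1.0 / m1_hat   # psi^{-1} applied to the empirical moment
print(lam_hat)           # ~ 2.0
```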
So if you go back to this Vandermonde inverse, what it's telling you is that if this inverse matrix blows up this guy a lot-- so if I start from a small fluctuation of this thing and then it's blown up by applying the inverse of this matrix, things are not going to go well. Does anybody know what number I should be looking for? So that's from, say, numerical linear algebra, numerical methods. When I have a system of linear equations, what is the actual number I should be looking at to know how much I'm blowing up the fluctuations? Yeah. AUDIENCE: Condition number? PHILIPPE RIGOLLET: The condition number, right. So what's important here is the condition number of this matrix. If the condition number of this matrix is small, then it's good. It's not going to blow up much. But if the condition number is very large, it's just going to blow up a lot. And the condition number is the ratio of the largest and the smallest eigenvalues. So you'll have to know what it is. But this is how all these things get together. So the numerical stability translates into statistical stability here. And numerical means just: if I had errors in measuring the right-hand side, how much would they translate into errors on the left-hand side. So the error here is intrinsic to statistical questions. So that's my estimator, provided that it exists. And I said it's one to one, so it should exist, if I assume that psi is invertible. So how good is this guy? That's going to be definitely our question-- how good is this thing. And as I said, chances are that if psi is really steep-- if psi inverse is very steep, it should not be very good, which means that it's-- well, let's just leave it at that. So that means that I should probably see the derivative of psi showing up somewhere. If the derivative of psi inverse, say, is very large, then I should actually have a larger variance in my estimator. So hopefully, just like we had a theorem that told us that the Fisher information was key in the variance of the maximum likelihood estimator, we should have a theorem that tells us that the derivative of psi inverse is going to have a key role in the method of moments. So let's do it. So I'm going to talk to you about matrices. So now, I have-- So since I have to manipulate d numbers at any given time, I'm just going to concatenate them into a vector. So I'm going to call capital M of theta-- so that's basically the population moment vector. And I have M hat, which is just m hat 1 to m hat d. And that's my empirical moment vector. And what's going to play a role is the variance-covariance of the following random vector. So I have this vector 1-- do I have 1? No, I don't have 1. So that's a d dimensional vector. And here, I take the successive powers. Remember, that looks very much like a column of my Vandermonde matrix. So now, I have this random vector. It's just the successive powers of some random variable X. And the variance-covariance matrix is the expectation-- so sigma of theta. The subscript theta just means I'm going to take expectations with respect to theta. That's the expectation with respect to theta of this guy times this guy transpose, minus the same thing but with the expectations inside. Why do I write X, X squared? I have the expectation of the vector (X, X squared, up to X to the d) times its transpose, minus the expectation of (X, X squared, up to X to the d) times the transpose of that same expectation. Everybody sees what this is? So this is a matrix where if I look at the ij-th term of this matrix-- or let's say, jk-th term, so on row j and column k, I have sigma jk of theta.
And it's simply the expectation of X to the j plus k-- well, X to the j times X to the k-- minus the expectation of X to the j times the expectation of X to the k. So I can write this as m j plus k of theta minus mj of theta times mk of theta. So that's my covariance matrix of this particular vector that I defined. And now, I'm going to assume that psi inverse-- well, if I want to talk about the slope in an analytic fashion, I have to assume that psi is differentiable. And I will talk about the gradient of psi, which, if it's one dimensional, is just the derivative. And here, that's where notation becomes annoying. So now I have a vector. But it's a vector of functions, and I want to compute those functions at a particular value. And the value I'm actually interested in is M of theta. So psi inverse goes from the set of moments to the set of parameters. So when I look at the gradient of this guy, it should be a function that takes moments as inputs. And where do I want this function to be evaluated? At the true moment-- at the population moment vector. Just like when I computed my Fisher information, I was computing it at the true parameter. So now, once I compute this guy-- so now, why is this a d by d gradient matrix? So I have a gradient vector when I have a function from Rd to R. This is the partial derivatives. But now, I have a function from Rd to Rd. So I have to take the derivative with respect to each arrival coordinate and each departure coordinate. And so that's the gradient matrix. And now, I have the following properties. The first one is that the law of large numbers tells me that theta hat is a weakly or strongly consistent estimator. So either I use the strong law of large numbers or the weak law of large numbers, and I get strong or weak consistency. So what does that mean? Why is that true? Well, because I really have a function here-- so what is my estimator? Theta hat is psi inverse of m hat 1 to m hat d. Now, by the law of large numbers-- let's look only at the weak one-- the law of large numbers tells me that each of the mj hats is going to converge in probability, as n goes to infinity, to the-- so the empirical moments converge to the population moments. That's what the good old trick is using, the fact that the empirical moments are close to the true moments as n becomes larger. And that's because, well, just because the m hat j's are averages, and the law of large numbers works for averages. Plus, I have the continuous mapping theorem-- psi inverse is continuously differentiable, so it's definitely continuous. And so what I have is that psi inverse of m hat 1 to m hat d converges to psi inverse of m1 to md, which is equal to theta star. By definition, we assumed that that was the unique one that was actually doing this. Again, this is a very strong assumption. I mean, it's basically saying, if the method of moments works, it works. So the fact that psi is one to one is really the key to making this guy work. And then I also have a central limit theorem. And the central limit theorem is basically telling me that M hat is converging to M even in the multivariate sense. So if I look at the vector M hat and the true vector M, I look at the difference, scaled by square root of n. It goes to some Gaussian. And usually, we would see-- if it was one dimensional, we would see the variance. Here we see the variance-covariance matrix.
Who has never seen the-- well, nobody answers this question. Who has already seen the multivariate central limit theorem? Who has never seen the multivariate central limit theorem? So the multivariate central limit theorem is basically just the slight extension of the univariate one. It just says that if I want to think-- so the univariate one would tell me something like this-- a normal with mean 0, and then I would have basically the variance of X to the j-th. So that's what the central limit theorem tells me. This is an average. So this is just for averages. The central limit theorem tells me this. Just think of X to the j-th as being y. And that would be true. Everybody agrees with me? So now, this is actually telling me what's happening for all these guys individually. But what happens when those guys start to correlate together? I'd like to know if they actually correlate the same way asymptotically. And so if I actually looked at the covariance matrix of this vector-- so now, I need to look at a matrix which is d by d-- what would those univariate central limit theorems tell me-- so let me write it like this, V double bar. So that's just the covariance matrix. This notation, V double bar, is the variance-covariance matrix. So what this thing tells me-- so I know this thing is a matrix, d by d. Those univariate central limit theorems only give me information about the diagonal terms. But here, I have no idea what the rest of the covariance matrix is. This guy is telling me, for example, that this thing is like the variance of X to the j-th. But what if I want to find off-diagonal elements of this matrix? Well, I need to use a multivariate central limit theorem. And really what it's telling me is that you can actually replace this guy here-- so that goes in distribution to some normal, mean 0, again. And now, what I have is just sigma of theta, which is just the covariance matrix of this vector X, X squared, X cubed, X to the fourth, all the way to X to the d. And that's it. So that's a multivariate Gaussian. Who has never seen a multivariate Gaussian? Please, just go on Wikipedia or something. There's not much to know about it. But I don't have time to redo probability here. So we're going to have to live with it. Now, to be fair, if your goal is not to become a statistical savant, we will stick to univariate questions in the scope of homework and exams. So now, what was the delta method telling me? It was telling me that if I had a central limit theorem that told me that theta hat was going to theta, or square root of n times theta hat minus theta was going to some Gaussian, then I could look at square root of n times g of theta hat minus g of theta. And this thing was also going to a Gaussian. But what had to show up is the square of the derivative of g in the variance. So the delta method, it was just a way to go from square root of n times (theta hat n minus theta) goes to some N, say (0, sigma squared), to-- so the delta method was telling me that square root of n times (g of theta hat n minus g of theta) was going in distribution to N(0, sigma squared times g prime of theta squared). That was the delta method. Now, here, we have a function of those guys. The central limit theorem, even the multivariate one, is only guaranteeing something for me regarding the moments. But now, I need to map the moments back into some theta, so I have a function of the moments. And there is something called the multivariate delta method, where derivatives are replaced by gradients. Like, they always are in multivariate calculus.
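(Before the multivariate version, a sketch of the univariate statement with my own choice of g and distribution: g(x) = x squared, so the limiting variance should be sigma squared times g prime of mu squared, which is sigma squared times (2 mu) squared.)

```python
import numpy as np

# Sketch: delta method check.  Xbar satisfies a CLT with variance sigma^2,
# and g(x) = x^2, so sqrt(n)(g(Xbar) - g(mu)) ~ N(0, sigma^2 * (2*mu)^2).
rng = np.random.default_rng(4)
mu, sigma, n, reps = 1.5, 0.7, 500, 10_000
x = rng.normal(mu, sigma, size=(reps, n))
z = np.sqrt(n) * (x.mean(axis=1)**2 - mu**2)

print(z.var())                 # ~ 4.41
print(sigma**2 * (2 * mu)**2)  # 0.49 * 9 = 4.41
```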
And rather than multiplying-- since things do not commute, rather than choosing which side I want to put the square on, I'm actually just going to take half of the square on one side and the other half of the square on the other side. So the way you should view this, you should think of sigma squared times g prime squared as being g prime of theta times sigma squared times g prime of theta. And now, this is completely symmetric. And the multivariate delta method is basically telling you that you get the gradient here. So you start from something that's like that over there, a sigma-- so that's my sigma squared, think of sigma squared. And then I premultiply by the gradient and postmultiply by the gradient. The first one is transposed. The second one is not. But that's a very straightforward extension. You don't even have to understand it. Just think of what would be the natural generalization. Here, by the way, I wrote explicitly what the gradient of a multivariate function is. So that's a function that goes from Rd to Rk. So now, the gradient is a d by k matrix. And so now, for this guy, we can do it for the method of moments. And we can see that basically we're going to have this scaling that depends on the gradient of the reciprocal of psi, which is normal. Because if psi inverse is super steep, then the gradient is going to be huge, which is going to translate into having a huge variance for the method of moments. So this is actually the end. I would like to encourage you-- and we'll probably do it on Thursday just to start. But I encourage you to do it in one dimension, so that you know how to use the method of moments, you know how to do a bunch of things. Do it in one dimension and see how you can check those things. So just as a quick comparison, in terms of the quadratic risk, the maximum likelihood estimator is typically more accurate than the method of moments. What is pretty good to do, when you have a non-concave likelihood function, what people like to do is to start with the method of moments as an initialization and then run some algorithm that optimizes the likelihood locally starting from this point, because it's actually likely to be closer. And then the MLE is going to improve it a little bit by pushing the likelihood a little higher (there's a short sketch of this recipe below). So of course, the maximum likelihood is sometimes intractable. Whereas computing moments is fairly doable. If the likelihood is concave, as I said, we can use optimization algorithms, such as interior-point methods or gradient descent, I guess, to maximize it. And if the likelihood is non-concave, we only have local heuristics. And that's what I meant-- you have only local maxima. And one trick you can do-- so your likelihood looks like this, and it might be the case that if you have a lot of those peaks, you basically have to start your algorithm in each of those peaks. But the method of moments can actually start you in the right peak, and then you just move up by doing some local algorithm for maximum likelihood. So that's not key. But that's just if you want to think algorithmically about how I would end up doing this and how I can combine the two. So I'll see you on Thursday. Thank you.
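(The initialization recipe just described, sketched on a model where the MLE genuinely has no closed form; the Gamma example and the use of scipy's optimizer are my choices, not the lecture's.)

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

# Sketch: method of moments as an initializer, then a local optimizer on
# the log-likelihood.  For Gamma(a, scale=s): E[X] = a*s, Var[X] = a*s^2,
# so the moments estimator is a = m^2/v, s = v/m.
rng = np.random.default_rng(5)
a_star, s_star = 3.0, 2.0
x = rng.gamma(a_star, s_star, size=10_000)

m, v = x.mean(), x.var()
init = np.array([m**2 / v, v / m])  # method-of-moments starting point

def neg_log_lik(p):
    # negative log-likelihood of the sample under Gamma(p[0], scale=p[1])
    return -gamma.logpdf(x, a=p[0], scale=p[1]).sum()

res = minimize(neg_log_lik, init, bounds=[(1e-8, None), (1e-8, None)])
print(init)   # already close to (3, 2)
print(res.x)  # the (local) MLE, refined from the moments initializer
```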
MIT 18.650 Statistics for Applications, Fall 2016. Lecture 22: Generalized Linear Models (continued).
PHILIPPE RIGOLLET: We're talking about generalized linear models. And in generalized linear models, we generalize linear models in two ways. The first one is to allow for a different distribution for the response variables. And the distributions that we wanted were in the exponential family. And this is a family that can be defined over random variables that take values in Rq in general, with parameters in Rk. But we're going to focus on a very specific case, when y is a real valued response variable, which is the one you're used to when you're doing linear regression. And the parameter theta also lives in R. And so we're going to talk about the canonical case. So that's the canonical exponential family, where you have a density, f theta of y, which is of the form: the exponential of-- and then we have y, which interacts with theta only by taking a product, minus a term b of theta that depends only on theta, the whole thing divided by some dispersion parameter phi, plus some normalization factor. Let's call it c of y, phi. So it really should not matter too much-- it's c of y, phi, and that's really just the normalization factor. And here, we're going to assume that phi is known. I have no idea what I write. I don't know if you guys can read. I don't know what chalk has been used today, but I just can't see it. That's not my fault. All right, so we're going to assume that phi is known. And so we saw that several distributions that we know well, including the Gaussian for example, belong to this family. And there's other ones, such as Poisson and Bernoulli. So if the PMF has this form, if you have a discrete random variable, this is also valid. And the reason why we introduced this family is because there are going to be some properties that we know-- this thing here, this function b of theta, is essentially what completely characterizes your distribution. So if phi is fixed, we know what form the interaction takes. And this really just comes from the fact that we want the function to integrate to one. So this b here in the canonical form encodes everything we want to know. If I tell you what b of theta is-- and of course, I tell you what phi is, but let's say for a second that phi is equal to one-- if I tell you this b of theta, you know exactly what distribution I'm talking about. So it should encode everything that's specific to this distribution, such as mean, variance, all the moments that you would want. And we'll see how we can compute from this thing the mean and the variance, for example. So today, we're going to talk about likelihood, and we're going to start with the likelihood function, or the log likelihood, for one observation. From this, we're going to do some computations, and then, we'll move on to the actual log likelihood based on n independent observations. And here, as we will see, the observations are not going to be identically distributed, because, conditionally on x, theta is going to be a different function of x for each of the observations. So remember, the log likelihood-- and this is for one observation-- is just the log of the density, right?
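(A quick numerical sanity check of the canonical form, a sketch using the Poisson case mentioned above: with theta = log(lambda), phi = 1, b(theta) = e^theta and c(y, phi) = -log(y!), the canonical expression reproduces the usual PMF.)

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import gammaln

# Sketch: Poisson(lam) written in canonical exponential family form,
# exp(y*theta - b(theta) + c(y)), with theta = log(lam), b(theta) = e^theta.
lam = 3.5
theta = np.log(lam)
y = np.arange(0, 15)

canonical = np.exp(y * theta - np.exp(theta) - gammaln(y + 1))
print(np.allclose(canonical, poisson.pmf(y, lam)))  # True
```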
And we have this identity that I mentioned at the end of the class on Tuesday. And this identity is just that the expectation of the derivative of this guy with respect to theta is equal to 0. So let's see why. So if I take the derivative with respect to theta of log f theta of x, what I get is the derivative with respect to theta of f theta of x, divided by f theta of x. Now, if I take the expectation of this guy, with respect to this theta as well-- well, what is the expectation? It's just the integral against f theta. Or if I'm in a discrete case, I just have the sum against f theta, if it's a PMF. Just the definition: the expectation of-- well, let's say of h of x-- is either the integral of h of x times f theta of x, if x is continuous, or the sum of h of x times f theta of x, if x is discrete. This guy is the same thing, right? So I'm just going to illustrate the case when it's continuous. So this is what? Well, this is the integral of the partial derivative with respect to theta of f theta of x, divided by f theta of x, all times f theta of x, dx. And now, this f theta cancels, so I'm actually left with the integral of the derivative, which I'm going to write as the derivative of the integral. But f theta being a density for any value of theta that I can take, this integral, as a function of theta, is constantly equal to 1. For any theta that I take, it takes value 1. So this is constantly equal to 1. I put three bars to say that for any value of theta, this is 1, which actually tells me that the derivative is equal to 0. OK, yes? AUDIENCE: What is the first [INAUDIBLE] that you wrote on the board? PHILIPPE RIGOLLET: That's just the definition of the derivative of the log of a function. AUDIENCE: OK. PHILIPPE RIGOLLET: The derivative of log of f is f prime over f. That's the log, yeah. Just by differentiation. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: I'm sorry. AUDIENCE: When you write a squiggle that starts with an l, I assume it's lambda. PHILIPPE RIGOLLET: And you do well, because that's probably how my mind processes it. And so I'm like, yeah, l. Here, l is enough information. OK, everybody is good with this? So that was convenient. So it just said that the expectation of the derivative of the log likelihood is equal to 0. That's going to be our first identity. Let's move on to the second identity, using exactly the same trick, which is: let's hope that at some point, we have the integral of this function that's constantly equal to 1 as a function of theta, and then use the fact that its derivative is equal to 0. So if I start taking the second derivative of the log of f theta, what is this? Well, it's the derivative of this guy here, so I'm going to go straight to it. So it's the second derivative of f theta of x, times f theta of x, minus the first derivative of f theta of x, times the first derivative of f theta of x. Here is some super important stuff-- no, I'm kidding. So you can still see that guy over there? So it's just the square. And then, I divide by f theta of x squared. So here I have the second derivative, times f itself. And here, I have the product of the first derivative with itself. So that's the square. So now, I'm going to integrate this guy. So if I take the expectation of this thing here, what I get is the integral. So here, the only thing that's going to happen when I take my integral is that one of the f thetas in the denominator is going to cancel against f theta, right?
So I'm going to get the second derivative, minus the first derivative squared divided by f theta. Now, one of these guys here-- sorry, why do I have-- so I have this f theta here. So one power of f theta is going to cancel. So this is what this is equal to-- the integral of the second derivative of f theta of x, because those two guys cancel-- minus the integral of the first derivative squared, divided by f theta. And this is telling me what? Yeah, I'm losing one, because I have some weird sequences. Thank you. OK, this is still positive. I want to say that this thing is actually equal to 0. But then, it gives me some weird things-- it would give me an integral of a positive function which is equal to 0, which would mean that this function is equal to 0, which sounds a little bad. It would basically tell me that this function, f theta, is linear. So I went a little too far, I believe, because I only want to prove something about the expectation of the second derivative-- yes, so I want to pull this out. So let's see, if I keep rolling with this, I'm going to get-- well, no, because the fact that it's divided by f theta means that, indeed, the derivative would have to be equal to 0. So I cannot do this here. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: OK, but let's write it like this. You're right, so this is what? This is the expectation of the partial with respect to theta of f theta of x, divided by f theta of x, squared. And this is exactly the derivative of the log, right? So indeed, this thing is equal to the expectation with respect to theta of the square of the partial of log f theta with respect to theta. All right, so this is one of the guys that I want. And this is actually equal, so this will be equal to the expectation-- AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Oh, right, so this first term should be equal to 0. This one was not 0. You're absolutely right. So at some point, I got confused, because I thought putting this equal to 0 would mean that this is 0. But this thing is not equal to 0. So for this first term, you're right-- I take the same trick as before, and it is actually equal to 0, which means that now I have what's on the left-hand side, which is equal to what's on the right-hand side. And if I recap, I get that the expectation under theta of the second derivative of the log of f theta is equal to minus-- because I had a minus sign here-- the expectation with respect to theta of the square of the partial of log f theta with respect to theta. Thank you for being on watch when I'm falling apart. All right, so this is exactly what you have here, except that both terms have been put on the same side. All right, so those things are going to be useful to us, so maybe we should write them somewhere here. And then, we have that the expectation of the second derivative of the log is equal to minus the expectation of the square of the first derivative. And this is, indeed, my Fisher information. This is just telling me what the second derivative of my log likelihood at theta is, right? So everything is with respect to theta when I take these expectations.
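(A quick editorial check of these two identities by Monte Carlo, under the assumption of an N(theta, 1) model, where the score is y minus theta and the second derivative of the log density is the constant minus 1. Not part of the lecture.)

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.7
y = rng.normal(theta, 1.0, size=1_000_000)

score = y - theta            # d/dtheta log f_theta(y) for N(theta, 1)
hessian = -np.ones_like(y)   # d^2/dtheta^2 log f_theta(y), constant here

print(score.mean())                        # ~ 0   (first identity)
print(hessian.mean(), -(score**2).mean())  # both ~ -1 (second identity)
```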
And so it tells me that the expectation of the second derivative-- first of all, what it's telling me is that the log likelihood is concave in expectation, because this second derivative, which is also the second derivative of the KL divergence, is actually minus something which must be non-negative. And so it's telling me that it's concave here at this [INAUDIBLE]. And in particular, it's also telling me that the Fisher information has to be strictly positive, unless the derivative of f with respect to theta is equal to 0-- unless f does not change with theta. All right, do you have a question? So now, let's use this. So what does my log likelihood look like when I actually compute it for this canonical exponential family? We have this exponential function, so taking the log should make my life much easier, and indeed, it does. So if I look at the canonical case, what I have is that the log of f theta is equal simply to y theta minus b of theta, divided by phi, plus this function that does not depend on theta. So let's see what this tells me. Let's just plug those identities in there. I can take the derivative of the right-hand side and just say that, in expectation, it's equal to 0. So if I start looking at the derivative, this is equal to what? Well, here I'm going to pick up only y. Sorry, this is a function of y. I was talking about likelihood, so I actually need to put the random variable here. So I get y minus the derivative of b of theta. Since it's only a function of theta, I'm just going to write b prime, is that OK-- rather than having the partial with respect to theta. Then, this is a constant. This does not depend on theta, so it goes away. So if I start taking the expectation of this guy, I get the expectation of this guy, which is the expectation of y, minus-- well, this does not depend on y, so it's just itself-- b prime of theta. And the whole thing is divided by phi. But from my first identity over there, I know that this thing is actually equal to 0. We just proved that. So in particular, since phi is non-zero-- and phi is not infinity-- it means that this guy must be equal to this guy. And so that implies that the expectation with respect to theta of y is equal to b prime of theta. I'm sorry, you're not registered in this class. I'm going to have to ask you to leave. I'm not kidding. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: You are? I've never seen you here. I saw you for the first lecture. OK. All right, so E theta of y is equal to b prime of theta. Everybody agrees with that? So this is actually nice, because if I give you an exponential family, the only thing I really need to tell you is what b of theta is. And if I give you b of theta, then computing a derivative is actually much easier than having to integrate y against the density itself. You could really have fun and try to compute this, which you would be able to do, right? You would integrate y times the exponential of y theta minus b of theta over phi, plus c of y phi, blah, blah, blah-- dy. And that's the way you would actually compute this thing. That would be painful. I don't know what this normalization looks like, so I would have to also make that explicit, so I could actually compute this thing. And you know, just the same way, if you want to compute the expectation of a Gaussian-- well, the expectation of a Gaussian is not the most difficult one. But even if you compute the expectation of a Poisson, you have to work a little bit. There's a few things that you have to work through.
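(Another small sketch of ours: numerically confirming E[Y] = b'(theta) for the Gaussian written in canonical form, where theta = mu, phi = sigma squared, b(theta) = theta squared over 2, so b'(theta) = theta.)

```python
import numpy as np

rng = np.random.default_rng(1)
theta, phi = -0.4, 1.5
y = rng.normal(theta, np.sqrt(phi), size=500_000)

b = lambda t: t**2 / 2
eps = 1e-5
b_prime = (b(theta + eps) - b(theta - eps)) / (2 * eps)  # numerical derivative of b
print(y.mean(), b_prime)  # both ~ -0.4: the empirical mean matches b'(theta)
```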
Here, I'm just telling you: all you have to know is what b of theta is, and then, you can just take the derivative. Let's see what the second identity is going to give us. OK, so what is the second identity? It's telling me that if I look at the second derivative, and then I take its expectation, I'm going to have something which is equal to negative this guy squared. Sorry, that was the log, right? We've already computed the first derivative of the log likelihood. It's just the expectation of the square of this thing here. So the expectation of the square of the derivative of log f theta with respect to theta-- this is equal to the expectation of the square of y, minus b theta, divided by phi squared-- AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, that's b prime. There's a derivative here. So now, this is what? This is simply-- anyone? I'm sorry? The variance of y, but you're scaling by phi squared. OK, so this is the negative of the right-hand side of our identity. And now, I just have to take one more derivative for the other side. So if I look at the left-hand side now, I have the second derivative of log of f theta of y with respect to theta. So this thing is equal to-- well, I'm not left with much. The y part is going to go away, and I'm left only with minus the second derivative of b of theta, divided by phi. So if I take the expectation-- well, it just doesn't change. This is deterministic. So now, what I've established is that this guy is equal to negative this guy. So for those two things, the signs are going to go away. And so this implies that the variance of y is equal to b prime prime of theta-- and then, I have a phi squared in the denominator, and it cancels against only one of the phis, so it's times phi. So now, I have that the second derivative of b-- since I know phi-- is completely determining the variance. So basically, that's why b is called the cumulant generating function. It's not generating moments, but cumulants. But cumulants, in this case, correspond, basically, to the moments, at least for the first two. If I start going farther, I'm going to have more combinations of the expectations of y cubed, y squared, and y itself. But as we know, those are the ones that are usually the most useful, at least if we're interested in asymptotic performance. The central limit theorem tells us that all that matters are the first two moments, and then, for the rest it just says, well, it doesn't matter. It's all going to [INAUDIBLE] anyway. So let's go to a Poisson, for example. So if I have a Poisson distribution-- so this is a discrete distribution. And what I know is that f-- let me call mu the parameter of y. So it's mu to the y, divided by y factorial, exponential minus mu. OK, so mu is usually called lambda, and y is usually called x; that's why it takes me a little bit of time. Usually it's lambda to the x over factorial x, exponential minus lambda. Since this is just the series expansion of the exponential when I sum those things from 0 to infinity, this thing sums to 1.
But then, if I wanted to start understanding what the expectation of this thing is-- so if I wanted to compute the expectation with respect to mu of y, then I would have to compute the sum from k equals 0 to infinity of k, times mu to the k, over factorial of k, exponential minus mu-- which means that I would, essentially, have to take the derivative of my series in the end. So I can do this. This is a standard exercise. You've probably done it when you took probability. But let's see if we can actually just read it off from the first derivative of b. So to do that, we need to write this in the form of an exponential, where there is one parameter that captures mu, that interacts with y just through this parameter times y, and then something that depends only on y, and then something that depends only on mu. That's the important one. That's going to be our b. And then, there's going to be something that depends only on y. So let's write this and check that this f mu, indeed, belongs to this canonical exponential family. So I definitely have an exponential that comes from this guy. So I have minus mu. And then, this thing is going to give me what? It's going to give me plus y log mu. And then, I'm going to have minus log of y factorial. So clearly, I have a term that depends only on mu, terms that depend only on y, and I have a product of y and something that depends on mu. If I want to be canonical, I must have this to be exactly the parameter theta itself. So I'm going to call this guy theta. So theta is log mu, which means that mu is equal to e to the theta. And so wherever I see mu, I'm going to replace it by e to the theta, because my new parameter now is theta. So this is what? This is equal to the exponential of y times theta. And then, I'm going to have minus e to the theta. And then, who cares-- something that depends only on y. So this is my c of y, and phi is equal to 1 in this case. So that's all I care about. So let's use it. So this is my canonical exponential family. Y interacts with theta exactly like this. And then, I have this function. So this function here must be b of theta. So from this function, exponential of theta, I'm supposed to be able to read what the mean is. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Because since in this course I always know what the dispersion is, I can actually always absorb it into theta, for one. But here, it's really of the form y times something divided by 1, right? If it was like log of mu divided by phi, that would be the question of whether I want to call phi my dispersion, or if I want to just have it in there. This makes no difference in practice. But the real thing is, it's never going to happen that this dispersion is an actual number. If it's an actual numerical number, this just means that this number should be absorbed in the definition of theta. But if it's something that is called sigma, say, and I assume that sigma is known, then it's probably preferable to keep it in the dispersion, so you can see that there's this parameter there that you can, essentially, play with. It doesn't make any difference when you know phi. So now, if I look at the expectation of some y-- so now, I'm going to have y, which follows my Poisson mu. I'm going to look at the expectation, and I know that the expectation is b prime of theta. Agreed? That's what I just erased, I think. Agreed with this, the derivative? So what is this? Well, it's the derivative of e to the theta, which is e to the theta, which is mu.
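(An editorial sketch of this Poisson computation: with theta = log(mu), b(theta) = e to the theta, and phi = 1, the claim b'(theta) = mu is easy to check against simulated data. Variable names are ours.)

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 3.5
theta = np.log(mu)           # canonical parameter of the Poisson
y = rng.poisson(mu, size=500_000)

# b(theta) = exp(theta), so b'(theta) = exp(theta) = mu should match the mean.
print(y.mean(), np.exp(theta))  # both ~ 3.5
```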
So my Poisson is parametrized by its mean. I can also compute the variance, which is equal to minus the second derivative of-- no, it's equal to the second derivative of b, times the dispersion, which is equal to 1. Again, if I put phi elsewhere, I would see it here as well. So if I just absorbed phi here, I would see it divided there, so it would not make any difference. And what is the second derivative of the exponential? Still the exponential-- so it's still equal to mu. So that certainly makes our life easier. Just one quick remark-- here's a question I'm giving you: can the b function ever be equal to log of theta? Who says yes? Who says no? Why? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, so what have we learned from this? It's sort of completely analytic, right? We just took derivatives, and this thing just happened. This thing actually allowed us to relate the second derivative of b to the variance. And one thing that we know about a variance is that it is non-negative. And in particular, here it's always positive. If they give you a canonical exponential family that has zero variance, trust me, you will see it. That means that this thing is not going to look like something that's finite; it's going to have a point mass. It's going to take value infinity at one point. So this will, basically, never happen. This thing is, actually, strictly positive, which means that b is always strictly convex. It means that the second derivative of this function, b, has to be strictly positive, so the function is convex. But log of theta is concave, so that is definitely not working. I need to have something that looks convex when I talk about my b-- theta squared, e to the theta: we'll see a bunch of them. But if you start writing something-- try to think of the plot of b in your mind-- and you find that b looks like it's going to become concave, you've made a sign mistake somewhere. All right, so we've done a pretty big parenthesis to try to characterize what the distribution of y was going to be. We wanted to extend from, say, Gaussian to something else. But when we're doing regression, which is what generalized linear models are for, we are not interested in the distribution of y, but really the conditional distribution of y given x. So I need now to couple those back together. So what I know is that this same mu, in this case, which is the expectation-- what I want to say is that the conditional expectation of y given x-- this is some mu of x. When we did linear models, we said, well, this thing was some x transpose beta. And the whole premise of this chapter is to say, well, this might make no sense, because x transpose beta can take the entire range of real values, whereas this mu can take only a partial range. So even if you just focus on the Poisson, for example, we know that the expectation of a Poisson has to be a non-negative number-- actually, a positive number as soon as you have a little bit of variance. It's mu itself-- mu is a positive number. And so it's not going to make any sense to assume that mu of x is equal to x transpose beta, because you might find some x's for which this value ends up being negative. And so we're going to need what we call the link function, which transforms mu and maps it onto the real line, so that you can now express it in the form x transpose beta.
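(A tiny numerical illustration of the convexity point just made-- our sketch, not the lecture's: a valid b must have a strictly positive second derivative, since phi times b''(theta) is a variance; log(theta) fails that test.)

```python
import numpy as np

def second_derivative(b, t, eps=1e-4):
    # Central finite-difference approximation of b''(t).
    return (b(t + eps) - 2 * b(t) + b(t - eps)) / eps**2

candidates = [("theta^2/2", lambda t: t**2 / 2),
              ("exp(theta)", np.exp),
              ("log(1+exp(theta))", lambda t: np.log1p(np.exp(t))),
              ("log(theta)", np.log)]
for name, b in candidates:
    print(name, second_derivative(b, 1.0))  # last one is negative: not a valid b
```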
So we're going to take not this, but we're going to assume that g of mu of x is now equal to x transpose beta, and that's the generalized linear model. So as I said, it's a little weird to transform mu to make it take values on the real line. At least to me, it feels a bit more natural to take x transpose beta and make it fit to the particular distribution that I want. And so I'm going to want to talk about g and g inverse at the same time. So I'm going to actually always start from g. So g is my link function, and I'm going to want g to be continuously differentiable. OK, let's say that it has a derivative, and its derivative is continuous. And I'm going to want g to be strictly increasing. And that actually implies that g inverse exists. Actually, that's not true. What I'm also going to want is that g of mu spans-- how do I say this? So I want g, as I range over all possible values of mu-- whether they're all positive values, or whether they're values that are limited to the interval 0, 1-- I want those to span the entire real line, so that when I want to talk about g inverse, defined over the entire real line, I know where I started. So this implies that g inverse exists. What else does it imply about g inverse? So for a function to be invertible, I only need it to be strictly monotone. I don't need it to be strictly increasing. So in particular, the fact that I picked increasing implies that g inverse is actually increasing. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: That's the image. So this is my link function, and this slide is just telling me I want my function to be invertible, so I can talk about g inverse. I'm going to switch between the two. So what link functions am I going to get? For linear models, we just said there's no link function, which is the same as saying that the link function is the identity, which certainly satisfies all these conditions. It's invertible. It has all these nice properties, but we might as well not talk about it. For Poisson data, when we assume that the conditional distribution of y given x is Poisson, the mu, as I just said, is required to be positive. So I need a g that goes from the interval 0, infinity to the entire real line. I need a function that starts from only positive values and spreads them out over both positive and negative values. And here, for example, I could take the log link. So the log is defined on this entire interval. And as I range from 0 to plus infinity, the log is ranging from negative infinity to plus infinity. You can probably think of other functions that do that, like 2 times log. That's another one. But there's many others you can think of. But let's say the log is one of them that you might want to think about. It is natural in the sense that it's one of the first functions we can think of. We will see, also, that it has another canonical property that makes it a natural choice. The other example is the Bernoulli, where we had an even stronger condition on what mu could be. Mu could only be a number between 0 and 1; that was the probability of success of a coin flip-- the probability of success of a Bernoulli random variable. And now, I need g to map 0, 1 to the entire real line. And so here are a bunch of things that you can come up with. I will soon claim that this one, log of mu divided by 1 minus mu, is the most natural one. But maybe, if you had never thought of this, that might not be the first function you would come up with, right?
You mentioned trigonometric functions, for example, so maybe you can come up with something that comes from hyperbolic trigonometry or something. So what does this function do? Well, we'll see a picture, but this function does map the interval 0, 1 to the entire real line. We also discussed the fact that, thinking reciprocally-- if I want to think about g inverse, I want a function that maps the entire real line into the unit interval. And as we said, if I'm not a very creative statistician or probabilist, I can just pick my favorite continuous, strictly increasing cumulative distribution function, which, as we know, will arise as soon as I have a density that has support on the entire real line. If the density is strictly positive everywhere, then my cumulative distribution function has to be strictly increasing. And of course, it has to go from 0 to 1, because that's just the nature of those things. And so for example, I can take the Gaussian-- that's one such function. But I could also take the double exponential, which looks like an exponential on one end, and then an exponential on the other end. And basically, if you take capital phi, which is the standard Gaussian cumulative distribution function, it does work for you, and you can take its inverse. And in this case-- so the first guy is called logit. And this other guy is called probit. And you see them, usually, every time you have a package on generalized linear models that you're trying to use: you have this choice. And for what's called logistic regression-- so it's funny that it's called logistic regression, but you can actually use the probit link, which in this case is called probit regression. But those things are essentially equivalent, and it's really a matter of taste. Maybe of communities-- some communities might prefer one over the other. We'll see, as I claimed before, that the logit one has a slightly more compelling argument for its reason to exist. I guess for this one, the compelling argument is that it involves the standard Gaussian, which of course is something that should show up everywhere. And then, you can think about crazy stuff. Even crazy gets a name-- complementary log-log, which is the log of minus log of 1 minus mu. Why not? So I guess you can iterate that thing. You can just put a log 1 minus in front of this thing, and it's still going to go-- no, that's not true. I would have to put a minus and take-- no, that's not true. So you can think of whatever you want. So I claimed that the logit link is the natural choice, so here's a picture. I should have actually plotted the other one, so we could actually compare. To be fair, I don't even remember where it would actually fit between those two functions. So the blue one, which is this one-- for those of you who don't see the difference between blue and red, sorry about that-- so the blue one is the logistic one. So this guy is the function that does e to the x, over 1 plus e to the x. As you can see, this is a function that's supposed to map the entire real line into the interval 0, 1. So that's supposed to be the inverse of your link function, and I claimed that this is the inverse of the logit function. And the other one, well, this is the Gaussian CDF, so you know it's clearly the inverse of the probit link, which was itself the inverse of the Gaussian CDF. And that's the red one. That's the one that goes here.
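(An editorial sketch of the three links just named for Bernoulli data-- logit, probit, and complementary log-log-- checking that each maps the interval (0, 1) onto the real line, and that the logistic function inverts the logit. Not from the lecture.)

```python
import numpy as np
from scipy.stats import norm

def logit(mu):
    return np.log(mu / (1 - mu))

def probit(mu):
    return norm.ppf(mu)  # inverse of the standard Gaussian CDF

def cloglog(mu):
    return np.log(-np.log(1 - mu))  # complementary log-log

mus = np.array([0.01, 0.25, 0.5, 0.75, 0.99])
for g in (logit, probit, cloglog):
    print(g.__name__, g(mus))  # values range over large negative to large positive

# The inverse of the logit is the logistic function e^x / (1 + e^x):
x = logit(mus)
print(np.exp(x) / (1 + np.exp(x)))  # recovers mus
```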
I would guess that the complementary log-log is something that's probably going above here, and for which the slope is, actually, even a little flatter as you cross 0. So of course, these are not our link functions. These are the inverses of our link functions. So what do they look like when I actually flip my picture like this? So this is what I see. And so I can see that, in blue, this is my logistic link. So it crosses 0 with a slightly faster rate. Remember, if we could use the identity, that would be very nice for us. We would just want to take the identity. The problem is that if I start having the identity that goes here, it's going to start being a problem. And this is the probit link, the phi inverse that you see here. It's a little flatter. You can compute the derivative at zero of those guys. What is the derivative of the-- so I'm taking the derivative of log of x over 1 minus x. So it's 1 over x, minus 1 over 1 minus x. So if I look at 0.5-- sorry, this is the interval 0, 1, so I'm interested in the slope at 0.5. Yes, it's plus, thank you. So at 0.5, what I get is 2 plus 2. Yeah, so that's the slope that we get. And if you compute the derivative-- what is the derivative of phi inverse? Well, it's 1 divided by little phi of capital phi inverse of x. So at 1/2, I can probably just compute the derivative of capital phi at 0, which is going to be just that, 1 over square root of 2 pi, and then say, well, the slope has to be 1 over that: square root of 2 pi. So that's just a comparison, but again, so far, we do not have any reason to prefer one to the other. And so now, I'm going to start giving you some reasons to prefer one to the other. And actually, for each canonical family, there is something which is called the canonical link. And when you don't have any other reason to choose anything else, why not choose the canonical one? And the canonical link is the one that says, OK, what I want is g to map mu onto the real line. But mu is not the parameter of my canonical family. Here, for example, mu is e to the theta, but the canonical parameter is theta. And the parameter of a canonical exponential family is something that lives on the entire real line. It was defined for all thetas. And so in particular, I can just take theta to be the one that's x transpose beta. And so in particular, I'm just going to try to find the link that does exactly this. So I know that g of mu is going to be equal to x transpose beta. And now, what I'm going to say is, OK, let's just take the g that makes this guy equal to theta, so that it is theta that I actually model as x transpose beta. Feels pretty canonical, right? What else? What other central, easy choice would you take? This was pretty easy. There is a natural parameter for this canonical family, and it takes values on the entire real line. I have a function that maps mu onto the entire real line, so let's just map it to the actual parameter. So now, OK, why do I have this? Well, we've already figured that out. The canonical link function is strictly increasing-- sorry, so I said that now I want this guy-- I want g of mu to be equal to theta, which is equivalent to saying that I want mu to be equal to g inverse of theta. But we know that mu is what? B prime of theta. So that means that b prime is the same function as g inverse.
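(A quick numerical confirmation of the two slopes computed at the board-- our sketch: at mu = 1/2 the logit has slope 1/0.5 + 1/0.5 = 4, and the probit has slope 1 over little phi of 0, which is the square root of 2 pi.)

```python
import numpy as np
from scipy.stats import norm

eps = 1e-6
logit = lambda m: np.log(m / (1 - m))

print((logit(0.5 + eps) - logit(0.5 - eps)) / (2 * eps))        # ~ 4.0
print((norm.ppf(0.5 + eps) - norm.ppf(0.5 - eps)) / (2 * eps))  # ~ 2.5066
print(np.sqrt(2 * np.pi))                                       # ~ 2.5066
```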
And I claimed that this is actually giving me, indeed, a function that has the properties that I want. Before, I said just pick any function that has these properties; now, I'm giving you a very hard rule to pick it, though you still need to check that it satisfies those conditions-- in particular, that it's strictly increasing and invertible. And for this to be strictly increasing and invertible, really what I need is that the inverse is strictly increasing and invertible, which is the case here, because b prime, as we said-- well, b prime is the derivative of a strictly convex function. A strictly convex function has a second derivative that's strictly positive. We just figured that out using the fact that the variance was strictly positive: if phi is strictly positive, then this thing has to be strictly positive. So b prime prime is strictly positive-- and this is the derivative of the function called b prime. If your derivative is strictly positive, you are strictly increasing. And so we know that b prime is, indeed, strictly increasing. And what I also need to check-- well, I guess this is already checked on its own, because b prime is actually mapping all of R onto the possible mean values. When theta ranges over the entire real line, then b prime ranges over the entire interval of mean values that the family can take. And so now, I have this thing that's completely defined: b prime inverse is a valid link. And it's called the canonical link. OK, so again, if I give you an exponential family-- which is another way of saying I give you a convex function b, which gives you some exponential family-- then if you just take b prime inverse, this gives you the associated canonical link for this canonical exponential family. So clearly there's an advantage of doing this, which is that I don't have to actually think about which one to pick if I don't want to think about it. But there are other advantages that come with it, and we'll see that in the computations. There are, basically, going to be some nice cancellations that show up. So before we go there, let's just compute the canonical link for the Bernoulli distribution. So remember, the Bernoulli distribution has a PMF which is part of the canonical exponential family. So the PMF of the Bernoulli is f theta of-- let me just write it like this. So it's p to the y, one minus p to the 1 minus y, which I will write as the exponential of y log p, plus 1 minus y, log 1 minus p. OK, we've done that last time. Now, I'm going to group my terms in y to see how y interacts with this parameter p. And what I'm getting is y times log of p divided by 1 minus p. And then, the only term that remains is log of 1 minus p. Now, I want this to be a canonical exponential family. You can read off that it is part of the exponential family; if I want it to be canonical, this guy must be theta itself. So I have that theta is equal to log of p over 1 minus p. If I invert this thing, it tells me that p is e to the theta, divided by 1 plus e to the theta. It's just inverting this function. In particular, it means that log of 1 minus p is equal to log of 1 minus this thing. The exponential thetas in the numerator go away, and what I get is that log of 1 minus p is equal to minus log of 1 plus e to the theta. So I'm going a bit too fast, but these are very elementary manipulations-- maybe it requires one more line to convince yourself. But just do it in the comfort of your room.
And then, what you have is the exponential of y times theta, and then I have minus log of 1 plus e to the theta. So this is the representation of the PMF of a Bernoulli distribution as a member of the canonical exponential family. And it tells me that b of theta is equal to log of 1 plus e to the theta. That's what I have there. From there, I can compute the expectation, which hopefully is going to give me p as the mean and p times 1 minus p as the variance. Otherwise, that would be weird. So let's just do this. B prime of theta should give me the mean. And indeed, b prime of theta is e to the theta, divided by 1 plus e to the theta, which is exactly this p that I had there. OK, just for fun-- well, I don't know. Maybe that's not part of it. Yeah, let's not compute the second derivative. That's probably going to be on your homework at some point-- if not, on the final. So b prime now-- oh, I erased it, of course. G, the canonical link, is b prime inverse. And I claim that this is going to give me the logit function, log of mu over 1 minus mu. So let's check that. So b prime is this thing, so now I want to find the inverse. Well, I should really write my inverse as a function of p. And I've done it before-- all I have to do is to solve this equation, which I've actually just done-- that's where I was actually coming from. So it's telling me that the solution of this thing is equal to log of p over 1 minus p. We just solved this thing both ways. And this is, indeed, logit of p, by definition of logit. So b prime inverse, this function that seemed to come out of nowhere, is really just the inverse of b prime, which we know is the canonical link. And canonical is the sort of ad hoc choice that we've made by saying, let's just take the link such that g of mu is giving me the actual canonical parameter theta. Yeah? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: You're right. Now, of course, I'm going through all this trouble, but you could see it immediately. I know this is going to be theta. We also have prior knowledge, hopefully, that the expectation of a Bernoulli is p itself. So right at this step, when I said that I'm going to take theta to be this guy, I already knew that the canonical link was the logit-- because I just said, oh, here's theta, and it's just this function of mu [INAUDIBLE]. OK, so you can do that for a bunch of examples, and this is what they're going to give you. The Gaussian case: b of theta-- we've actually computed it once. This is theta squared over 2. So the derivative of this thing is really just theta, which means that g, or g inverse, is actually equal to the identity. And again, sanity check-- in the Gaussian case, there's nothing generalized about generalized linear models: you don't have a link. The Poisson case-- you can actually check. Did we do this, actually? Yes, we did. So that's when we had this e to the theta. B is e to the theta, so b prime is e to the theta, which means that the canonical link is its inverse, which is the log, the inverse of the exponential. And so that's the logarithm link. As I said, I used the word natural; you can also use the word canonical if you want to describe this function as being the right function to map the positive real line to the entire real line. The Bernoulli-- we just did it. So b, the cumulant generating function, is log of 1 plus e to the theta, and the canonical link is log of mu over 1 minus mu. And the gamma distribution-- the thing you're going to see there is minus log of minus [INAUDIBLE].
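(Our quick check of the Bernoulli computation just done: with b(theta) = log(1 + e^theta), the derivative b' is the logistic function, and inverting it recovers the logit-- the canonical link.)

```python
import numpy as np

b = lambda t: np.log1p(np.exp(t))  # b(theta) = log(1 + e^theta)
theta = 0.8
eps = 1e-6

b_prime = (b(theta + eps) - b(theta - eps)) / (2 * eps)  # numerical b'(theta)
p = np.exp(theta) / (1 + np.exp(theta))                  # logistic function
print(b_prime, p)                  # equal: b'(theta) = e^theta / (1 + e^theta)
print(np.log(p / (1 - p)), theta)  # logit(b'(theta)) recovers theta
```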
You see, the reciprocal link is the link that actually shows up, so minus 1 over mu. That's the map. So are there any questions about the canonical links, canonical families? I use the word canonical a lot. But is everything fitting together right now? So we have this canonical exponential family, by assumption. It has a function, b, which contains all the information we want. At the beginning of the lecture, we established that it has information about the mean in the first derivative, about the variance in the second derivative. And it's also giving us a canonical link. So just cherish this b once you've found it, because it's everything you need. Yeah? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: I don't know, a political preference? I don't know, honestly. If I were a serious practitioner, I probably would have a better answer for you. At this point, I just don't. I think it's a matter of practice and actual preferences. You can also try both. We mentioned this idea of cross-validation without going too much into detail. But you could try both and see which one performs best, in terms of prediction, on a yet unseen data set, and just say, I prefer this one of the two. Because this actually comes as part of your modeling assumptions, right? Not only did you decide to model the image of mu through the link function as a linear model, but really your model is saying, well, you have two pieces: [INAUDIBLE], the distribution of y, but you also have the fact that mu is modeled as g inverse of x transpose beta. And for different g's, these are just different modeling assumptions, right? So why should this be linear-- I don't know. My guess, as a person who has not examined the [INAUDIBLE] data sets for both things, would be that the changes are fairly minor. OK, so this was all for one observation. We just, basically, did probability. We described some densities, some properties of the densities, how to compute expectations. That was really just probability. There was no data involved at any point. We did a bit of modeling, but it was all for one observation. What we're going to do now is the reverse engineering of probability that is statistics: given data, what can I infer about my model? Now remember, there are three parameters that are floating around in this model. There is one that is theta. There is one that is mu, and there is one that is beta. OK, so those are the three parameters that are floating around. What we said is that the expectation of y, given x, is mu of x. So if I estimate mu, I know the conditional expectation of y, given x, which definitely gives me theta of x. How do I go from mu of x to theta of x? The inverse of what-- of the arrow? Yeah, sure, but how do I go from this guy to this guy? So theta as a function of mu is? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, so we just computed that mu was b prime of theta. So it means that theta is just b prime inverse of mu. So those two things are the same as far as we're concerned, because we know that b prime is strictly increasing and invertible. So it's just a matter of re-parametrization, and we can switch from one to the other whenever we want. But why do we go through mu? Because so far, for the entire semester, I told you there was one parameter, theta. It does not have to be the mean, and that's the parameter that we care about. It's the one on which we want to do inference.
That's the one for which we're going to compute the Fisher information. This was the parameter that was our object of worship. And now, I'm saying, oh, I'm going to have mu that's coming around. And why do we have mu? Because this is the mu that we use to go to beta. So I can go freely from theta to mu using b prime or b prime inverse. And now, I can go from mu to beta, because I have that g of mu of x is beta transpose x. So in the end, now, this is going to be my object of worship. Beta is going to be the parameter that matters, because once I set beta, I set everything else through this chain. So the question is, if I start stacking up this pile of parameters-- so I start with my beta, which in turn gives me a mu, which in turn gives me a theta-- can I just have a long, streamlined-- what is the outcome when I actually start writing my likelihood, not as a function of theta, not as a function of mu, but as a function of beta, which is the one at the end of the chain? And hopefully, things are going to happen nicely-- and they might not. Yeah? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Is g-- that's my link. G of mu of x-- now, mu is a function of x, because it's conditional on x. So this is really theta of x, mu of x, but g is not a function of x, because it's just something that tells me what the function of x is. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Mu is the conditional expectation of y, given x. It has, actually, a fancy name in the statistics literature. It's called-- does anybody know the name of the function, mu of x, which is the conditional expectation of y, given x? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: That's the regression function. That's the actual definition. If I tell you what the definition of the regression function is, it's just the conditional expectation of y, given x. And I could look at any property of the conditional distribution of y given x. I could look at the conditional 95th percentile. I could look at the conditional median. I could look at the conditional [INAUDIBLE] range. I could look at the conditional variance. But I decided to look at the conditional expectation, which is called the regression function. Yes? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Oh, there's no transpose here. Actually, only Victor-Emmanuel used this prime for transpose, and I found it confusing with the derivatives. So prime here is only a derivative. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Oh, yeah, sorry, beta transpose x. So you said what? I said that g of mu of x is beta transpose x? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Isn't that the same thing? X is a vector here, right? AUDIENCE: Yeah. PHILIPPE RIGOLLET: So x transpose beta and beta transpose x are the same thing. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: So beta looks like this. X looks like this. The product is just a simple number. Yeah, you're right-- when I start to look at matrices, I'm going to have to be slightly more careful when I do this. OK, so let's do the reverse engineering. I'm giving you data. From this data, hopefully, you should be able to get the conditional distribution-- if I give you an infinite amount of data, pairs x, y, you would know exactly what the conditional distribution of y given x is. And in particular, you would know what the conditional expectation of y given x is, which means that you would know mu, which means that you would know theta, which means that you would know beta. Now, when I have a finite number of observations, I'm going to try to estimate mu of x.
But really, I'm going to go the other way around. Because I assume, specifically, that mu of x satisfies g of mu of x equals x transpose beta, that means that I only have to estimate beta, which is a much simpler object than the entire regression function. So that's what I'm going to go for. I'm going to try to represent the likelihood-- the log likelihood-- of my data as a function, not of theta, not of mu, but of beta, and then maximize that guy. So now, rather than thinking of just one observation, I'm going to have a bunch of observations. So this might actually look a little confusing, but let's just make sure that we understand each other before we go any further. So I'm going to have observations x1, y1, all the way to xn, yn, just like in a standard regression problem, except that here my y's might be 0-1 valued. They might be positive valued. They might be exponential. They might be anything in the canonical exponential family. So I have this thing, and now, what I'm going to assume is that the conditional distribution of yi, given xi, is something that has this density. Did I put an i on y-- yeah. I'm not going to deal with the phi and the c now. And the reason I have theta i and not theta is that theta i is really a function of xi. So it's really theta i of xi. But what do I know about theta i of xi? It's actually equal to b-- I did this error twice-- b prime inverse of mu of xi. And I'm going to assume this has a very simple form in terms of xi-- sorry, sorry, sorry, sorry, I should not write it as beta transpose xi directly. That is only when I have the canonical link. So this is actually equal to b prime inverse of g inverse of xi transpose beta-- and with the canonical link, those two things are actually canceling each other. So as before, I'm going to stack everything into some-- well, actually, I'm not going to stack anything for the moment. I'm just going to give you a peek at what's happening next week, rather than just manipulating the data. So here is how we're going to proceed at this point. Now, I want to write my likelihood function, not as a function of theta, but as a function of beta, because that's the parameter over which I'm actually going to maximize. So if I have a link-- so this thing that matters here, I'm going to call h. By definition, theta i is going to be h of xi transpose beta. Helena, you have a question? AUDIENCE: Uh, no [INAUDIBLE] PHILIPPE RIGOLLET: So this is just all the things that we know. Theta is just-- by definition of the fact that the mean mu is b prime of theta-- b prime inverse of mu. And then, mu is modeled from the systematic component: g of mu is xi transpose beta, so mu is g inverse of xi transpose beta. So I want to have b prime inverse of g inverse. This function is a bit annoying to say, so I'm just going to call it h. And the composition of two inverses is the inverse of the composition of those two things in the reverse order-- so h is really the inverse of the composition g of b prime. And now, if I have the canonical link, since I know that g is b prime inverse, this is really just the identity.
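(A small editorial illustration of this last point for the Poisson with its canonical log link: composing b prime inverse with g inverse does give the identity, so theta i is just xi transpose beta. Names are ours.)

```python
import numpy as np

b_prime = np.exp      # for Poisson, b(theta) = e^theta, so b'(theta) = e^theta
b_prime_inv = np.log  # inverse of b'
g = np.log            # canonical link g = (b')^{-1}
g_inv = np.exp        # inverse link

h = lambda t: b_prime_inv(g_inv(t))  # h = (b')^{-1} composed with g^{-1}
ts = np.linspace(-2, 2, 5)
print(h(ts))  # equals ts: with the canonical link, theta_i = x_i^T beta directly
```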
As you can imagine, this entire thing, which is actually quite complicated, just does not show up when I have the canonical link. I really just have that theta i can be replaced by xi transpose beta. So think about going back to this guy here. Now, theta becomes only xi transpose beta. That's going to be much simpler to optimize, because remember, when I take the log likelihood, the exponential is going to go away. I'm going to sum those guys. And so what I'm going to have is something which is essentially linear in beta. And then, I'm going to have this minus b, which is just minus a sum of convex functions of beta. And so I'm going to have to bring in the tools of convex optimization. Now, it's not just going to be: take the gradient, set it to 0. It's going to be more complicated than that. I'm going to have to do it in an iterative fashion. And so that's what I'm telling you: when you look at your log likelihood for all those observations, you sum, the exponential goes away because you had the log, and then you have all these things here. I kept the b. I kept the h. But if h is the identity, this is the linear part, yi times xi transpose beta, minus b of my theta, which is now only xi transpose beta. And that's the function I want to maximize in beta. It's a concave function, so maximizing it is a convex optimization problem. When I know what b is, I have an explicit formula for this, and I want to just bring in some optimization. And that's what we're going to do, and we're going to see three different methods, which are really, basically, the same method. It's just an adaptation or specialization of the so-called Newton-Raphson method, which essentially tells you: do iterative local quadratic approximations to your function-- so a second order [INAUDIBLE] expansion-- optimize this guy, and then do it again from where you land. And we'll see that this can actually be implemented using what's called iteratively re-weighted least squares, which means that every step-- since it's just a quadratic, it's going to be just squares in there-- can actually be solved by using a weighted least squares version of the problem. So I'm going to stop here for today. So we'll continue, and probably not finish this chapter, but finish next week. And then, I think there's only one lecture left. Actually, for the last lecture, what do you guys want to do? Do you want to have doughnuts and cider? Do you want to just have some more outward-looking lecture on what's happening post-1975 in statistics? Do you want to have a review for the final exam-- pragmatic people. AUDIENCE: [INAUDIBLE] interesting, advanced topics. PHILIPPE RIGOLLET: You want to do interesting, advanced-- for the last lecture? AUDIENCE: Something that we haven't thought of yet. PHILIPPE RIGOLLET: Yeah, that's, basically, what I'm asking, right-- interesting advanced topics, versus ask me any question you want. Those questions can be about interesting, advanced topics, though. Like, what are interesting, advanced topics? I'm sorry? AUDIENCE: Interesting with doughnuts-- is that OK? PHILIPPE RIGOLLET: Yeah, we can always do the doughnuts. [LAUGHTER] AUDIENCE: As long as there are doughnuts. PHILIPPE RIGOLLET: All right, so we'll do that. So you guys have a good weekend.
MIT_18650_Statistics_for_Applications_Fall_2016
12_Testing_Goodness_of_Fit_cont.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: We're talking about goodness-of-fit tests. Goodness-of-fit tests ask: does my data come from a particular distribution? And why would we want to know this? Well, maybe we're interested in, for example, knowing if the zodiac signs of the Fortune 500 CEOs are uniformly distributed. Or maybe we actually have slightly deeper endeavors, such as understanding if we can actually apply the t-test, by testing normality of our sample. All right? So we saw the main standard test for this. It's called the Kolmogorov-Smirnov test, and people use it quite a bit. It's probably one of the most used tests out there. And there are other versions of it that I mentioned in passing. There's the Cramer-von Mises, and there's the Anderson-Darling test. Now, how would you pick one of these tests? Well, they're always going to have their advantages and disadvantages. And Kolmogorov-Smirnov is definitely the most widely used, I guess because it's a natural notion of distance between functions. You just look, for each point, at how far apart the two functions are, and you take the farthest they get anywhere. Now, Cramer-von Mises involves the L2 distance, so if you're not used to Hilbert spaces or notions of Euclidean spaces, it's at least a little more complicated. And then Anderson-Darling is definitely even more complicated. Now, each of these tests is going to be more powerful against different alternatives. So unless you can really guess which alternative you're expecting to see-- which you probably can't, because, again, you're typically in a case where you want to declare H0 to be the correct one-- then it's really a matter of tossing a coin. Maybe you can run all three of them and just sleep better at night, because all three of them have failed to reject, for example. All right? So as I mentioned, one of the maybe primary goals of testing goodness of fit is to be able to check whether we can apply Student's test, and whether the Student distribution is actually a valid distribution. And for that, we need to have normally distributed data. Now, as I said several times, normally distributed is not a specific distribution. It's a family of distributions that's indexed by means and variances. And the way I would want to test if a distribution is normally distributed is, well, I would just look at the most natural normal or Gaussian distribution that my data could follow. That means the Gaussian distribution that has the same mean as my data and the same empirical variance as my data, right? And so I'm going to be given some points x1 to xn, and I'm going to be asking, are those Gaussian? That is equivalent to asking, are they N mu sigma squared for some mu and sigma squared? And of course, the natural choice is to take mu to be equal to mu hat, which is xn bar, and sigma squared to be sigma squared hat, which is what we wrote Sn: 1/n times the sum from i equals 1 to n of xi minus xn bar, squared. OK? So this is definitely the natural thing you would want to test. And maybe you could actually just close your eyes and stuff that into a Kolmogorov-Smirnov test. OK?
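(Our sketch of what "closing your eyes and stuffing it into a KS test" would look like-- a naive procedure, not a recommendation; as the next part of the lecture explains, the standard KS quantiles are not valid once mu and sigma are estimated from the same data, so the p-value below is too conservative.)

```python
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(3)
x = rng.normal(2.0, 1.5, size=200)  # data that really is Gaussian

mu_hat = x.mean()
sigma_hat = x.std()  # 1/n version, matching S_n in the lecture
stat, naive_pvalue = kstest(x, lambda t: norm.cdf(t, loc=mu_hat, scale=sigma_hat))
print(stat, naive_pvalue)  # naive p-value is systematically too large under H0
```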
So here, there are a few things that don't work. The first one is that Donsker's theorem does not work anymore, right? Donsker's theorem was the one that told us that, properly normalized, this thing would converge to the supremum of a Brownian bridge-- and that is no longer true. So that's one problem. But there's actually an even bigger problem. This statistic, as we will check in a second, is still pivotal: it does not have a distribution that depends on the unknown parameters, which is sort of nice, at least under the null. However, the distribution is not the same as the one where mu and sigma are fixed in advance. The fact that they come from random variables, that they are estimated, is actually distorting the distribution itself. And in particular, the quantiles are going to be distorted, and we hinted at that last time. So one other thing I need to tell you-- oh, yeah, that's where there's a word missing. So we compute the quantiles for this test statistic. And so what I need to promise to you is that these quantiles do not depend on any unknown parameter, right? I mean, it's not clear, right? So I want to test whether my data has some Gaussian distribution. So under the null, all I know is that my xi's are Gaussian with some mean mu and some variance sigma squared, which I don't know. So it could be the case that when I try to understand the distribution of this quantity under the null, it depends on mu and sigma, which I don't know. So we need to check that this is not the case. And what's actually our redemption here is going to be the supremum. The supremum is basically going to allow us to, say, sup out mu and sigma squared. So let's check that, right? So what I'm interested in is this quantity: the supremum over t in R of the difference between Fn of t and what I write as phi mu hat sigma hat squared of t. So phi mu hat sigma hat squared is the CDF of some Gaussian with mean mu hat and variance sigma hat squared. And so in particular, this thing here, phi mu hat sigma hat squared of t, is the probability that some x is less than t, where x follows some N mu hat sigma hat squared. So what that means, by just the translation and scaling trick that we typically do for Gaussians to turn them into some standard Gaussian, is that there exists some z, which is standard Gaussian this time, so mean 0 and variance 1, such that x is equal to sigma hat z plus mu hat. Agreed? That's basically saying that x has some Gaussian distribution with mean mu and variance sigma squared. And I'm not going to say the hats every single time, OK? So OK, so that's what it means. So in particular, maybe I shouldn't use x here, because x is going to be my actual data. So let me write y. OK? So now, what is this guy here? This implies that phi mu hat sigma hat squared of t is equal to the probability that sigma hat z plus mu hat is less than t, which is equal to the probability that z is less than t minus mu hat divided by sigma hat, right? But now, since z is a standard normal, this is really just the cumulative distribution function of a standard Gaussian, but evaluated at a point which is not t, but t minus mu hat divided by sigma hat. All right?
So in particular, what I get from this-- well, maybe I'll remove that, it's going to be annoying-- I know that phi mu hat sigma hat squared of t is simply phi of, say, 0, 1. And that's just the notation. Usually we don't put those, but here it's more convenient. So it's phi 0, 1 of t minus mu hat divided by sigma hat. OK? That's just something you can quickly check. There's this nice way of writing the cumulative distribution function for any mean and any variance in terms of the cumulative distribution function with mean 0 and variance 1. All right? Not too complicated. All right. So what I'm going to say is that, OK, I have this sup here. So what I can write is that this thing here is equal to the sup over t in R of 1/n-- let me write what Fn is-- sum from i equal 1 to n of the indicator that xi is less than t, minus phi 0, 1 of t minus mu hat divided by sigma hat. OK? I actually want to make a change of variable, so this thing I'm going to call u. OK? And so I'm going to make my life easier, and I'm going to make it appear here. And so I'm just going to replace this by the indicator that xi minus mu hat divided by sigma hat is less than t minus mu hat divided by sigma hat, which is sort of useless at this point. I'm just making my formula more complicated. But now I see something here that shows up, and I will call it u, and this is another u. OK? So now what it means is that, taking the sup over t, when t ranges from negative infinity to plus infinity, the new variable u also ranges from negative infinity to plus infinity, right? So this sup in t I can actually write as the sup in u of the indicator that xi minus mu hat divided by sigma hat is less than u, minus phi 0, 1 of u. Now, let's pause for one second. Let's see where we're going. What we're trying to show is that this thing does not depend on the unknown parameters, say, mu and sigma, which are the mean and the variance of x under the null. To do that, we basically need to make only quantities that are sort of invariant under these values. So I tried to make this thing invariant under anything, and it's just really something that depends on nothing. It's the CDF. It doesn't depend on sigma hat and mu hat anymore. But sigma hat and mu hat will depend on mu and sigma, right? I mean, they're actually good estimators of those guys, so they should be pretty close to them. And so I need to make sure that I'm not actually doing anything wrong here. So the key thing here is going to be to observe that 1/n sum from i equal 1 to n of the indicator that xi minus mu hat divided by sigma hat is less than u, which is the first term that I have in this absolute value, well, this is what-- well, this is equal to 1/n sum from i equal 1 to n of the indicator that-- well, now under the null, which is that x follows N mu sigma squared, for some mu and sigma squared that are unknown. But they are here. They exist. I just don't know what they are. Then xi minus mu hat divided by sigma hat can be written as sigma zi plus mu minus mu hat, divided by sigma hat, where zi is equal to xi minus mu divided by sigma, right? That's just the same trick that I wrote here. OK? Everybody agree? So I just standardized: zi is xi minus mu divided by sigma. All right? Just a standardization. So now once I write this, I can actually divide everybody by sigma. Right? So I just divided on top here and in the bottom here. So now what I need to check is that the distribution of this guy does not depend on mu or sigma.
That's what I claim. What is the distribution of this indicator? It's a Bernoulli, right? And so if I want to understand its distribution, all I need to do is to compute its expectation, which is just the probability that this thing happens. But the probability that this thing happens is actually not depending on mu and sigma. And the reason is that mu hat is what? Well, it's xn bar. So mu hat, which under the null follows N of mu, sigma squared over n, right? That's the property of the average. So when I do mu hat minus mu divided by sigma, this thing is what distribution? It's still a normal. It's a linear transformation of a normal. What are the parameters? AUDIENCE: 0, 1/n. PHILIPPE RIGOLLET: Yeah, 0, 1/n. But this does not depend on mu or sigma, right? Now, I need to check that this guy does not depend on mu or sigma. What is the distribution of sigma hat over sigma? AUDIENCE: It's a chi-square, right? PHILIPPE RIGOLLET: Yeah, it is a chi-square. So this is actually-- sorry, sigma hat squared divided by sigma squared is a chi-square with n minus 1 degrees of freedom. Does not depend on mu or sigma. AUDIENCE: [INAUDIBLE] AUDIENCE: [INAUDIBLE] AUDIENCE: Or sigma hat squared over sigma squared? PHILIPPE RIGOLLET: Yeah, thank you. So this is actually divided by it. So maybe this guy. Let's write it like that. This is the proper way of writing it. Thank you. Right? So now I have those two things. Neither of them depends on mu or sigma. I have these two things. There's just one more thing to check. What is it? AUDIENCE: That they're independent? PHILIPPE RIGOLLET: That they're independent, right? Because the dependence on mu and sigma could be hidden in the covariance. It could be the case that the marginal distribution of mu hat does not depend on mu or sigma, and the marginal distribution of sigma hat does not depend on mu or sigma, but their correlation could depend on mu and sigma. But we also have that-- since mu hat is independent of sigma hat, it means that the joint distribution of mu hat minus mu divided by sigma and sigma hat divided by sigma does not depend on mu and sigma. OK? Agree? It's not in the individual ones, and it's not in the way they interact with each other. It's nowhere. AUDIENCE: [INAUDIBLE] independence be [INAUDIBLE] theorem? PHILIPPE RIGOLLET: Yeah, Cochran's theorem, right. So that's something we've been using over and over again. That's all under the null. If my data is not Gaussian, nothing actually holds. I just use the fact that under the null I'm Gaussian for some mean mu and variance sigma squared. But that's all I care about. When I'm designing a test, I only care about the distribution under the null, at least to control the type I error. Then to control the type II error, then I cross my fingers pretty hard. OK? So now this basically implies what's written on the board, that this test statistic does not depend on any unknown parameters. It's just something that's pivotal. In particular, I could go to the back of a book and check if there's a table for the quantiles of these things, and indeed there are. This is the table that you see. So actually, this is not even in a book. This is in Lilliefors' original paper, 1967, as you can tell from the typewriting.
And he actually probably was rolling some dice from his office back in the day and was checking that this was-- he simulated it, and this is how he computed those numbers. And here you also have some limiting distribution, which is not the sup of a Brownian motion over 0, 1 of-- sorry, of a Brownian bridge over 0, 1, which is the one that you would see for the Kolmogorov-Smirnov test, but it's something that's slightly different. And as I said, these numbers are actually typically much smaller than the numbers you would get, right? Remember, we got something that was about 0.5, I think, or maybe 0.41, for the Kolmogorov-Smirnov test at the same entry, which means that using the Kolmogorov-Lilliefors test it's going to be harder for you not to reject for the same data. It might be the case that in one case you reject, and in the other one you fail to reject. But the ordering is always that if you fail to reject with Kolmogorov-Lilliefors, you will fail to reject with Kolmogorov-Smirnov, right? The implication always goes one way. So that's why people tend to close their eyes and prefer Kolmogorov-Smirnov, because it just makes their life easier. OK? So this is called Kolmogorov-Lilliefors. I think there's actually an E here-- sorry, an I before the E. Doesn't matter too much. OK? Are there any questions? Yes? AUDIENCE: Is there like a place you can point to like [INAUDIBLE] PHILIPPE RIGOLLET: Yeah. AUDIENCE: [INAUDIBLE]. PHILIPPE RIGOLLET: So the fact that it's actually a different distribution comes from here-- so if I actually knew what mu and sigma were, I would do exactly the same thing. But rather than having this average with mu hat and sigma hat, I would just have the average with mu and sigma. OK? So what it means is that the key thing is that what I would compare is the 1/n sum of some Bernoullis with some parameter. And the parameter here would be the probability that xi minus mu over sigma is less than u, which is just the probability that-- sorry, it's a Bernoulli with probability F of t. Well, let me write what it is, right? So that's minus phi 0, 1 of t. OK? So that's for the K-S test, and then I sup over t, right? That's what I would have had, because this is actually exactly the right thing. Here I would remove the true mean. I would divide by the true standard deviation. So that would actually end up being a standard Gaussian, and that's why I'm allowed to use phi 0, 1 here. Agreed? And these are Bernoullis because they're just indicators. What happens in the Kolmogorov-Lilliefors test? Well, here the Bernoulli-- the only thing that's going to change is this guy, right? I still have a Bernoulli. It's just that the parameter of the Bernoulli is weird. The parameter of the Bernoulli becomes the probability that some N(0, 1) plus some N(0, 1/n), right, divided by some square root of chi-squared n minus 1 divided by n is less than t. And those things are independent, but those guys are not necessarily independent, right? And so why is this probability changing? Well, because this denominator is actually fluctuating a lot. So that actually makes this probability different. And so that's basically where it comes from, right? So you could probably convince yourself very quickly that this only makes those guys closer. And why does it make those guys closer? No, sorry. It makes those guys farther, right? And it makes those guys farther for a very clear reason, which is that the expectation of this Bernoulli is exactly that guy.
Here I think it's going to be true as well that the expectation of this Bernoulli is going to be that guy, but the fluctuations are going to be much bigger than just those of the Bernoulli. Because the first thing I do is I have a random parameter for my Bernoulli, and then I flip the Bernoulli. So the fluctuations are going to be bigger than for a Bernoulli. And so when I take the sup, I'm going to have to [INAUDIBLE] them. So it makes things farther apart, which makes it more likely for you to reject. Yeah? AUDIENCE: You also said that if you compare the same-- if you compare the table and you set at the same level, the Lilliefors is like 0.2, and for the Smirnov is at 0.4. PHILIPPE RIGOLLET: Yeah. AUDIENCE: OK. So it means that Lilliefors is harder not to reject? PHILIPPE RIGOLLET: It means that Lilliefors is harder not to reject, yes, because we reject when we're larger than the number. So the number being smaller, with the same data we might be, right? So basically, it looks like this. What we run-- so here we have the distribution for the-- so let's say this is the density for K-S. And then we have the density for Kolmogorov-Lilliefors, K-L. OK? And what the density of K-L looks like, it looks like this, right? And so if I want to squeeze in alpha here, I'm going to have to squeeze in-- and if I squeeze in alpha here, then this is the quantile of order 1 minus alp-- well, let's say alpha of the K-L. And this is the quantile alpha of K-S. So now you give me data, and what I do with it, I check whether they're larger than this number. So if I apply K-S, I check whether I'm larger or smaller than this thing. But if I apply Kolmogorov-Lilliefors, I check whether I'm larger or smaller than this thing. So over this entire range of values for my test statistic-- because it is the same test statistic, I just plugged in mu hat and sigma hat-- for this entire range, the two tests have different outcomes. And this is a big range in practice, right? I mean, it's between-- I mean, it's pretty much at scale here. OK? Any other-- yeah? AUDIENCE: [INAUDIBLE] when n goes to infinity, the two tests become the same now, right? PHILIPPE RIGOLLET: Hmmm. AUDIENCE: Looking at that formula-- PHILIPPE RIGOLLET: Yeah, they should become the same very far out. Let me see, though, because-- right. So here we have 8-- so here we have, say, for 0.5, we get 0.886. And for-- oh, I don't have it. Yeah, actually, sorry. So you're right. You're totally right. These are the Brownian bridge values. Because in the limit by, say, Slutsky-- sorry, I'm lost. Yeah, these are the values that you get for the Brownian bridge. Because in the limit, by Slutsky, this thing is going to have no fluctuation, and this thing is going to have no fluctuation. So they're just going to be pinned down, and it's going to look as if I did not replace anything. Because in the limit-- the mu hat and sigma hat converge much faster to mu and sigma than the distribution itself, right? So those are actually going to be negligible. You're right. Actually, I didn't even have-- these are actually the numbers I showed you for the bridge, the Brownian bridge, last time, because I didn't have it for the Kolmogorov-Smirnov one. OK? So those are numerical ways of checking things, right? I give you data. You just crank the Kolmogorov-Smirnov test. Usually you press F5 in MATLAB.
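Since the lecture notes that Lilliefors built his table by simulation, here is a hedged sketch of how one might reproduce it in Python; the helper name, sample size, and seed are invented for illustration. Pivotality is what makes this legitimate: under the null the statistic has the same law for every mu and sigma, so simulating standard Gaussians suffices.

import numpy as np
from scipy import stats

def lilliefors_stat(x):
    # sup over t of |F_n(t) - Phi_{mu hat, sigma hat squared}(t)|,
    # evaluated at the jump points of F_n
    n = len(x)
    z = np.sort((x - x.mean()) / x.std())  # standardize with the plug-ins
    cdf = stats.norm.cdf(z)
    grid = np.arange(1, n + 1) / n
    # F_n jumps from (i-1)/n to i/n at the i-th order statistic
    return max(np.max(grid - cdf), np.max(cdf - (grid - 1.0 / n)))

rng = np.random.default_rng(0)
n, n_sim = 50, 20000
sims = np.array([lilliefors_stat(rng.standard_normal(n)) for _ in range(n_sim)])
print(np.quantile(sims, 0.95))  # approximates the table's 5% entry for this n

In practice you would likely not roll this yourself: statsmodels, for instance, ships statsmodels.stats.diagnostic.lilliefors, which returns the statistic and a p-value directly.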
But let's say you actually compute this entire thing, and there's a number that comes out, and you decide whether it's large enough or small enough. Of course, statistical software is going to make your life even simpler by spitting out a p-value, because, I mean, if you can compute quantiles, you can also compute p-values. And so your life is just fairly easy. You just have red is bad, green is good, and then you can go. The problem is that those are numbers you want to rely on. But let's say you actually reject. Let's say you reject. Your p-value is actually just slightly below 5%. So you can say, well, maybe I'm just going to change my p-value-- my threshold-- to 1%, but you might want to see what's happening. And for that you need a visual diagnostic. Like, how do I check if something departs from being normal, for example? How do I check if a distribution-- why is a distribution not a uniform distribution? Why is a distribution not an exponential distribution? There's many, many ways, right? If I have an exponential distribution and half of my values are negative, for example, well, there's like pretty obvious reasons why it should not be exponential. But it could be the case that it's just the tails are a little heavier, or there's more concentration at some point. Maybe it has two modes. There's things like this. But the real thing-- we don't believe that the Gaussian is so important because it looks like this close to 0. What we like about the Gaussian is that the tails here decay at this rate-- e to the minus x squared over 2-- that we described in maybe the first lecture. And in particular, if there were like kinks around here, it wouldn't matter too much. This is not what creates issues for the Gaussian. And so what we want is to have a visual diagnostic that tells us if the tails of my distribution are comparable to the tails of a Gaussian one, for example. And those are what's called quantile-quantile plots, or QQ plots. And the basic QQ plots we're going to be using are the ones that are called normal QQ plots, that are comparing your data to a Gaussian distribution, or a normal distribution. But in general, you could be comparing your data to any distribution you want. And the way you do this is by comparing the quantiles of your data, the empirical quantiles, to the quantiles of the actual distribution you're trying to compare yourself to. So this, in a way, is a visual way of performing these goodness-of-fit tests. And what's nice about visual is that there's room for debate. You can see something that somebody else cannot see, and you can always-- because you want to say that things are Gaussian. And we'll see some examples where you can actually say it if you are good at debate, but it's actually going to be clearly not true. All right. So this is a quick and easy check. That's something I do all the time. You give me data, I'm just going to run this. It's one of the first things I do, so I can check if I can start entering the Gaussian world without compromising myself too much. And the idea is to say, well, if F is close to-- if F-- if my data comes from an F, and if I know that Fn is close to F, then rather than computing some norm, some number that tells me how far they are, summarizing how far they are, I could actually plot the two functions and see if they're far apart. So let's think for one second what this kind of a plot would look like. Well, I would go between 0 and 1. That's where everything would happen.
Let's say my distribution is the Gaussian distribution. So this is the CDF of N(0, 1). And now I have this guy that shows up, and remember we had this piecewise constant thing. Well, OK, let's say we get something like this. We get a piecewise constant distribution for Fn, right? Just from this, and even despite my bad skills at drawing, it's clear that it's going to be hard for you to distinguish those two things, even for a fairly large number of points. Because the problem is going to happen here, and those guys look pretty much the same everywhere you are here. You're going to see differences maybe in the middle, but we don't care too much about those differences. And so what's going to happen is that you're going to want to compare those two things. And this is basically-- you have the information you want, but visually it just doesn't render very well because you're not scaling things properly. And the way we actually do it is by flipping things around. And rather than comparing the plot of F to the plot of Fn, we compare the plot of Fn inverse to the plot of F inverse. Now, if F goes from the real line to the interval 0, 1, F inverse goes from 0, 1 to the whole real line. So what's going to happen is that I'm going to compare things on some domain, which is the entire real line. And then what values should I be looking at those things at? Well, technically for F, if F is continuous I could look at F inverse for any value that I please, right? So I have F. And if I want to look at F inverse, I pick a point here and I look at the value that it gives me, and that's F inverse of, say, u, right, if this is u. And I could pick any value I want; I'm going to be able to find it. The problem is that when I start to have this piecewise constant thing, I need to decide what value I assign for anything that's in between two jumps, right? And so I can choose whatever I want, but in practice it's just going to be things that I myself decide. Maybe I can decide that this is the value. Maybe I can decide that the value is here. But for all these guys, I'm going to pretty much decide always the same value, right? If I'm in between-- for this value u, the jump is here. So for this value, I'm going to be able to decide whether I want to go above or below, but it's always this value that's going to come out. So rather than picking values that are in between, I might as well just pick only values for which this is the value that it's going to get. And those values are exactly 1/n, 2/n, 3/n, 4/n, all the way to n/n. That's exactly where the flat parts are. We know we jump by 1/n every time. And so that's exactly the recipe. It says look at those values, 1/n, 2/n, 3/n until, say, n minus 1 over n. And for those values, compute the inverse of both the empirical CDF and the true CDF. Now, for the empirical CDF, it's actually easy. I just told you this is basically where the points-- where the jumps occur. And the jumps occur where? Well, exactly at my observations. Now, remember I need to sort those observations to talk about them. So the one that occurs at the i-th jump is the i-th smallest observation, which we denoted by X sub (i). Remember? We had this formula where we said, well, we have x1, xn. These are my data. And what I'm going to sort them into is x sub (1), which is less than or equal to x sub (2), all the way up to x sub (n). OK? So we just ordered them from smallest to largest. And now that we've done that, we just put this parenthesis notation.
So in particular, Fn inverse of i/n is the location where the i-th jump occurs, which is the i-th smallest observation. OK? So for this guy, these values, the y-values are actually fairly easy. I know it's basically my ordered observations. The x-values are-- well, that depends on the function F I'm trying to test. If it's the Gaussian, it's just the quantile of order 1 minus 1/n, right? It's this Q 1 minus 1/n here that I need to compute. It's the inverse of the cumulative distribution function, which, given the formula for F, you can actually compute or maybe estimate fairly well. But it's something that you can find in tables. Those are basically quantiles. Inverses of CDFs are quantiles, right? And so those are basically the things we're interested in. That's why it's called quantile-quantile. Those are sometimes referred to as theoretical quantiles, the ones from the distribution we're trying to test, and empirical quantiles, the ones that correspond to the empirical CDF. And so I'm plotting a plot where the x-axis is quantile. The y-axis is quantile. And so I call this plot a quantile-quantile plot, or QQ plot, because, well, just say 10 times quantile-quantile, and then you'll see why. Yeah? AUDIENCE: [INAUDIBLE] have to have the [INAUDIBLE]? PHILIPPE RIGOLLET: Well, that's just-- we're back to the-- we're back to the goodness-of-fit test, right? So you don't do it yourself. That's the simple answer. I'm just telling you how those plots, spit out from a software package, are going to look. Now, depending on the software, there's a different thing that's happening. Some software is actually plotting F with the right-- let's say you want to do normal, as you asked. So some software is just going to use F with mu hat and sigma hat, and that's fine. Some software is actually not going to do this. It's just going to use a standard Gaussian. But then it's going to actually have a different reference point. So what do we want to see here? What should happen if all these points-- if all my points actually come from F, from a distribution that has CDF F? What should happen? What should I see? Well, since Fn should be close to F, Fn inverse should be close to F inverse, which means that this point should be close to that point. This point should be close to that point. So ideally, if I actually pick the right F, I should see a plot that looks like this, something where all my points are very close to the line y is equal to x, right? And I'm going to have some fluctuations, but something very close to this. Now, that's if F is exactly the right one. If F is not exactly the right one-- in particular, in the case of a Gaussian one, if I actually plotted here the quantiles-- so if I plotted phi 0, 1 of t, right? So let's say those are the ones I actually plot, but I really don't know what-- mu hat is not 0 and sigma hat is not 1. And so this is not the one I should be getting. Since we actually know that phi of mu hat sigma hat squared of t is equal to phi 0, 1 of t minus mu hat divided by sigma hat, there's just this change of axis, which is actually very simple. This change of axis is just a simple translation and scaling, which means that this line here is going to be transformed into another line with a different slope and a different intercept. And so some software will actually decide to go with this curve and just show you what the reference curve should be, rather than actually putting everything back onto the 45-degree curve. AUDIENCE: So if you get any straight line?
PHILIPPE RIGOLLET: Any straight line, you're happy. I mean, depending on the software. Because if the software actually really rescaled this thing to have mu hat and sigma squared hat and you find a different straight line, this is bad news-- which is not going to happen, actually. It's impossible that it happens, because you actually-- well, it could. If it's crazy, it could. It shouldn't be very crazy. OK. So let's see what R does for us, for example. So here in R, R actually does this funny trick where-- so here I did not actually plot the lines. I should actually add the lines. So the command is like qqnorm of my sample, right? And that's really simple. I just stack all my data into some vector, say, x. And I say qqnorm of x, and it just spits this thing out. OK? Very simple. But I could actually add another command, which I can't remember. I think it's like qqline, and it's just going to add the line on top of it. But if you see, actually, what R does for us-- it's actually doing the translation and scaling on the axes themselves. So it actually changes the x and y-axes in such a way that when you look at your picture and you forget about what the meaning of the axes is, the relevant straight line is actually still the 45-degree line. Because it's actually done the change of units for you. So you don't even have to see the line. You know, in your mind, that the reference line is still 45 degrees, because that's the way the axes are made. But if I actually put my axes, right-- so here, for example, it goes from-- let's look at some-- well, OK, those are all square. Yeah, and that's probably because they actually have-- the samples are actually from a standard normal. So I did not make my life very easy to illustrate your question, but of course, I didn't know you were going to ask it. Next time, let's just prepare. Let's script more. We'll see another one in the next plot. But so here what you expect to see is that all the points should be on the 45-degree line, right? This should be the right one. And if you see, when I start having 10,000 samples, this is exactly what's happening. So this is as good as it gets. This is an N(0, 1) plotted against the theoretical quantiles of an N(0, 1). As good as it gets. And if you see, for the second one, which is 50-- a sample of size 50-- there is some fudge factor, right? I mean, those things-- it doesn't look like there's a straight line, right? It sort of appears that there are some weird things happening here at the lower tail. And the reason why this is happening is because we're trying to compare the tails, right? When I look at this picture, the only thing that goes wrong somehow is always at the tips, because those are sort of rare and extreme values, and they're sort of all over the place. And so things are never really super smooth and super clean. So this is what your best shot is. This is the best you will ever hope to get. So size 10, right-- so you have 10 points. Remember, we actually-- well, I didn't really tell you how to deal with the extreme cases. Because the problem is that F inverse of 1 for the true F is plus infinity. So you have to make some sort of weird boundary choice to decide what F inverse of 1 is, and it's something that's like somewhere. But you still want to put 10 dots, right? 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 dots. So if I have 10 observations, you will see 10 dots.
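For anyone not working in R, here is a rough Python analogue of qqnorm; this is a sketch with an arbitrary sample and seed, and the plotting positions i/(n+1) are one common convention for dodging the fact, mentioned above, that F inverse of 1 is plus infinity. They are not necessarily the exact convention R uses.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
x = rng.standard_normal(50)  # toy sample of size 50

n = len(x)
emp_q = np.sort(x)  # empirical quantiles: x_(1) <= ... <= x_(n)
theo_q = stats.norm.ppf(np.arange(1, n + 1) / (n + 1))  # theoretical quantiles

plt.scatter(theo_q, emp_q, s=10)
lims = [theo_q[0], theo_q[-1]]
plt.plot(lims, lims)  # the 45-degree reference line
plt.xlabel('theoretical quantiles')
plt.ylabel('empirical quantiles')
plt.show()

scipy wraps up the same construction as stats.probplot(x, dist='norm', plot=plt), which also adds a fitted reference line.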
If I have 50 observations, you will see 50 dots, right, because I have-- there are 1/n, 2/n, 3/n all the way to n/n. I didn't tell you the last one. OK. So this is when things go well, and this is when things should not go well. OK? So here, actually, the distribution is a Student's t with 15 degrees of freedom, which should depart somewhat from a Gaussian distribution. The tails should be heavier. And what you can see is basically the following: for 10 you actually see something that's crazy, right, if I do 10 observations. But if I do 50 observations, honestly, it's kind of hard to say that it's different from the standard normal. So you could still be happy with this for 100. And then this is what's happening for 10,000. And even here it's not the beautiful straight line, but it feels like you would still be tempted to conclude that it's a beautiful straight line. So let's try to guess. So basically, for each of those sides there's two phenomena. Either it goes like this or it goes like this, and then it goes like this or it goes like this. The left side corresponds to the left tail, all the smallest values. And the right side corresponds to the large values. OK? And so basically you can actually think of some sort of a table that tells you what your QQ plot looks like. And so let's say it looks-- so we have our reference 45-degree line. So let's say this is the QQ plot. That could be one thing. This could be the QQ plot where I have another thing. Then I can do this guy, and then I do this guy. So this is like this. OK? So those are the four cases. OK? And here what's changing is the right tail, and here what's changing is the-- and when I go from here to here, what changes is the left tail. Is that true? No, sorry. What changes here is the right tail, right? It's this part that changes from top to bottom. So here it's something about the right tail, and here that's something about the left tail. Everybody understands what I mean when I talk about tails? OK. And so here it's just going to be a question of whether the tails are heavier or lighter than the Gaussian. Everybody understand what I mean when I say heavy tails and light tails? OK. So right, so heavy tails just means that basically here the tails of this guy are heavier than the tails of this guy. So it means that if I draw them, they're going to be above. Actually, I'm going to keep this picture because it's going to be very useful for me. When I plot the quantiles at the same-- so let's look at the right tail, for example. Right here my picture is for right tails. When I look at the quantiles of my theoretical distribution-- so here on the bottom axis we have the theoretical quantiles, and those are the empirical quantiles. If I look to the right here, are the theoretical quantiles larger or smaller than the empirical quantiles? Let me phrase it the other way-- are the empirical quantiles larger or smaller than the theoretical quantiles? AUDIENCE: This is a graph of quantiles, right? So if it's [INAUDIBLE] it should be smaller. PHILIPPE RIGOLLET: It should be smaller, right? On this line, they are equal. So if I see the empirical quantile showing up here, it means that here the empirical quantile is less than the theoretical quantile. Agree? So that means that if I look at this thing-- and that's for the same values, right? So the quantiles are computed for the same values i/n.
So it means that the empirical quantiles should be looking-- so that should be the empirical quantile, and that should be the theoretical quantile. Agreed? Those are the smaller values for the same alpha. So that implies that the tails-- the right tail, is it heavy or lighter-- heavier or lighter than the Gaussian? AUDIENCE: Lighter. PHILIPPE RIGOLLET: Lighter, right? Because those are the tails of the Gaussian. Those are my theoretical quantiles. That means that this is the tail of my empirical distribution. So they are actually lighter. OK? So here, if I look at this thing, this means that the right tail is actually light. And by light, I mean lighter than Gaussian. Heavy, I mean heavier than Gaussian. OK? OK, now we can probably do the entire thing. Well, if this is light, this is going to be heavy, right? That's when I'm above the curve. Exercise-- is this light or is this heavy, the first column? And it's OK. It should take you at least 30 seconds. AUDIENCE: [INAUDIBLE] different column? PHILIPPE RIGOLLET: Yeah, this column, right? So this is something that pertains-- this entire column is going to tell me whether the fact that this guy is above means that I have lighter or heavier left tails. AUDIENCE: Well, on the left, it's heavier. PHILIPPE RIGOLLET: On the left, it's heavier. OK. I don't know. Actually, I need to draw a picture. You guys are probably faster than I am. AUDIENCE: [INTERPOSING VOICES]. PHILIPPE RIGOLLET: Actually, let me check how much randomness is-- who says it's lighter? Who says it's heavier? AUDIENCE: Yeah, but we're biased. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, OK. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: All right. So let's see if it's heavier. So we're on the left tail, and so we have one that looks like this, one that looks like that, right? So we know here that I'm looking at this part here. So it means that here my empirical quantile is larger than the theoretical quantile. OK? So are my tails heavier or lighter? They're lighter. That was a bad bias. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Right? It's below, so it's lighter. Because the problem is that larger for the negative ones means that it's smaller [INAUDIBLE], right? Yeah? AUDIENCE: Sorry, but what exactly are these [INAUDIBLE]? If this is the inverse-- if this is the inverse CDF, shouldn't everything-- well, if this is the inverse CDF, then you should only be inputting values between 0 and 1 in it. And-- PHILIPPE RIGOLLET: Oh, did I put the inverse CDF? AUDIENCE: Like on the previous slide, I think. PHILIPPE RIGOLLET: No, the inverse CDF, yeah, so I'm inputting-- AUDIENCE: Oh, you're [INAUDIBLE]. PHILIPPE RIGOLLET: Yeah, so it's a scatter plot, right? So each point is attached-- each point is attached to 1/n, 2/n, 3/n. Now, for each point I'm plotting, that's my x-value, which maps a number between 0 and 1 back onto the entire real line, and my y-value is the same. OK? So what it means is that those two numbers-- this lives on the entire real line, not on the interval. This lives on the entire real line, not in the interval. And so my QQ plots take values on the entire real line, entire real line, right? So you think of it as a parameterized curve, where the time steps are 1/n, 2/n, 3/n, and I'm just putting a dot every time I'm making one step. OK? OK, so what did we say? That was lighter, right? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: OK? One of my favorite exercises is, here's a bunch of densities. Here's a bunch of QQ plots.
Map the correct QQ plot to its own density. All right? And there won't be mingled lines that allow you to do that; you just have to figure out which goes with which, like at the back of cereal boxes. All right. Are there any questions? There's two things I'm trying to communicate here: if you see a QQ plot, now you should understand, one, how it was built, and two, whether it means that you have heavier tails or lighter tails. Now, let's look at this guy. What should we see? We should see heavy on the left and heavy on the right, right? We know that this should be the case. So this thing actually looks like this, and it sort of does, right? If I take this line going through here, I can see that this guy's tipping here, and this guy's dipping here. But honestly-- actually, I can't remember exactly, but t 15, if I plotted the density on top of the Gaussian, you can see a difference. But if I just gave it to you, it would be very hard for you to tell me if there's an actual difference between t 15 and Gaussian, right? Those things are actually very close. And so in particular, here we're really trying to recognize what the shape is, right? So t 15 compared to a standard Gaussian was different, but t 15 compared to a Gaussian with a slightly larger variance is not going to actually-- you're not going to see much of a difference. So in a way, such distributions are actually not too far from the Gaussian, and it's still pretty benign to conclude that this was actually a Gaussian distribution, because you can just use the variance as a little bit of a buffer. I'm not going to really get into how you would plug a t-distribution into a t-test, because it's kind of like Inception, right? You could pretend that your data actually is t-distributed and then build a t-test from it, but let's not say that. Maybe that was a bad example. But there's other heavy-tailed distributions, like the Cauchy distribution, which doesn't even have a mean-- it's not even integrable, because that's as heavy as the tails get. And this you can really tell. It's going to look like this. It's going to be like pfft. What does a uniform distribution look like? Like this? Is it going to look like a Gaussian one? So a uniform-- so this is my Gaussian. A uniform is basically going to look like this, once I take the right mean and the right variance, right? So the tails are definitely lighter. They're 0. That's as light as it gets. So light-light is going to look like this S shape. So an S-- a light-tailed distribution has this S shape. OK? What is the exponential going to look like? So the exponential is positively supported. It only has positive numbers. So there's no left tail. This is also as light as it gets. But the right tail, is it heavier or lighter than the Gaussian? AUDIENCE: Heavier. PHILIPPE RIGOLLET: It's heavier, right? It decays like e to the minus x rather than e to the minus x squared. So it's heavier. So it means that on the left it's going to be light, and on the right it's going to be heavy. So it's going to be U-shaped. OK? That one will be fine. All right. Any other question? Again, two messages: one more technical, and one you can sort of feel by looking at it. You can definitely conclude that this is OK enough to be Gaussian for your purposes. Yeah? AUDIENCE: So [INAUDIBLE] PHILIPPE RIGOLLET: I did not hear the "if" at the beginning of your sentence.
AUDIENCE: I would want to be lighter tail, right, because that'll be-- it's easier to reject? Is that correct? PHILIPPE RIGOLLET: So what is your purpose as a-- AUDIENCE: I want to-- I have some [INAUDIBLE] right? I want to be able to say I reject H0 [INAUDIBLE]. PHILIPPE RIGOLLET: Yes. AUDIENCE: So if you wanted to make it easier to reject H0, then-- PHILIPPE RIGOLLET: Yeah, in a way that's true, right? So once you've actually factored in the mean and the variance, the only thing that actually-- right. So if you have Gaussian tails or even lighter tails, then it's harder for you to explain deviations by randomness only, right? If you have a uniform distribution and you see something which is-- if you're uniform on 0, 1 plus some number and you see 25, you know this number is not going to be 0, right? So that's basically as good as it gets. And there's basically some smooth interpolation if you have lighter tails. Now, if you start having something that has heavy tails, then it's more likely that pure noise will generate large observations and therefore spurious discoveries. So yes, lighter tails is definitely the better-behaved noise. Let's put it this way: the lighter it is, the better behaved it is. Now, this is good for some purposes, but when you want to compute actual quantiles, like exact quantiles, then it is true in general that the quantiles of lighter-tailed distributions are going to be dominated by-- let's say on the right tails-- those of a heavier-tailed distribution. That is true. But that's not always the case. And in particular, there's going to be some sort of weird points where things are actually changing depending on what level you're actually looking at those things, maybe 5% or 10%, in which case things might be changing a little bit. But if you start going really towards the tail, if you start looking at levels alpha which are 1% or 0.1%, it is true that it's always-- if you can actually-- so if you see something that looks light-tailed, you definitely do not want to conclude that it's Gaussian. You want to actually change your modeling so that it makes your life even easier. And you actually factor in the fact that you can see that the noise is actually more benign than you would like it to be. OK? Stretching fingers, that's it? All right. OK. So I want to-- I mentioned at some point that we had this chi-square test that was showing up. And I do not know what I did-- let's just-- oh, yeah. So we have this chi-square test that we worked on last time, right? So the way I introduced the chi-square test is by saying, I am fascinated by this question. Let's check if it's correct, OK? Or something maybe slightly deeper-- let's check if juries in this country are representative of the racial distribution. But you could actually-- those numbers here come from a very specific thing. That was the uniform. That was our benchmark. Here's the uniform. And there was this guy, which was a benchmark, which was the actual benchmark that we need to have for this problem. And those things basically came out of my hat, right? Those are numbers that exist. But in practice, you actually make those numbers yourself. And the way you do it is by saying, well, if I have a binomial distribution and I want to test if my data comes from a binomial distribution-- you could ask this question, right? You have a bunch of data. I did not promise to you that this was the sum of independent Bernoullis and [INAUDIBLE].
And then you can actually check that it's a binomial indeed, and you have a binomial. If you think about where you've encountered binomials, it was mostly when you were drawing balls from urns, which you probably don't do that much in practice. OK? And so maybe one day you want to model things as a binomial, or maybe you want to model it as a Poisson, as a limiting binomial, right? People tell you photons arrive-- the number of photons hitting some surface actually follows a Poisson distribution, right? That's where they arise a lot, in imaging. So I have a colleague who's taking pictures of the sky overnight, and he's like following stars, and it's just moving around with the rotation of the Earth. And he has to do this for like eight hours because he needs to get enough photons for this picture to actually arise. And he knows they arrive like a Poisson process-- you know, chapter 7 of your probability class, I guess. And there's all these distributions outside the classroom that you probably want to check are actually correct. And so the first one you might want to check, for example, is a binomial. So I give you a distribution, a binomial distribution on, say, K trials, and you have some number p. And here, I don't know typically what p should be, but let's say I know it or estimate it from my data. And here, since we're only going to deal with asymptotics, just like it was the case for the Kolmogorov-Smirnov one, in the asymptotics we're going to be able to think of the estimated p as being the true p, OK, under the null at least. So therefore, for each outcome, I can actually tell you what the probability is that a binomial gives this outcome. For a given K and a given p, I can tell you exactly what a binomial should give you as the probability for the outcome. And that's what I actually use to replace the numbers 1/12, 1/12, 1/12, 1/12 or the numbers 0.72, 0.7, 0.12, 0.9. All these numbers I can actually compute using the probabilities of a binomial, right? So I know, for example, that the probability that a binomial np is equal to, say, K is n choose K, times p to the K, times 1 minus p to the n minus K. OK? I mean, so these are numbers. If you give me p and you give me n, I can compute those numbers for all K from 0 to n. And from this I can actually build a table. All right? So for each K-- so K is here, and from 0, 1, et cetera, all the way to n, I can compute the true probability, which is the probability that my binomial np is equal to 0, the probability that my binomial is equal to 1, et cetera, all the way to n. I can compute those numbers. Those are actually going to be exact numbers, right? I just plug in the formula that I had. And then I'm going to have some observed ones. So that's going to be p hat 0, and that's basically the proportion of 0's, right? So here you have to remember it's not a one-time experiment like you do in probability, where you say, I'm going to draw n balls from an urn, and I'm counting how many I have. This is statistics. I need to be able to do this experiment many times so I can actually, in the end, get an idea of what those proportions are. So you have not just one binomial, but you have n binomials. Well, maybe I should not use n twice. So that's why it's the K here, right? So I have a binomial [INAUDIBLE] K, p, and I just see n of those guys. And with these n of those guys, I can actually estimate those probabilities. And what I'm going to want to check is if those two rows of probabilities are actually close to each other.
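To make the recipe concrete, here is a hedged sketch in Python of the whole table-building exercise for a binomial; K, the true p, the sample size, and the seed are all invented for illustration.

import numpy as np
from scipy import stats

K = 5  # number of trials, assumed known
rng = np.random.default_rng(2)
x = rng.binomial(K, 0.3, size=500)  # n = 500 observations

n = len(x)
p_hat = x.mean() / K  # MLE of p for a binomial on K trials

support = np.arange(K + 1)  # the K+1 possible values 0, 1, ..., K
p_theo = stats.binom.pmf(support, K, p_hat)  # the "true probabilities" row
p_obs = np.bincount(x, minlength=K + 1) / n  # the "observed" row

# chi-square statistic: n times the sum of squared differences over p_j,
# asymptotically chi-square with (number of cells) - 1 - 1 degrees of
# freedom, the extra -1 accounting for the one estimated parameter p
T = n * np.sum((p_obs - p_theo) ** 2 / p_theo)
print(T, stats.chi2.sf(T, df=(K + 1) - 2))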
But I already know how to do this. All right? So here I'm going to test whether P is in some parametric family-- for example, binomial or not binomial. And testing-- if I know that it's a binomial [INAUDIBLE], then I basically just have to test if P is the right thing. OK? Oh, sorry, I'm actually lying to you here. OK. I don't want to test if it's binomial-- I want to test the parameter of the binomial here. OK? So I know-- no, sorry, [INAUDIBLE] sorry. OK. So I want to know if I'm in some family, the family of binomials, or not in the family of binomials. OK? Well, that's what I want to do. And so here H0 is basically equivalent to testing if the pj's are the pj's that come from the binomial. And the pj's here are the probabilities that I get. This is the probability that I get j successes. That's my pj. That's the j-th value here. OK? So this is the example, and we know how to do this. We construct p hat, which is the estimated proportion of successes from the observations. So here now I have n trials. This is the actual maximum likelihood estimator. This becomes a multinomial experiment, right? So it's kind of confusing. We have a multinomial experiment for a binomial distribution. The binomial here is just a recipe to create some test probabilities. That's all it is. The binomial here doesn't really matter. It's really there to create the test probabilities. And then I'm going to define this test statistic, which is known as the chi-square statistic, right? This was the chi-square test. We just looked at the sum of the squares of the differences. Inverting the covariance matrix, or using the Fisher information with the part that was not invertible removed, led us to actually use this particular value here, and then we had to multiply by n. OK? And that, we know, converges to what? A chi-square distribution. So I'm not going to go through this again. I'm just telling you you can use the chi-square that we've seen, where we just came up with the numbers we were testing. Those numbers that were in this row for the true probabilities, we came up with them out of thin air. And now I'm telling you you can actually come up with those guys from a binomial distribution or a Poisson distribution or whatever distribution you're happy with. Any question? So now I'm creating this thing, and I can apply the entire theory that I have for the chi-square and, in particular, that this thing converges to a chi-square. But if you see, there's something that's different. What is different? The degrees of freedom. And if you think about it, again, the meaning of degrees of freedom-- what do these words actually mean? It means, well, to which extent can I play around with those values? What are the possible values that I can get? If I'm not equal to this particular value I'm testing, how many directions can I be different from this guy in? And when we had a given set of values, we could be any other set of values, right? So here, I had this-- I'm going to represent-- this is the set of all probability distributions on K values, vectors of size K. So here, if I look at one point in this set, this is something that looks like p1 through pK such that they're non-negative, and the sum of p1 through pK is equal to 1. OK? So I have all those points here. OK? So this is basically the set that I had before. I was testing whether I was equal to this one guy, or if I was anything else. And there's many ways I can be anything else.
What matters, of course, is what's around this guy that I could actually confuse myself with. But there's many ways I can move around this guy. Agreed? Now I'm actually just testing something very specific. I'm saying, well, now the p's that I have have to come from this-- have to be constructed from this formula, this parametric family P of theta. And there's a fixed way for-- let's say this is theta, so I have a theta here. There's not that many ways this can actually give me a set of probabilities, right? I have to move to another theta to actually start being confused. And so here the number of degrees of freedom is basically, how can I move along this family? And so here, this is all the points, but there might be just a subset of the points that looks like this, just this curve, not the half of this thing. And those guys on this curve are the p thetas, and that's for all thetas, when theta runs across capital Theta. So in a way, this is just a much smaller dimensional thing. It's a much smaller object. Those are only the ones that I can create that are exactly of this very specific parametric form. And of course, not all are of this form. Not all probability PMFs are of this form. And so that is going to have an effect on what my-- sorry, on what my degrees of freedom are going to be. Because when this thing is very small-- that's happening when capital Theta is, say, a one-dimensional space-- then there's still many ways I can escape, right? I can be different from this guy in pretty much every other direction, except for those two directions, just when I move from here or when I move in this direction. But now if this thing becomes bigger-- your Theta is, say, two-dimensional-- then when I'm here it's becoming harder for me to not be that guy. If I want to move away from it, then I have to move away from the board. And so that means that the bigger the dimension of my Theta, the smaller the degrees of freedom that I have, OK, because moving out of this parametric family is actually very difficult for me. So if you think, for example, as an extreme case, the parametric family that I have is basically all PMFs, all of them, right? So that's a stupid parametric family. I'm indexed by the distribution itself, but it's still finite dimensional. Then here, I have basically no degrees of freedom. There's no way I can actually not be that guy, because this is everything I have. And so you don't have to really understand how the computation comes into the number of dimensions and what I mean by the dimension of this curved space. But really, what's important is that as the dimension of Theta becomes bigger, I have fewer degrees of freedom to be away from this family. This family becomes big, and it's very hard for me to violate this. So it's actually shrinking the number of degrees of freedom of my chi-square. And that's all you need to understand. When d increases, the number of degrees of freedom decreases. And I'd like you to have an idea of why this is somewhat true, and this is basically the picture you should have in mind. OK. So now once I have done this, I can just construct. So here I need to check. So what is d in the case of the binomial? AUDIENCE: 1. PHILIPPE RIGOLLET: 1, right? It's just a one-dimensional thing. And for most of the examples we're going to have, it's going to be one-dimensional. So we have this weird thing. We're going to have K minus 2 degrees of freedom. So now I have this thing, and I have this asymptotic distribution.
And then I can just basically use a test that uses the fact that the asymptotic distribution is this. So I compute my quantiles out of this. Again, I made the same mistake. This should be q alpha, and this should be q alpha. So that's just saying the tail probability is equal to alpha when I'm to the right of q alpha. And so those are the tail probabilities of the appropriate chi-square with the appropriate number of degrees of freedom. And so I can compute p-values, and I can do whatever I want. OK? So then I just [INAUDIBLE] my testing machinery. OK? So now I know how to test if I'm a binomial distribution or not. Again here, testing if I'm a binomial distribution is not a simple goodness of fit. It's a composite one, where there's many ways I can be a binomial distribution, because there's as many as there are thetas. And so I'm actually plugging in the theta hat, which is estimated from the data, right? And here, since everything's happening in the asymptotics, I'm not claiming that Tn has a pivotal distribution for finite n. That's actually not true. It's going to depend like crazy on what the actual distribution is. But asymptotically, I have a chi-square, which obviously does not depend on anything [INAUDIBLE]. OK? Yeah? AUDIENCE: So in general, for the binomial [INAUDIBLE] trials. But in the general case, the number of-- the size of our PMF is the number of [INAUDIBLE]. PHILIPPE RIGOLLET: Yeah. AUDIENCE: So let's say that I was also uncertain about what K was, so that I don't know how big my [INAUDIBLE] is. [INAUDIBLE] PHILIPPE RIGOLLET: That is correct. And thank you for this beautiful segue into my next slide. So we can actually deal with the case not only where it's infinite, which would be the case of a Poisson. I mean, nobody believes I'm going to get an infinite number of photons in a finite amount of time. But we just don't want to have to say, there's got to be a-- this is the largest possible number. We don't want to have to do that. Because if you start doing this and the probabilities become close to 0, things become degenerate, and it's an issue. So what we do is we bin. We just bin stuff. OK? And so maybe if I have a binomial distribution with, say, 200,000 possible values, then that's actually maybe not the level of precision at which I want to look at this. Maybe I want to bin. Maybe I want to say, let's just think of all things that are between 0 and 100 as the same thing, between 100 and 200 the same thing, et cetera. And so in fact, I'm actually going to bin. I don't even have to think about things that are discrete. I can even think about continuous cases. And so if I want to test if I have a Gaussian distribution, for example, I can just approximate that by some, say, piecewise constant function. That just says that, well, if I have a Gaussian distribution like this, I'm going to bin it like this. And I'm going to say, well, the probability that I'm less than this value is this. The probability that I'm between this and this value is this. The probability I'm between this and this value is this, and then this, and then this, right? And now I've turned-- I've discretized, effectively, my Gaussian into a PMF. The value here is p1. This is p2. This is p3. This is p4. This is p5 and p6, right? I have discretized my Gaussian into six possible values. That's just the probability that they fall into a certain bin. And we can do this-- if you don't know what K is, just stop at 10.
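Before the practical advice that follows, here is a hedged sketch of that binning recipe in code; the bin edges, sample, and seed are arbitrary choices. The idea: discretize the sample into K cells, compute each cell's probability under the fitted Gaussian as a difference of CDFs, and feed the resulting PMF to the same chi-square statistic.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(1.0, 2.0, size=1000)
mu_hat, sigma_hat = x.mean(), x.std()

cuts = np.array([-2.0, -0.5, 1.0, 2.5, 4.0])  # 5 cut points -> 6 bins
edges = np.concatenate(([-np.inf], cuts, [np.inf]))

# p_j(theta hat): probability of bin j under N(mu hat, sigma hat squared)
p_theo = np.diff(stats.norm.cdf(edges, loc=mu_hat, scale=sigma_hat))

# p hat j: observed proportion of observations falling into bin j
counts = np.bincount(np.searchsorted(cuts, x), minlength=len(cuts) + 1)
p_obs = counts / len(x)

# 6 cells, d = 2 estimated parameters (mu and sigma squared): 6 - 2 - 1 dof
T = len(x) * np.sum((p_obs - p_theo) ** 2 / p_theo)
print(T, stats.chi2.sf(T, df=len(p_theo) - 2 - 1))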
You look at your data quickly and you say, well, you know, I have so few of them that are-- like I see maybe one 8, one 11, and one 15. Well, everything that's between 8 and 20 I'm just going to put in one bin. Because what else are you going to do? I mean, you just don't have enough observations. And so what we do is we just bin everything. So here I'm going to actually be slightly abstract. Our bins are going to be intervals Aj. So here-- they don't even have to be intervals. I could go crazy and just call the bin this guy and this guy, right? That would make no sense, but I could do that. And of course, you can do whatever you want, but there's going to be some consequences for the conclusions that you can draw, right? All you're going to be able to say is that my distribution does not look like it could be binned in this way. That's all you're going to be able to say. So if you decide to just split into all the negative numbers and all the positive numbers, then it's going to be very hard for you to distinguish a Gaussian from a random variable that takes values of minus 1 and plus 1 only. You just need to be reasonable. OK? So now my pj's become the probability that my random variable falls into bin j. So that's pj of theta under the parametric distribution. For the true one, whether it's parametric or not, I have a pj. And then I have p hat j, which is the proportion of observations that fall in this bin. All right? So I have a bunch of observations. I count how many of them fall in this bin. I divide by n, and that tells me what my estimated probability for this bin is. And theta hat, well, it's the same as before. If I'm in a parametric family, I'm just estimating theta hat, maybe the maximum likelihood estimator, plugging it in, and estimating those pj's of theta hat. From this, I form my chi-square, and I have exactly the same thing as before. So the answer to your question is, yes, you bin. And it's the answer to even more questions. That's why you can actually use the chi-square test to test for normality. Now here it's going to be slightly weaker, because there's only an asymptotic theory, whereas Kolmogorov-Smirnov and Kolmogorov-Lilliefors actually work even for finite samples. For the chi-square test, it's only asymptotic. So you just pretend you actually know what the parameters are. You just stuff in theta hat-- a mu hat and a sigma squared hat. And you just cross your fingers that n is large enough for everything to have converged by the time you make your decision. OK? And then this is a copy/paste, with the same error actually as the previous slide, where you just build your test based on whether you exceed some quantile or not, and you can also compute some p-value. OK? AUDIENCE: The error? PHILIPPE RIGOLLET: I'm sorry? AUDIENCE: What's the error? PHILIPPE RIGOLLET: What is the error? AUDIENCE: You said [INAUDIBLE] copy/paste [INAUDIBLE]. PHILIPPE RIGOLLET: Oh, the error is that this should be q alpha, right? AUDIENCE: OK. PHILIPPE RIGOLLET: I've been calling this q alpha. I mean, that's my personal choice, because I only use q alpha. So I only use quantiles where alpha is to the right. That's what statisticians do-- probabilists would use the other notation. OK. And so some questions, right? So of course, in practice you're going to have some issues, which translate into questions like, well, how do you pick this guy, this K? So I gave you some sort of a-- I mean, the way we discussed, right?
You have 8 and 10 and 20, then it's ad hoc. And so depending on whether you want to stop K at 20 or if you want to bin those guys is really up to you. And there's going to be some considerations about the particular problem at hand. I mean, is it coarse-- too coarse for your problem to decide that the observations between 8 and 20 are the same? It's really up to you. Maybe that's actually making a huge difference in terms of what phenomenon you're looking at. The choice of the bins, right? So here there's actually some sort of rules, which are don't use only one bin and make sure there's actually-- don't use them too small so that there's at least one observation per bin, right? And it's basically the same kind of rules that you would have to build a histogram. If you were to build a histogram for your data, you still want to make sure that you bin in an appropriate fashion. OK? And there's a bunch of rule of thumbs. Every time you ask someone, they're going to have a different rule of thumb, so just make your own. And then there's the computation of pj of theta, which might be a bit complicated because, in this case, I would have to integrate the Gaussian between this number and this number. So for this case, I could just say, well, it's the difference of the CDF in that value and that value and then be happy with it. But you can imagine that you have some slightly more crazy distributions. You're going to have to somewhat compute some integrals that might be unpleasant for you to compute. OK? And in particular, I said the difference of the PDF between that value and that value of-- sorry, the CDF between that value and that value, it is true. But it's not like you actually have tables that compute the CDF at any value you like, right? You have to sort of-- well, there might be but at some degree, but you are going to have to use a computer typically to do that. OK? And so for example, you could do the Poisson. If I had time, if I had more than one minute, I would actually do it for you. But it's basically the same. The Poisson, you are going to have an infinite tail, and you just say, at some point I'm going to cut everything that's larger than some value. All right? So you can play around, right? I say, well, if you have extra knowledge about what you expect to see, maybe you can cut at a certain number and then just fold all the largest values from K minus 1 to infinity so that you actually have-- you have everything into one large bin. OK? That's the entire tail. And that's the way people do it in insurance companies, for example. They assume that the number of accidents you're going to have is a Poisson distribution. They have to fit it to you. They have to know-- or at least to your pool of insurance of injured people. So they just slice you into what your character-- relevant characteristics are, and then they want to estimate what the Poisson distribution is. And basically, they can do a chi-square test to check if it's indeed a Poisson distribution. All right. So that will be it for today. And so I'll be-- I'll have your homework--
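Here is a sketch of that tail-folding construction for the Poisson, assuming numpy and scipy; the accident-style counts are simulated, and the cutoff K is an arbitrary choice of the kind just discussed:

```python
import numpy as np
from scipy.stats import poisson, chi2

rng = np.random.default_rng(2)
n, lam_true = 5000, 1.7
x = rng.poisson(lam_true, size=n)              # e.g. accident counts per client

lam_hat = x.mean()                             # MLE of lambda
K = 6                                          # fold everything >= K into one tail bin
p_hat = np.bincount(np.minimum(x, K), minlength=K + 1) / n
p_model = poisson.pmf(np.arange(K), lam_hat)
p_model = np.append(p_model, 1.0 - p_model.sum())  # P(X >= K): the folded tail

Tn = n * np.sum((p_hat - p_model) ** 2 / p_model)
df = (K + 1) - 1 - 1                           # cells minus 1, minus 1 estimated parameter
print(Tn, chi2.sf(Tn, df))
```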
MIT_18650_Statistics_for_Applications_Fall_2016
4_Parametric_Inference_cont_and_Maximum_Likelihood_Estimation.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: --124. If I were to repeat this 1,000 times, so every one of those 1,000 times I collect 124 data points and then I'd do it again and do it again and again, then on average, the number I should get should be close to the true parameter that I'm looking for. The fluctuations that are due to the fact that I get different samples every time should somewhat vanish. And so what I want is to have a small bias, hopefully a 0 bias. If this thing is 0, then we say that the estimator is unbiased. So this is definitely a property that we are going to be looking for in an estimator, trying to find them to be unbiased. But we'll see that it's actually maybe not enough. So unbiasedness should not be something you lose your sleep over. Something that's slightly better is the risk, really the quadratic risk, which is expectation of-- so if I have an estimator, theta hat, I'm going to look at the expectation of theta hat n minus theta squared. And what we showed last time is that we can actually-- by inserting in there, adding and removing the expectation of theta hat, we actually get something where this thing can be decomposed as the square of the bias plus the variance, which is just the expectation of theta hat minus its expectation squared. That came from the fact that when I added and removed the expectation of theta hat in there, the cross-terms cancel. All right. So that was the bias squared, and this is the variance. And so for example, if the quadratic risk goes to 0, then that means that theta hat converges to theta in the L2 sense. And here we know that if we want this to go to 0, since it's the sum of two positive terms, we need to have both the bias that goes to 0 and the variance that goes to 0, so we need to control both of those things. And so there is usually an inherent trade-off between getting a small bias and getting a small variance. If you reduce one too much, then the other one is going to increase, or the opposite. That happens a lot, but not so much, actually, in this class. So let's just look at a couple of examples. So am I planning-- yeah. So examples. So if I take, for example, X1 through Xn that are iid Bernoulli. And I'm going to write it theta so that we keep the same notation. Then theta hat, what is the theta hat that we proposed many times? It's just X bar, Xn bar, the average of the Xi's. So what is the bias of this guy? Well, to know the bias, I just have to remove theta from the expectation. What is the expectation of Xn bar? Well, by linearity of the expectation, it's just the average of the expectations. But since all my Xi's are Bernoulli with the same theta, then each of these guys is actually equal to theta. So this thing is actually theta, which means that this isn't biased, right? Now, what is the variance of this guy? So if you forgot the properties of the variance for sums of independent random variables, now it's time to wake up. So we have the variance of something that looks like 1 over n, the sum from i equal 1 to n of Xi. So it's of the form variance of a constant times a random variable. So the first thing I'm going to do is pull out the constant.
But we know that the variance lives on the square scale, so when I pull out a constant outside of the variance, it comes out with a square. The variance of a times X is a-squared times the variance of X, so this is equal to 1 over n squared times the variance of the sum. So now we want to do what we always want to do. So we have the variance of the sum. We would like somehow to say that this is the sum of the variances. And in general, we are not allowed to say that, but we are because my Xi's are actually independent. So this is actually equal to 1 over n squared sum from i equal 1 to n of the variance of each of the Xi's. And that's by independence, so this is basic probability. And now, what is the variance of the Xi's where again they're all the same distribution, so the variance of Xi is the same as the variance of X1. And so each of those guys has variance what? What is the variance of a Bernoulli? We've said it once. It's theta times 1 minus theta. And so now I'm going to have the sum of n times a constant, so I get n times the constant divided by n squared, so one of the n's is going to cancel. And so the whole thing here is actually equal to theta, 1 minus theta divided by n. So if I'm interested in the quadratic risk-- and again, I should just say risk, because this is the only risk we're going to be actually looking at. Yeah. This parenthesis should really stop here. I really wanted to put quadratic in parentheses. So the risk of this guy is what? Well, it's the expectation of Xn bar minus theta squared. And we know it's the square of the bias plus the variance, so it's the square of the bias, which we know is 0, so it's 0 squared plus the variance, which is theta, 1 minus theta divided by n. So it's just theta, 1 minus theta divided by n. So this is just summarizing the performance of an estimator, which is a random variable. I mean, it's complicated. If I really wanted to describe it, I would just tell you the entire distribution of this random variable. But now what I'm doing is I'm saying, well, let's just take this random variable, remove theta from it, and see how small the fluctuations around theta-- the squared fluctuations around theta are in expectation. So that's what the quadratic risk is doing. And in a way, this decomposition, as the sum of the bias squared and the variance, is really telling you that-- it is really accounting for the bias, which is, well, even if I had an infinite amount of observations, is this thing doing the right thing? And the other thing is actually the variance, so for a finite number of observations, what are the fluctuations? All right. Then you can see that those things, bias and variance, are actually very different. So I don't have any colors here, so you're going to have to really follow the speed-- the order in which I draw those curves. All right. So let's find-- I'm going to give you three candidate estimators, so-- estimators for theta. So the first one is definitely Xn bar. That will be a good candidate estimator. The second one is going to be 0.5, because after all, why should I bother if it's actually going to be-- right? So for example, if I ask you to predict the score of some candidate in some election, then since you know it's going to be very close to 0.5, you might as well just throw 0.5 and you're not going to be very far from reality. And it's actually going to cost you 0 time and $0 to come up with that. So sometimes maybe just a good old guess is actually doing the job for you.
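A quick Monte Carlo sanity check of these two computations-- the risk of Xn bar and of the constant guess 0.5-- is easy to run. This is a minimal sketch assuming numpy, with arbitrary theta and n:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 0.3, 50, 200_000
samples = rng.binomial(1, theta, size=(reps, n))
xbar = samples.mean(axis=1)                    # Xn bar for each replication

print(xbar.mean() - theta)                     # bias: ~0, Xn bar is unbiased
print(((xbar - theta) ** 2).mean())            # Monte Carlo risk of Xn bar
print(theta * (1 - theta) / n)                 # matches theta(1 - theta)/n
print((0.5 - theta) ** 2)                      # risk of the constant 0.5: pure bias squared
```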
Of course, for presidential elections or something like this, it's not very helpful if your prediction is telling you this. But if it was something different, that would be a good way to generate something close to 1/2. For a coin, for example, if I give you a coin, you never know. Maybe it's slightly biased. But for a good guess, just looking at it, inspecting it-- unless there's something crazy happening with the structure of it-- you're going to guess that it's 0.5 without trying to collect information. And let's find another one, which is, well, you know, I have a lot of observations, but I'm recording couples kissing and I'm on a budget. I don't have time to travel all around the world and collect some people. So really, I'm just going to look at the first couple and go home. So my other estimator is just going to be X1. I just take the first observation, 0, 1, and that's it. So now I'm going-- I want to actually understand what the behavior of those guys is. All right. So we know-- and so we know that for this guy, the bias is 0 and the variance is equal to theta, 1 minus theta divided by n. What is the bias of this guy, 0.5? AUDIENCE: 0.5. AUDIENCE: 0.5 minus theta? PHILIPPE RIGOLLET: 0.5 minus theta, right. So the bias is 0.5 minus theta. What is the variance of this guy? What is the variance of 0.5? AUDIENCE: It's 0. PHILIPPE RIGOLLET: 0. Right. It's just a deterministic number, so there's no fluctuations for this guy. What is the bias? Well, X1 is actually-- just for simplicity, I can think of it as being X1 bar, the average of itself, so that wherever I saw an n for this guy, I can replace it by 1 and that will give me my formula. So the bias is still going to be 0. And the variance is going to be equal to theta, 1 minus theta. So now I have those three estimators. Well, if I compare X1 and Xn bar, then clearly I have 0 bias in both cases. That's good. And I have the variance that's actually n times smaller when I use my n observations than when I don't. So those two guys, on these two fronts, you can actually look at the two numbers and say, well, the first number is the same. The second number is better for the other guy, so I will definitely go for this guy compared to this guy. So this guy is gone. But not this guy. Well, if I look at the variance, it's 0. It's always beating the variance of this guy. And if I look at the bias, it's actually really not that bad. It's 0.5 minus theta. In particular, if theta is 0.5, then this guy is strictly better. And so you can actually now look at what the quadratic risk looks like. So here, what I'm going to do is I'm going to take my true theta-- so it's going to range between 0 and 1. And we know that those two things are functions of theta, so I can only understand them if I plot them as functions of theta. And so now I'm going to actually plot-- the y-axis is going to be the risk. So what is the risk of the estimator 0.5? This one is easy. Well, it's 0 plus the square of 0.5 minus theta. So we know that at theta equal to 0.5, it's actually going to be 0. And then it's going to be a square. So at 0, it's going to be 0.25. And at 1, it's going to be 0.25 as well. So it looks like this. Well, actually, sorry. Let me put the 0.5 where it should be. OK. So this here is the risk of 0.5. And we'll write it like this. So when theta is very close to 0.5, I'm very happy. When theta gets farther, it's a little bit annoying. And then here, I want to plot the risk of this guy. So now the thing with the risk of this guy is that it will depend on n.
So I will just pick some n that I'm happy with just so that I can actually draw a curve. Otherwise, I'm going to have to plot one curve per value of n. So let's just say, for example, that n is equal to 10. And so now I need to plot the function theta, 1 minus theta divided by 10. We know that theta, 1 minus theta is a curve that goes like this. At 1/2, it takes the value 1/4. That's the maximum. And then it's 0 at the end. So really, if n is equal to 1, this is what the variance looks like. The bias doesn't count in the risk. Yeah. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Sure. Can you move? All right. Are you guys good? All right. So now I have this picture. And I know I'm going up to 0.25. And there's a place where those curves cross. So if you're sure-- let's say you're talking about a presidential election, you know that those things are going to be really close. Maybe you're actually better off predicting 0.5 if you know it's not going to go too far. But that's for one observation, so that's the risk of X1. But if I look at the risk of Xn bar, all I'm doing is just crushing this curve down to 0. So as n increases, it's going to look more and more like this. It's the same curve divided by n. And so now I can just start to understand that for different values of theta, now I'm going to have to be very close to theta is equal to 1/2 if I want to start saying that Xn bar is worse than the naive estimator 0.5. Yeah. AUDIENCE: Sorry. I know you explained a little bit before, but can you just-- what is an intuitive definition of risk? What is it actually describing? PHILIPPE RIGOLLET: So either you can-- well, when you have an unbiased estimator, it's simple. It's just telling you it's the variance, because the theta that you have over there is really-- so in the definition of the risk, the theta that you have here if you're unbiased is really the expectation of theta hat. So that's really just the variance. So the risk is really telling you how much fluctuation I have around my expectation if unbiased. But actually here, it's telling you how much fluctuation I have on average around theta. So if you understand the notion of variance as being-- AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: What? AUDIENCE: Like variance on average. PHILIPPE RIGOLLET: No. AUDIENCE: No. PHILIPPE RIGOLLET: It's just like variance. AUDIENCE: Oh, OK. PHILIPPE RIGOLLET: So when you-- I mean, if you claim you understand what variance is, it's telling you what is the expected squared fluctuation around the expectation of my random variable. It's just telling you on average how far I'm going to be. And you take the square because you want to cancel the signs. Otherwise, you're going to get 0. AUDIENCE: Oh, OK. PHILIPPE RIGOLLET: And here it's saying, well, I really don't care what the expectation of theta hat is. What I want to get to is theta, so I'm looking at the expectation of the squared fluctuations around theta itself. If I'm unbiased, it coincides with the variance. But if I'm biased, then I have to account for the fact that I'm really not computing the-- AUDIENCE: OK. OK. Thanks. PHILIPPE RIGOLLET: OK? All right. Are there any questions? So here, what I really want to illustrate is that the risk itself is a function of theta most of the time. And so for different thetas, some estimators are going to be better than others. But there's also an entire range of estimators in between, where neither the bias nor the variance completely vanishes. And so here, you see you have no bias, but the variance can be large.
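That crossing is easy to locate numerically. A minimal sketch assuming numpy; n = 10 matches the curve drawn above:

```python
import numpy as np

theta = np.linspace(0, 1, 101)
n = 10
risk_xbar = theta * (1 - theta) / n       # Xn bar: zero bias, variance over n
risk_half = (0.5 - theta) ** 2            # constant 0.5: zero variance, bias squared
risk_x1 = theta * (1 - theta)             # X1 alone: the n = 1 curve

better = theta[risk_half < risk_xbar]     # where the naive 0.5 beats Xn bar
print(better.min(), better.max())         # roughly (0.35, 0.65) for n = 10
```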
Or, the other way around, you have a bias, but the variance is 0. So you can actually have this trade-off, and you can find things that are in the entire range in general. So those things are actually-- those trade-offs between bias and variance are usually much better illustrated if we're talking about multivariate parameters. If I actually look at a parameter which is the mean of some multivariate Gaussian, so an entire vector, then the bias is going to-- I can make the bias bigger by, for example, forcing all the coordinates of my estimator to be the same. So here, I'm going to get some bias, but the variance is actually going to be much better, because I get to average all the coordinates for this guy. And so really, the bias/variance trade-off is when you have multiple parameters to estimate, so you have a vector of parameters, a multivariate parameter, the bias increases when you're trying to pull more information across the different components to actually have a lower variance. So the more you average, the lower the variance. That's exactly what we've illustrated. As n increases, the variance decreases, like 1 over n or theta, 1 minus theta over n. And so this is how it happens in general. In this class, it's mostly one-dimensional parameter estimation, so it's going to be a little harder to illustrate that. But if you do, for example, non-parametric estimation, that's all you do. There's just bias/variance trade-offs all the time. And in between, when you have high-dimensional parametric estimation, that happens a lot as well. OK. So I'm just going to go quickly through those two remaining slides, because we've actually seen them. But I just wanted you to have somewhere a formal definition of what a confidence interval is. And so we fixed a statistical model for n observations, X1 to Xn. The parameter theta here is one-dimensional. Theta is a subset of the real line, and that's why I talk about intervals. An interval is a subset of the line. If I had a subset of R2, for example, that would no longer be called an interval, but a region, just because-- well, we could just say a set, a confidence set. But people like to say confidence region. So an interval is just a one-dimensional confidence region. And it has to be an interval as well. So a confidence interval of level 1 minus alpha-- the quality of a confidence interval is referred to as its level. It takes value 1 minus alpha for some positive alpha. And so the confidence level-- the level of the confidence interval is between 0 and 1. The closer to 1 it is, the better the confidence interval. The closer to 0, the worse it is. And so for any random interval-- so a confidence interval is a random interval. The bounds of this interval depend on random data. Just like we had X bar plus/minus 1 over square root of n, for example, or 2 over square root of n, this X bar was the random thing that would make those guys fluctuate. And so now I have an interval. And now I have its boundaries, but now the boundaries are not allowed to depend on my unknown parameter. Otherwise, it's not a confidence interval, just like an estimator that depends on the unknown parameter is not an estimator. The confidence interval has to be something that I can compute once I collect data. And so what I want is that-- so there's this weird notation. The fact that I write it this way-- that's the probability that I contains theta. You're used to seeing theta belongs to I. But here, I really want to emphasize that the randomness is in I.
And so the way you actually say it when you read this formula is the probability that I contains theta is at least 1 minus alpha. So it better be close to 1. You want 1 minus alpha to be very close to 1, because it's really telling you that whatever random interval I'm giving you, my error bars are actually covering the right theta. And I want this to be true. But since I don't know what my parameter theta is, I want this to hold true for all possible values of the parameter that nature may have come up with. So there's theta that changes here, so the distribution of the interval is actually changing with theta hopefully. And theta is changing with this guy. So regardless of the value of theta that I'm getting, I want that the probability that it contains the theta is actually larger than 1 minus alpha. So I'll come back to it in a second. I just want to say that here, we can talk about asymptotic level. And that's typically when you use the central limit theorem to compute this guy. Then you're not guaranteed that the value is at least 1 minus alpha for every n, but it's actually in the limit larger than 1 minus alpha. So maybe for each fixed n it's going to be not true. But as n goes to infinity, it's actually going to become true. If you want this to hold for every n, you actually need to use things such as Hoeffding's inequality that we described at some point, that hold for every n. So as a rule of thumb, if you use the central limit theorem, you're dealing with a confidence interval with asymptotic level 1 minus alpha. And the reason is because you actually want to get the quantiles of the normal-- the Gaussian distribution that comes from the central limit theorem. And if you want to use Hoeffding's, for example, you might actually get away with a confidence interval that's actually true even non-asymptotically. It's just the regular confidence interval. So this is the formal definition. It's a bit of a mouthful. But we actually-- the best way to understand them is to build them. Now, at some point I said-- and I think it was part of the homework-- so here, I really say the probability the true parameter belongs to the confidence interval is actually 1 minus alpha. And so that's because here, this confidence interval is still a random variable. Now, if I start plugging in numbers instead of the random variables X1 to Xn, I start putting 1, 0, 0, 1, 0, 0, 1, like I did for the kiss example, then in this case, the random interval is actually going to be 0.42, 0.65. And this guy, the probability that theta belongs to it is not 1 minus alpha. It's either 0 if it's not in there or it's 1 if it's in there. So here is the example that we had. So let's look back into our favorite example, which is the average of Bernoulli random variables-- we've studied that, maybe that's the third time already. So the sample average, Xn bar, is a strongly consistent estimator of p. That was one of the properties that we wanted. Strongly consistent means that as n goes to infinity, it converges almost surely to the true parameter. That's the strong law of large numbers. It is consistent also, because it's strongly consistent, so it also converges in probability, which makes it consistent. It's unbiased. We've seen that. We've actually computed its quadratic risk. And now what I have is that if I look at-- thanks to the central limit theorem, we actually did this.
We built a confidence interval at level 1 minus alpha-- asymptotic level, sorry, asymptotic level 1 minus alpha. And so here, this is how we did it. Let me just go through it again. So we know from the central limit theorem-- so the central limit theorem tells us that square root of n times Xn bar minus p, divided by square root of p1 minus p, converges in distribution as n goes to infinity to some standard normal distribution. So what it means is that if I look at the probability under the true p that square root of n, Xn bar minus p divided by square root of p1 minus p, is less than q alpha over 2 in absolute value, where this is the definition of the quantile, then this guy-- and I'm actually going to use the same notation, limit as n goes to infinity, this is the same thing. So this is actually going to be equal to 1 minus alpha. That's exactly what I did last time. This is by definition of the quantile of a standard Gaussian and of a limit in distribution. So the probability computed on this guy in the limit converges to the probability computed on this guy. And we know that this is just the probability that the absolute value of some N(0, 1) is less than q alpha over 2. And so in particular, if it's equal, then I can put some larger than or equal to, which guarantees my asymptotic confidence level. And I just solve for p. So this is equivalent to the limit as n goes to infinity of the probability that p is between Xn bar minus q alpha over 2 times square root of p1 minus p divided by square root of n, and Xn bar plus q alpha over 2 times square root of p1 minus p divided by square root of n, is larger than or equal to 1 minus alpha. And so there you go. I have my confidence interval. Except that's not one, right? We just said that the bounds of a confidence interval may not depend on the unknown parameter. And here, they do. And so we actually came up with two ways of getting rid of this. Since we only need this thing-- so this thing, as we said, is really equal. Every time I'm going to make this guy smaller and this guy larger, I'm only going to increase the probability. And so what we do is we actually just take the largest possible value for p1 minus p, which makes the interval as large as possible. And so now I have this. I just do one of the two tricks. I replace p1 minus p by its upper bound, which is 1/4. As we said, p1 minus p, the function looks like this. So I just take the value here at 1/2. Or, I can use Slutsky and say that if I replace p by Xn bar, that's the same as just replacing p by Xn bar here. And by Slutsky, we know that this is actually converging also to some standard Gaussian. We've seen that when we saw Slutsky as an example. And so those two things-- actually, just because I'm taking the limit and I'm only caring about the asymptotic confidence level, I can actually just plug in consistent quantities in there, such as Xn bar in place of p. And that gives me another confidence interval. All right. So this by now, hopefully after doing it three times, you should really, really be comfortable with just creating this confidence interval. We did it three times in class. I think you probably did it another couple times in your homework. So just make sure you're comfortable with this. All right. That's one of the basic things you would want to know. Are there any questions? Yes. AUDIENCE: So Slutsky holds for any single response set p. But Xn converges [INAUDIBLE]. PHILIPPE RIGOLLET: So that's not Slutsky, right? AUDIENCE: That's [INAUDIBLE].
PHILIPPE RIGOLLET: So Slutsky tells you that if you-- Slutsky's about combining two types of convergence. So Slutsky tells you that if you actually have one Xn that converges to X in distribution and Yn that converges to Y in probability, then you can actually multiply Xn and Yn and get that the limit in distribution is the product of X and Y, where Y is now a constant. And here we have the constant, which is 1. But I did that already, right? Using Slutsky to replace P by Xn bar, we've done that last time, maybe a couple of times ago, actually. Yeah. AUDIENCE: So I guess these statements are [INAUDIBLE]. PHILIPPE RIGOLLET: That is correct. AUDIENCE: So could we like figure out [INAUDIBLE] can we set a finite [INAUDIBLE]. PHILIPPE RIGOLLET: So of course, the short answer is no. So here's how you would go about thinking about which method is better. So there's always the more conservative method. The first one, the only thing you're losing is the rate of convergence of the central limit theorem. So if n is large enough so that the central limit theorem approximation is very good, then that's all you're going to be losing. Of course, the price you pay is that your confidence interval is wider than it would be if you were to use Slutsky for this particular problem, typically wider. Actually, it is always wider, because Xn bar times 1 minus Xn bar is always less than 1/4 as well. And so that's the first thing-- with Slutsky, you're basically relying on the asymptotics again. Now of course, you don't want to be conservative, because you actually want to squeeze as much from your data as you can. So it depends on how comfortable and how critical it is for you to put valid error bars. If they're valid in the asymptotics, then maybe you're actually going to go with Slutsky so it actually gives you slightly narrower confidence intervals and so you feel like you're a little more-- you have a more precise answer. Now, if you really need to be super-conservative, then you're actually going to go with the p1 minus p. Actually, if you need to be even more conservative, you are going to go with Hoeffding's so you don't even have to rely on the asymptotic level at all. But then your confidence interval becomes wider and wider as you go. So it depends on-- I mean, there's a lot of the art in statistics which is gauging how critical it is for you to output valid error bounds or if they're really just here to be indicative of the precision of the estimator you gave from a more qualitative perspective. AUDIENCE: So the error there is [INAUDIBLE]? PHILIPPE RIGOLLET: Yeah. So here, there's basically a bunch of errors. So there's a theorem called Berry-Esseen that quantifies how far this probability is from 1 minus alpha, but the constants are terrible. So it's not very helpful, but it tells you how this thing becomes smaller as n grows. And then for Slutsky, again you're multiplying something that converges by something that fluctuates around 1, so you need to understand how this thing fluctuates. Now, there's something that shows up. Basically, what is the slope of the function 1 over square root of x times 1 minus x around the value you're interested in? And so if this function is super-sharp, then small fluctuations of Xn bar around its expectation are going to lead to really high fluctuations of the function itself.
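To see the two constructions side by side, here is a minimal sketch assuming numpy and scipy; the 0/1 data is a made-up toy sample, and q is the quantile with alpha over 2 to its right:

```python
import numpy as np
from scipy.stats import norm

x = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1])   # hypothetical 0/1 sample
n, alpha = len(x), 0.05
p_hat = x.mean()
q = norm.ppf(1 - alpha / 2)                  # q_{alpha/2} ~ 1.96 for alpha = 0.05

half_cons = q * 0.5 / np.sqrt(n)             # conservative trick: p(1 - p) <= 1/4
half_slut = q * np.sqrt(p_hat * (1 - p_hat)) / np.sqrt(n)   # Slutsky plug-in
print(p_hat - half_cons, p_hat + half_cons)  # wider, valid without knowing p
print(p_hat - half_slut, p_hat + half_slut)  # narrower, leans on the asymptotics twice
```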
So if you're looking at-- if you have f of Xn bar and f around say the true P, if f is really sharp like that, then if you move a little bit here, then you're going to move really a lot on the y-axis. So that's what the function here-- the function you're interested in is 1 over square root of X1 minus X. So what does this function look like around the point where you think P is the true parameter? Its derivative really is what matters. OK? Any other question. OK. So it's important, because now we're going to switch to the real let's do some hardcore computation type of things. All right. So in this chapter, we're going to talk about maximum likelihood estimation. Who has already seen maximum likelihood estimation? OK. And who knows what a convex function is? OK. So we'll do a little bit of reminders on those things. So those things are when we do maximum likelihood estimation, likelihood is the function, so we need to maximize a function. That's basically what we need to do. And if I give you a function, you need to know how to maximize this function. Sometimes, you have closed-form solutions. You can take the derivative and set it equal to 0 and solve it. But sometimes, you actually need to resort to algorithms to do that. And there's an entire industry doing that. And we'll briefly touch upon it, but this is definitely not the focus of this class. OK. So before diving directly into the definition of the likelihood and what is the definition of the maximum likelihood estimator, what I'm going to try to do is to give you an insight for what we're actually doing when we do maximum likelihood estimation. So remember, we have a model on a sample space E and some candidate distributions P theta. And really, your goal is to estimate a true theta star, the one that generated some data, X1 to Xn, in an iid fashion. But this theta star is really a proxy for us to know that we actually understand the distribution itself. The goal of knowing theta star is so that you can actually know what P theta star. Otherwise, it has-- well, sometimes we said it has some meaning itself, but really you want to know what the distribution is. And so your goal is to actually come up with the distribution-- hopefully that comes from the family P theta-- that's close to P theta star. So in a way, what does it mean to have two distributions that are close? It means that when you compute probabilities on one distribution, you should have the same probability on the other distribution pretty much. So what we can do is say, well, now I have two candidate distributions. So if theta hat leads to a candidate distribution P theta hat, and this is the true theta star, it leads to the true distribution P theta star according to which my data was drawn. That's my candidate. As a statistician, I'm supposed to come up with a good candidate, and this is the truth. And what I want is that if you actually give me the distribution, then I want when I'm computing probabilities for this guy, I know what the probabilities for the other guys are. And so really what I want is that if I compute a probability under theta hat of some interval a, b, it should be pretty close to the probability under theta star of a, b. And more generally, if I want to take the union of two intervals, I want this to be true. If I take just 1/2 lines, I want this to be true from 0 to infinity, for example, things like this. I want this to be true for all of them at once. And so what I do is that I write A for a probability event. 
And I want that P hat of A is close to P star of A for any event A in the sample space. Does that sound like a reasonable goal for a statistician? So in particular, if I want those to be close, I want the absolute value of their difference to be close to 0. And this turns out to be-- if I want this to hold for all possible A's, I have all possible events, so I'm going to actually maximize over these events. And I'm going to look at the worst possible event on which theta hat can depart from theta star. And so rather than defining it specifically for theta hat and theta star, I'm just going to say, well, if you give me two probability measures, P theta and P theta prime, I want to know how close they are. Well, if I want to measure how close they are by how they can differ when I measure the probability of some event, I'm just looking at the absolute value of the difference of the probabilities and I'm just maximizing over the worst possible event that might actually make them differ. Agreed? That's a pretty strong notion. So if the total variation between theta and theta prime is small, it means that for all possible A's that you give me, then P theta of A is going to be close to P theta prime of A, because if-- let's say I just found the bound on the total variation distance, which is 0.01. All right. So that means that this is going to be larger than the max over A of P theta minus P theta prime of A, which means that for any A-- actually, let me write P theta hat and P theta star, like we said, theta hat and theta star. And so if I have a bound, say, on the total variation, which is 0.01, that means that P theta hat-- every time I compute a probability on P theta hat, it's basically in the interval P theta star of A, the one that I really wanted to compute, plus or minus 0.01. This has nothing to do with confidence interval. This is just telling me how far I am from the value of actually trying to compute. And that's true for all A. And that's key. That's where this max comes into play. It just says, I want this bound to hold for all possible A's at once. So this is actually a very well-known distance between probability measures. It's the total variation distance. It's extremely central to probabilistic analysis. And it essentially tells you that every time-- if two probability distributions are close, then it means that every time I compute a probability under P theta but I really actually have data from P theta prime, then the error is no larger than the total variation. OK. So this is maybe not the most convenient way of finding a distance. I mean, how are you going-- in reality, how are you to compute this maximum over all possible events? I mean, it's just crazy, right? There's an infinite number of them. It's much larger than the number of intervals, for example, so it's a bit annoying. And so there's actually a way to compress it by just looking at the basically function distance or vector distance between probability mass functions or probability density functions. So I'm going to start with the discrete version of the total variation. So throughout this chapter, I will make the difference between discrete random variables and continuous random variables. It really doesn't matter. All it means is that when I talk about discrete, I will talk about probability mass functions. And when I talk about continuous, I will talk about probability density functions. When I talk about probability mass functions, I talk about sums. 
When I talk about probability density functions, I talk about integrals. But they're all the same thing, really. So let's start with the probability mass function. Everybody remembers what the probability mass function of a discrete random variable is. This is the function that tells me for each possible value that it can take, the probability that it takes this value. So the Probability Mass Function, PMF, is just the function for all x in the sample space tells me the probability that my random variable is equal to this little value. And I will denote it by P sub theta of X. So what I want is, of course, that the sum of the probabilities is 1. And I want them to be non-negative. Actually, typically we will assume that they are positive. Otherwise, we can just remove this x from the sample space. And so then I have the total variation distance, I mean, it's supposed to be the maximum overall sets of-- of subsets of E, such that the probability of A minus probability of theta prime of A-- it's complicated, but really there's this beautiful formula that tells me that if I look at the total variation between P theta and P theta prime, it's actually equal to just 1/2 of the sum for all X in E of the absolute difference between P theta X and P theta prime of X. So that's something you can compute. If I give you two probability mass functions, you can compute this immediately. But if I give you just the densities and the original distribution, the original definition where you have to max over all possible events, it's not clear you're going to be able to do that very quickly. So this is really the one you can work with. But the other one is really telling you what it is doing for you. It's controlling the difference of probabilities you can compute on any event. But here, it's just telling you, well, if you do it for each simple event, it's little x. It's actually simple. Now, if we have continuous random variables-- so by the way, I didn't mention, but discrete means Bernoulli. Binomial, but not only those that have finite support, like Bernoulli has support of size 2, binomial NP has support of size n-- there's n possible values it can take-- but also Poisson. Poisson distribution can take an infinite number of values, all the positive integers, non-negative integers. And so now we have also the continuous ones, such as Gaussian, exponential. And what characterizes those guys is that they have a probability density. So the density, remember the way I use my density is when I want to compute the probability of belonging to some event A. The probability of X falling to some subset of the real line A is simply the integral of the density on this set. That's the famous area under the curve thing. So since for each possible value, the probability at X-- so I hope you remember that stuff. That's just probably something that you must remember from probability. But essentially, we know that the probability that X is equal to little x is 0 for a continuous random variable, for all possible X's. There's just none of them that actually gets weight. So what we have to do is to describe the fact that it's in some little region. So the probability that it's in some interval, say, a, b, this is the integral between A and B of f theta of X, dx. So I have this density, such as the Gaussian one. And the probability that I belong to the interval a, b is just the area under the curve between A and B. If you don't remember that, please take immediate remedy. So this function f, just like P, is non-negative. 
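That computable formula is one line of code. A minimal sketch assuming numpy and scipy, for two Poisson distributions, with the infinite support truncated where both PMFs are negligible (an assumption that costs nothing numerically here):

```python
import numpy as np
from scipy.stats import poisson

k = np.arange(200)                       # truncated support; the remaining tail mass is negligible
p = poisson.pmf(k, 2.0)                  # PMF of P_theta
q = poisson.pmf(k, 2.5)                  # PMF of P_theta'
tv = 0.5 * np.abs(p - q).sum()           # (1/2) sum over x of |p_theta(x) - p_theta'(x)|
print(tv)
```

The continuous case works the same way with the density f and an integral in place of the sum.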
And rather than summing to 1, it integrates to 1 when I integrate it over the entire sample space E. And now the total variation, well, it takes basically the same form. I said that you essentially replace sums by integrals when you're dealing with densities. And here, it's just saying, rather than having 1/2 of the sum of the absolute values, you have 1/2 of the integral of the absolute value of the difference. Again, if I give you two densities and if you're not too bad at calculus, which you will often be, because there's lots of them you can actually not compute. But if I gave you, for example, two Gaussian densities, exponential minus x squared, blah, blah, blah, and I say, just compute the total variation distance, you could actually write it as an integral. Now, whether you can actually reduce this integral to some particular number is another story. But you could technically do it. So now, you have actually a handle on this thing and you could technically ask Mathematica, whereas asking Mathematica to take the max over all possible events is going to be difficult. All right. So the total variation has some properties. So let's keep on the board the definition that involves, say, the densities. So think Gaussian in your mind. And you have two Gaussians, one with mean theta and one with mean theta prime. And I'm looking at the total variation between those two guys. So if I look at P theta minus-- sorry. TV between P theta and P theta prime, this is equal to 1/2 of the integral between f theta, f theta prime. And when I don't write it-- so I don't write the X, dx but it's there. And then I integrate over E. So what is this thing doing for me? It's just saying, well, if I have-- so think of two Gaussians. For example, I have one that's here and one that's here. So this is let's say f theta, f theta prime. This guy is doing what? It's computing the absolute value of the difference between f and f theta prime. You can check for yourself that graphically, this I can represent as an area not under the curve, but between the curves. So this is this guy. Now, this guy is really the integral of the absolute value. So this thing here, this area, this is 2 times the total variation. The scaling 1/2 really doesn't matter. It's just if I want to have an actual correspondence between the maximum and the other guy, I have to do this. So this is what it looks like. So we have this definition. And so we have a couple of properties that come into this. The first one is that it's symmetric. TV of P theta and P theta prime is the same as the TV between P theta prime and P theta. Well, that's pretty obvious from this definition. I just flip those two, I get the same number. It's actually also true if I take the maximum. Those things are completely symmetric in theta and theta prime. You can just flip them. It's non-negative. Is that clear to everyone that this thing is non-negative? I integrate an absolute value, so this thing is going to give me some non-negative number. And so if I integrate this non-negative number, it's going to be a non-negative number. The fact also that it's an area tells me that it's going to be non-negative. The nice thing is that if TV is equal to zero, then the two distributions, the two probabilities are the same. That means that for every A, P theta of A is equal to P theta prime of A. Now, there's two ways to see that. The first one is to say that if this integral is equal to 0, that means that for almost all X, f theta is equal to f theta prime. 
The only way I can integrate a non-negative function and get 0 is that it's 0 pretty much everywhere. And so what it means is that the two densities have to be the same pretty much everywhere, which means that the distributions are the same. But this is not really the way you want to do this, because you have to understand what pretty much everywhere means-- which I should really say almost everywhere. That's the formal way of saying it. But let's go to this definition-- which is gone. Yeah. That's the one here. The max of those two guys, if this maximum is equal to 0-- I have a maximum of non-negative numbers, their absolute values. If their maximum is equal to 0, well, they better be all equal to 0, because if one is not equal to 0, then the maximum is not equal to 0. So for the maximum to be equal to 0, each of the individual absolute values has to be equal to 0, which means that the probability here is equal to this probability here for every event A. So those two things-- this is nice, right? That's called definiteness. The total variation equal to 0 implies that P theta is equal to P theta prime. So that's really some notion of distance, right? That's what we want. If this thing being small implied that P theta could be all over the place compared to P theta prime, that would not help very much. Now, there's also the triangle inequality that follows immediately from the triangle inequality inside this guy. If I squeeze in some f theta prime prime in there, I'm going to use the triangle inequality and get the triangle inequality for the whole thing. Yeah? AUDIENCE: The fact that you need two definitions of the [INAUDIBLE], is it something obvious or is it complicated? PHILIPPE RIGOLLET: I'll do it for you now. So let's just prove that those two things are actually giving me the same definition. So what I'm going to do is I'm actually going to start with the second one. And I'm going to write-- I'm going to start with the density version. But as an exercise, you can do it for the PMF version if you prefer. So I'm going to start with the fact that-- so I'm going to write f and g so I don't have to write f theta and f theta prime. So think of this as being f sub theta, and think of this guy as being f sub theta prime. I just don't want to have to write indices all the time. So I'm going to start with this thing, the integral of the absolute value of f of X minus g of X, dx. The first thing I'm going to do is-- this is an absolute value, so either the number in the absolute value is positive and I actually keep it like that, or it's negative and I flip its sign. So let's just split between those two cases. So this thing is equal to 1/2 the integral of-- so let me actually write the set A star as being the set of X's such that f of X is larger than g of X. So that's the set on which the difference is going to be positive, and on the complement the difference is going to be negative. So this, again, is equivalent to f of X minus g of X is positive. OK. Everybody agrees? So this is the set I'm interested in. So now I'm going to split my integral into two parts, on A star and on its complement. So on A star, f is larger than g, so the absolute value is just the difference itself. So here I put parentheses rather than absolute value. And then I have plus 1/2 of the integral on the complement. What are you guys used to, to write the complement-- the C or the bar? To the C? And so here on the complement, f is less than g, so this is actually really g of X minus f of X, dx. Everybody's with me here?
So I just said-- I mean, those are just rewriting what the definition of the absolute value is. OK. So now there's nice things that I know about f and g. And the two nice things are that the integral of f is equal to 1 and the integral of g is equal to 1. This implies that the integral of f minus g is equal to what? AUDIENCE: 0. PHILIPPE RIGOLLET: 0. And so now that means that if I want to just go from the integral on A star complement to the integral on A star, I just have to flip the sign. So that implies that the integral on A star complement of g of X minus f of X, dx, is simply equal to the integral on A star of f of X minus g of X, dx. All right. So now this guy becomes this guy over there. So I have 1/2 of this plus 1/2 of the same guy, so that means that 1/2 of the integral of the absolute value of f minus g-- so that was my original definition-- this thing is actually equal to the integral on A star of f of X minus g of X, dx. And this is simply equal to P of A star-- so say Pf of A star minus Pg of A star. Which one is larger than the other one? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: It is. Just look at this board. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: What? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: The first one has to be larger, because this thing is actually equal to a non-negative number. So now I have this difference of two things, and so I'm closer to the actual definition. But I still need to show you that this thing is the maximum value. So this is definitely at most the maximum over A of Pf of A minus Pg of A. That's certainly true. Right? We agree with this? Because this is just for one specific A, and I'm bounding it by the maximum over all possible A. So that's clearly true. So now I have to go the other way around. I have to show you that the max is actually attained at this guy, A star. So why would that be true? Well, let's just inspect this thing over there. So we want to show that if I take any other A in this integral than this guy A star, it's actually going to decrease its value. So we have this function. I'm going to call this function delta. And what we have is-- so let's say this function looks like this. Now it's the difference between two densities. It doesn't have to be non-negative. But it certainly has to integrate to 0. And so now I take this thing. And the A star, what is the set A star here? The set A star is the set over which the function delta is non-negative. So that's just the definition. A star was the set over which f minus g was positive, and f minus g was just called delta. So what it means is that what I'm really integrating is delta on this set. So it's this area under the curve, just on the positive part. Agreed? So now let's just make some tiny variations around this guy. If I take A to be larger than A star-- so let me add, for example, this part here. That means that when I compute my integral, I'm removing this area under the curve. It's negative. The integral here is negative. So if I start adding something to A, the value goes lower. If I start removing something from A, like say this guy, I'm actually removing this value from the integral. So there's no way out. I'm actually stuck. This A star is the one that actually maximizes the integral of this function. So we used the fact that for any function, say delta, the integral over A of delta is less than the integral over the set of X's such that delta of X is non-negative of delta of X, dx.
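Numerically, the identity just derived checks out. A minimal sketch assuming scipy, for two unit-variance Gaussians: the half-L1 integral and Pf of A star minus Pg of A star, with A star the half-line where f is larger than g, agree:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

theta, theta_p = 0.0, 1.0                # f = N(0, 1), g = N(1, 1)
m = (theta + theta_p) / 2                # the two densities cross at the midpoint

integrand = lambda x: 0.5 * abs(norm.pdf(x, theta) - norm.pdf(x, theta_p))
tv_l1 = quad(integrand, -np.inf, m)[0] + quad(integrand, m, np.inf)[0]

# A star = {f > g} = (-inf, m), since theta < theta_p
tv_astar = norm.cdf(m, theta) - norm.cdf(m, theta_p)
print(tv_l1, tv_astar)                   # both ~0.3829: the two definitions agree
```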
And that's an obvious fact, just by picture, say. And that's true for all A. Yeah? AUDIENCE: [INAUDIBLE] could you use like a portion under the axis as like less than or equal to the portion above the axis? PHILIPPE RIGOLLET: It's actually equal. We know that the integral of f minus g-- the integral of delta is 0. So there's actually exactly the same area above and below. But yeah, you're right. You could go to the extreme cases. You're right. No. It's actually still be true, even if there was-- if this was a constant, that would still be true. Here, I never use the fact that the integral is equal to 0. I could shift this function by 1 so that the integral of delta is equal to 1, and it would still be true that it's maximized when I take A to be the set where it's positive. Just need to make sure that there is someplace where it is, but that's about it. Of course, we used this before, when we made this thing. But just the last argument, this last fact does not require that. All right. So now we have this notion of-- I need the-- OK. So we have this notion of distance between probability measures. I mean, these things are exactly what-- if I were to be in a formal math class and I said, here are the axioms that a distance should satisfy, those are exactly those things. If it's not satisfying this thing, it's called pseudo-distance or quasi-distance or just metric or nothing at all, honestly. So it's a distance. It's symmetric, non-negative, equal to 0, if and only if the two arguments are equal, then it satisfies the triangle inequality. And so that means that we have this actual total variation distance between probability distributions. And here is now a statistical strategy to implement our goal. Remember, our goal was to spit out a theta hat, which was close such that P theta hat was close to P theta star. So hopefully, we were trying to minimize the total variation distance between P theta hat and P theta star. Now, we cannot do that, because just by this fact, this slide, if we wanted to do that directly, we would just take-- well, let's take theta hat equals theta star and that will give me the value 0. And that's the minimum possible value we can take. The problem is that we don't know what the total variation is to something that we don't know. We know how to compute total variations if I give you the two arguments. But here, one of the arguments is not known. P theta star is not known to us, so we need to estimate it. And so here is the strategy. Just build an estimator of the total variation distance between P theta and P theta star for all candidate theta, all possible theta in capital theta. Now, if this is a good estimate, then when I minimize it, I should get something that's close to P theta star. So here's the strategy. This is my function that maps theta to the total variation between P theta and P theta star. I know it's minimized at theta star. That's definitely TV of P-- and the value here, the y-axis should say 0. And so I don't know this guy, so I'm going to estimate it by some estimator that comes from my data. Hopefully, the more data I have, the better this estimator is. And I'm going to try to minimize this estimator now. And if the two things are close, then the minima should be close. That's a pretty good estimation strategy. The problem is that it's very unclear how you would build this estimator of TV, of the Total Variation. So building estimators, as I said, typically consists in replacing expectations by averages. 
But there's no simple way of expressing the total variation distance as the expectations with respect to theta star of anything. So what we're going to do is we're going to move from total variation distance to another notion of distance that sort of has the same properties and the same feeling and the same motivations as the total variation distance. But for this guy, we will be able to build an estimate for it, because it's actually going to be of the form expectation of something. And we're going to be able to replace the expectation by an average and then minimize this average. So this surrogate for total variation distance is actually called the Kullback-Leibler divergence. And why we call it divergence is because it's actually not a distance. It's not going to be symmetric to start with. So this Kullback-Leibler or even KL divergence-- I will just refer to it as KL-- is actually just more convenient. But it has some roots coming from information theory, which I will not delve into. But if any of you is actually a Core 6 student, I'm sure you've seen that in some-- I don't know-- course that has any content on information theory. All right. So the KL divergence between two probability measures, P theta and P theta prime-- and here, as I said, it's not going to be the symmetric, so it's very important for you to specify which order you say it is, between P theta and P theta prime. It's different from saying between P theta prime and P theta. And so we denote it by KL. And so remember, before we had either the sum or the integral of 1/2 of the distance-- absolute value of the distance between the PMFs and 1/2 of the absolute values of the distances between the probability density functions. And then we replace this absolute value of the distance divided by 2 by this weird function. This function is P theta, log P theta, divided by P theta prime. That's the function. That's a weird function. OK. So this was what we had. That's the TV. And the KL, if I use the same notation, f and g, is integral of f of X, log of f of X over g of X, dx. It's a bit different. And I go from discrete to continuous using an integral. Everybody can read this. Everybody's fine with this. Is there any uncertainty about the actual definition here? So here I go straight to the definition, which is just plugging the functions into some integral and compute. So I don't bother with maxima or anything. I mean, there is something like that, but it's certainly not as natural as the total variation. Yes? AUDIENCE: The total variation, [INAUDIBLE].. PHILIPPE RIGOLLET: Yes, just because it's hard to build anything from total variation, because I don't know it. So it's very difficult. But if you can actually-- and even computing it between two Gaussians, just try it for yourself. And please stop doing it after at most six minutes, because you won't be able to do it. And so it's just very hard to manipulate, like this integral of absolute values of differences between probability density function, at least for the probability density functions we're used to manipulate is actually a nightmare. And so people prefer KL, because for the Gaussian, this is going to be theta minus theta prime squared. And then we're going to be happy. And so those things are much easier to manipulate. But it's really-- the total variation is telling you how far in the worst case the two probabilities can be. This is really the intrinsic notion of closeness between probabilities. 
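The Gaussian claim is easy to verify numerically. A minimal sketch assuming scipy: for N(theta, sigma squared) versus N(theta prime, sigma squared), the KL integral collapses to theta minus theta prime squared over 2 sigma squared:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

theta, theta_p, sigma = 0.0, 1.0, 1.0

# integrand f_theta * log(f_theta / f_theta'), written with logpdf for stability in the tails
f = lambda x: norm.pdf(x, theta, sigma) * (norm.logpdf(x, theta, sigma)
                                           - norm.logpdf(x, theta_p, sigma))
kl_numeric, _ = quad(f, -np.inf, np.inf)
kl_closed = (theta - theta_p) ** 2 / (2 * sigma ** 2)
print(kl_numeric, kl_closed)             # both 0.5; with unequal variances, swapping
                                         # the two arguments would give different values
```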
So that's really the one we would go after if we could. Sometimes people will compute it numerically, so that they can say, here's the total variation distance between those two things-- and then you know what that means. If I tell you the total variation is 0.01, like we did here, it has a very specific meaning. If I tell you the KL divergence is 0.01, it's not clear what that means. OK. So what are the properties? The KL divergence between P theta and P theta prime is different from the KL divergence between P theta prime and P theta, in general. I say in general because, of course, if theta is equal to theta prime, the two are certainly equal-- but there are cases where they're not. The KL divergence is non-negative. Who knows Jensen's inequality here? That should be a subset of the people who raised their hand when I asked who knows what a convex function is. All right, so you know what Jensen's inequality is. The proof is a one-step application of Jensen's inequality, which we will not go into in detail. But it's basically an inequality comparing the expectation of a convex function of a random variable to the convex function of the expectation of that random variable. If you know Jensen, have fun and prove it. What's really nice is that if the KL is equal to 0, then the two distributions are the same. And that's the property we're looking for. Everything else we're happy to throw out-- and actually, if you pay attention, we really are throwing out everything else. So: it's not symmetric; it does not satisfy the triangle inequality in general; but it's non-negative, and it's 0 if and only if the two distributions are the same. And that's all we care about. That's why we call it a divergence rather than a distance, and a divergence will be enough for our purposes. And actually, this asymmetry-- the first time I saw it, I was just annoyed. I thought, can't we just, I don't know, take the average of the KL between P theta and P theta prime and the KL between P theta prime and P theta? You would think you could symmetrize it by taking the average of the two possible values it can take. The problem is that this will still not satisfy the triangle inequality. There's basically no way to turn it into something that is a distance. But the divergence does a pretty good thing for us, and this is what will allow us to estimate it and overcome what we could not do with the total variation. So the first thing you want to notice is that the KL divergence-- sorry, not the total variation distance-- is actually an expectation of something. Look at what it is here. It's the integral of some function against a density. That's exactly the definition of an expectation, right? So this is the expectation of this particular function with respect to this density f. In particular, if I make the true distribution the first argument, this is an expectation-- with respect to the true distribution from which my data is actually drawn-- of the log of this ratio. So, aha-- I'm a statistician. Now I have an expectation. I can replace it by an average, because I have data from this distribution. So I could replace the expectation by an average and try to minimize that. By the way, the star here should be on the theta, not on the P, right? That's P theta star, not P star theta.
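Here is a minimal sketch, not from the lecture, of that last point: for N(theta, 1) models, the average of the log-ratio over a sample from P theta star does converge to the KL, by the law of large numbers-- but note the catch flagged next in the lecture, that this average still uses the unknown theta star. All numbers here are made up for illustration:

```python
import numpy as np
rng = np.random.default_rng(0)

theta_star, theta = 2.0, 1.0                    # theta_star is unknown in practice
x = rng.normal(theta_star, 1.0, size=100_000)   # IID data from P_theta_star

def log_density(x, theta):
    """Log density of N(theta, 1)."""
    return -0.5 * (x - theta) ** 2 - 0.5 * np.log(2 * np.pi)

# Average of log(p_star(X) / p_theta(X)) approximates the KL.
kl_hat = np.mean(log_density(x, theta_star) - log_density(x, theta))
print(kl_hat)                         # ~0.5
print((theta_star - theta) ** 2 / 2)  # exact KL = 0.5
# The catch: this still calls log_density(x, theta_star),
# which we cannot compute without knowing theta_star.
```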
But at this point, I still cannot compute it, because this P theta star shows up inside, and I don't know what it is. And that's where the log plays a role. If you pay attention, I said you could use Jensen to prove all this stuff-- and you could actually replace the log by any concave function; that's called an f-divergence. But the log has a very, very specific property, which allows us to say that the log of the ratio is the difference of the logs. Now, this first term does not depend on theta. If I think of this KL divergence as a function of theta, the first part is actually a constant: if I change theta, it never changes; it depends only on theta star. So if I look at the function theta maps to KL of P theta star, P theta, it's of the form: expectation with respect to theta star of log of P theta star of X, minus expectation with respect to theta star of log of P theta of X. Now, as I said, this second expectation is a function of theta. When theta changes, this term changes-- and that's a good thing; we want something that reflects how close theta and theta star are. But the first term is not going to change. It is a fixed value. Actually, it's the negative entropy of P theta star-- if you've heard of KL, you've probably heard of entropy, and this is basically minus the entropy. It's a quantity that depends only on theta star. But it's just a number; I could compute it if I told you, say, that this was N of theta star, 1. So now I'm going to try to minimize an estimate of this function. And minimizing a function, or that function plus a constant, is the same thing: I'm just shifting the function up or down, but it has the same minimizer. OK. So the function that maps theta to KL of P theta star, P theta is of the form: constant, minus the expectation of the log of P theta. Everybody agrees? Any questions about this? Any remarks, including "I have no idea what's happening right now"? OK, we're good? Yeah. AUDIENCE: So when you're actually employing this method, how do you know which theta to use as theta star and which isn't? PHILIPPE RIGOLLET: So this is not a method just yet, right? I'm just describing to you what the KL divergence between two distributions is. If you really wanted to compute it, you would need to know what P theta star is and what P theta is. AUDIENCE: Right. PHILIPPE RIGOLLET: So now let's move on one step. I don't know the expectation with respect to theta star. But I have data that comes from the distribution P theta star. So the expectation, by the law of large numbers, should be close to the average. And what I'm doing is a very standard estimation method: you write something as an expectation, with respect to the data-generating process, of some function, and then you replace it by the average of this function over the data. The law of large numbers tells me those two quantities should be close. Now, it doesn't mean we're done. When we did Xn bar, that was the end of the day: we had an expectation, we replaced it by an average, and we were gone. But here, we still have to do something, because this is not telling me what theta is. I still have to minimize this average. So this is now my candidate estimator for the KL: KL hat.
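Continuing the Gaussian sketch from above (not from the lecture; it reuses x and log_density defined there): the only theta-dependent piece of the KL is minus the expectation of log p theta, and that expectation can be replaced by an average over the data-- no theta star needed anymore.

```python
# Continues the previous sketch: x and log_density are defined there.
def kl_hat_up_to_constant(theta, x):
    """Estimates KL(P_theta_star, P_theta) minus an unknown constant."""
    return -np.mean(log_density(x, theta))

print(kl_hat_up_to_constant(1.0, x))  # computable from the data alone
print(kl_hat_up_to_constant(2.0, x))  # smaller: theta = 2 is closer to theta_star
```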
And that's the one where I said, well, it's going to be of the form: a constant, minus something. And this constant, I don't know-- you're right, I have no idea what this constant is; it depends on P theta star. But then I have minus something that I can completely compute: if you give me data and a theta, I can compute this entire thing. And now what I claim is that the minimizers of f of x and of f of x plus 4-- or 4 plus f of x, say-- are the same thing. I'm just shifting the plot of my function up and down, but the minimizer stays exactly where it is. So now I have a function of theta-- this is KL hat of P theta star, P theta. I don't know where exactly this function sits: it might very well be this curve, or this curve-- each one a translation along the y-axis of the others. And the amount I translate by depends on theta star, which I don't know. But what I claim is that the minimizer is always this same point, regardless of what that value is. OK? So when I say constant, I mean a constant with respect to theta. It's an unknown constant, but it's constant in theta, so, without loss of generality, I can assume it's 0 for my purposes-- or 25, if you prefer. All right. So we'll just keep going on this property next time, and we'll see how, from here, the likelihood is actually going to come out of this formula. Thanks.
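To close the loop on the strategy sketched in this lecture, here is a self-contained version, not from the lecture, of the whole pipeline for N(theta, 1) models: minimize the estimated "KL up to a constant" over a grid of candidate thetas. The unknown additive constant (the negative entropy of P theta star) shifts the curve vertically but cannot move the argmin; the values are made up for illustration.

```python
import numpy as np
rng = np.random.default_rng(0)

theta_star = 2.0                                  # unknown in practice
x = rng.normal(theta_star, 1.0, size=100_000)     # the sample

def log_density(x, theta):
    return -0.5 * (x - theta) ** 2 - 0.5 * np.log(2 * np.pi)

def kl_hat_up_to_constant(theta, x):
    # Estimated KL(P_theta_star, P_theta), minus an unknown constant.
    return -np.mean(log_density(x, theta))

grid = np.linspace(0.0, 4.0, 401)
values = [kl_hat_up_to_constant(t, x) for t in grid]
print(grid[int(np.argmin(values))])   # ~2.0: the vertical shift never moves the argmin
```

This minimizer is exactly the likelihood idea that the lecture says will come out of the formula next time.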
MIT_18650_Statistics_for_Applications_Fall_2016
3_Parametric_Inference.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: It doesn't want to run Flash Player, so I had to run the slides in Chrome. All right, so let's move on to our second chapter. And hopefully, in this chapter, you will feel a little better if you felt like it was going a bit fast in the first chapter. We went fast, especially in terms of confidence intervals-- some of you came and asked me: what do you mean, this is a confidence interval? What does it mean that the parameter is in there with probability 95%, et cetera? I went really fast because I didn't want to give you a first week of doing probability only, without understanding what the statistical context for it was. So hopefully, for all these things we've done in terms of probability, you actually know why we've been doing them. And so we're basically going to go back to what we were doing-- maybe start with some statistical setup. The goal of this lecture is really to go back, again, to what we've seen, from a purely statistical perspective. All right? So the first thing we're going to do is explain why we do statistical modeling. In practice, you have data-- you observe a bunch of points. Here, I gave you some numbers, for example. So here's a partial data set with the number of siblings, including self, that was collected from college students a few years back. I was teaching a class like yours, and I asked students to fill out a Google form and tell me a bunch of things. One of the questions was: including yourself, how many siblings do you have? And so they gave me this list of numbers. And there are many ways I can think of this list of numbers, right? I could think of it as just a discrete distribution on the set of numbers starting at 1-- I know there's not going to be an answer which is less than 1, unless someone doesn't understand the question. All the answers I should get are positive integers: 1, 2, 3, et cetera. And there probably is an upper bound, but I don't know it off the top of my head. Maybe I should say 100; maybe I should say 15. It depends, right? And I think the largest number I got was 6. So here you can see pretty standard families-- lots of 1s, 2s, and 3s. What statistical modeling does is try to compress this information, which I could otherwise only describe in a very naive way. So let's start with the basic, usual statistical setup. I will start many of the boards with something that looks like: X1, ..., Xn, random variables. And what I'm going to assume, as we said, is typically that those guys are IID-- they all share the same distribution. The fact that they're IID is what lets me do statistics. Statistics means looking at global averages, so that I can get a sense of what the global behavior is for the population. If I start assuming that those things are not identically distributed-- that they all live on their own, that my sequence of numbers is your number of siblings, the shoe size of this person, the depth of the Charles River-- and I start measuring a bunch of stuff, there's nothing I can actually put together.
I need to have something that's cohesive. And so here, I collected some data that was cohesive. And the first goal is to say what the distribution I actually have here is. I could be very general. I could just say it's some distribution P-- and let's say those are random variables, not random vectors; I could collect entire vectors about students, but let's say those are just random variables. And now I can start making assumptions on this distribution P. What can I say about a distribution? Well, if those numbers are continuous, for example, I could assume they have a density-- a probability density function. That's already an assumption. Maybe I could further assume that their probability density function is smooth. That's another assumption. Maybe I could assume it's piecewise constant. That's even better, right? And those things make my life simpler and simpler, because what I do by making these successive assumptions is reduce the degrees of freedom of the space in which I am searching for the distribution. And what we want is something small enough that we can have some averaging going on, but also big enough that it's actually expressive-- that it has a chance of containing a distribution that makes sense for us. So let's start with the simplest possible example, which is when the Xi's belong to {0, 1}. Here, we don't have a choice: the distribution of those guys has to be Bernoulli. And since they are IID, they all share the same p. So that's definitely the simplest possible thing I could think of: they are just Bernoulli p. And all I would have to figure out in this case is p. This is the simplest case, and, unsurprisingly, it has the simplest answer. We will come back to this example when we study maximum likelihood estimators, or estimators by the method of moments. But at the end of the day, the thing we will always get is the naive estimator you would come up with: the proportion of 1s. And this will be, in pretty much all respects, the best estimator you can think of. Then we're going to try to assess its performance, and we saw how to do that in the first chapter as well. So this problem is, somehow, completely understood. We'll come back to it, but nothing fancy is going to happen. But now, I could have some more complicated things. For example, in the example of the students, my Xi's belong to the integers 1, 2, 3, et cetera-- which is also denoted by N, maybe without the 0, right? The positive integers. Or I could inject some prior knowledge about how much time humans have to build families. But maybe some people thought of their college mates as their brothers and sisters, and one student would put 465 siblings, because we're all good friends. Or maybe they think all their Facebook contacts are their siblings. You never know what's going to happen, so maybe you want to account for this-- but maybe you know that people are reasonable, and they will give you something like this. Now, intuitively, maybe you would say: well, why bother doing this if you're not really sure about the cap of 20?
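Going back to the Bernoulli case for a second, here is a minimal sketch, not from the lecture, of the "proportion of 1s" estimator just described; the true p is set only so we can simulate:

```python
import numpy as np
rng = np.random.default_rng(0)

p_true = 0.7                               # unknown in practice; set here to simulate
x = rng.binomial(1, p_true, size=1000)     # n IID Bernoulli(p) observations
p_hat = x.mean()                           # the proportion of 1s
print(p_hat)                               # ~0.7
```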
But I think that probably all of you intuitively guessed that this is a good idea-- putting in this kind of assumption rather than allowing for any number in the first place-- because this will eventually be injected into the precision of our estimators. If I allow anything, it's going to be more complicated for me to get an accurate estimator. If I know that the numbers are either 1 or 2, then I'm going to be slightly more accurate: if somebody puts a 5, I know I can remove it, and it's not going to corrupt my estimator. All right, so now, let's say we agree that we have numbers, and here I put seven possible values. I just said: let's assume the numbers I'm going to get are 1 all the way to this value that I denote by "larger than or equal to 7," which is a placeholder for any number that is at least 7-- because maybe I don't want to distinguish between people who have 9 or 25 siblings. OK, so now, this is a distribution on seven possible values-- a discrete distribution. And you know from your probability class that the way you describe such a distribution is with the probability mass function, or PMF. That's how we describe a discrete distribution. And the PMF is just a list of numbers: here you write the possible values your random variable can take, and here you write the probability that it takes each value-- the possible values being 1, 2, 3, all the way to "larger than or equal to 7." And then I'm trying to estimate those numbers. If I give you those numbers-- at least up to this compression of all values larger than or equal to 7-- you have the full description of your distribution. And that is the ultimate goal of statistics: to say what distribution your data came from, because that's basically the best you're going to be able to do. Now, admittedly, if I started looking at the fraction of 1s, the fraction of 2s, the fraction of 3s, et cetera, I would eventually get those numbers-- just like looking at the fraction of 1s gave me a good estimate for p in the Bernoulli case, it would do the same in this case. It's a pretty intuitive idea; it's just the law of large numbers. Everybody agrees with that? If I look at the proportion of 1s, the proportion of 2s, the proportion of 3s, that should give me something that gets closer and closer to what I want as my sample size increases. The problem is when my sample size is not huge. Here I have seven numbers to estimate, and if I have 20 observations, the ratio is not really in my favor-- 20 observations to estimate seven parameters. Some of them are going to be pretty off, typically the ones for the large values. If you have only 20 students, look at the list of numbers-- I don't know how many numbers I have, but it's probably close to 20, maybe 15 or something. And if you look at this list, nobody actually has four or more siblings, right? There's no such person. So that means that, from this data set, my estimates-- those numbers I denote by, say, p1, p2, p3, et cetera-- the estimate p4 hat would be equal to what? 0, right? And p5 hat would be 0, p6 hat would be 0, and p-larger-than-or-equal-to-7 hat would be 0. That would be my estimate from this data set.
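A minimal sketch, not from the lecture, of exactly this failure mode; the sample below is made up to resemble the siblings data (n = 15, values 1 through 3 only):

```python
import numpy as np

# A made-up sample of "number of siblings" answers (illustrative only).
x = np.array([2, 3, 2, 1, 2, 2, 1, 1, 3, 2, 1, 2, 3, 1, 2])

# Empirical PMF on the seven bins 1, 2, ..., 6, and ">= 7".
counts = np.array([(x == k).sum() for k in range(1, 7)] + [(x >= 7).sum()])
p_hat = counts / len(x)
print(p_hat)  # the bins for 4, 5, 6 and >= 7 all come out as exactly 0
```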
So maybe I want to pull some information from the people who have fewer siblings, to try to make a guess that is slightly better for the larger values. It's pretty clear that, on average, there is more than 0-- the proportion of households that have four children or more is definitely more than 0, right? So my data set is not representative there, and what I'm going to try to do is find a model that uses the data I have for the smaller values, which I can observe, and pushes it up to the other ones. What we can do is reduce those parameters to something that's well understood-- this is part of the modeling I talked about in the first place. Now, how do you succinctly describe a count of something? Well, one thing you do is the Poisson distribution. Why Poisson? There are many reasons-- again, that's part of statistical modeling. But once you know that counts of something can be modeled by a Poisson, why not try a Poisson? You could just fit a Poisson. And the Poisson looks like this-- I guess you've all seen it. If X follows a Poisson distribution with parameter lambda, then the probability that X is equal to little x is lambda to the x, over x factorial, times e to the minus lambda. If you did the sheet I gave you on the first day, you can check those numbers. This is, of course, for x equals 0, 1, et cetera-- x is in the natural integers. And if you sum this from x equals 0 to infinity, you get e to the lambda, so the two cancel, and the sum is equal to 1-- which confirms this is indeed a PMF. But what's key about this PMF is that it never takes the value 0: it is always strictly positive. So whatever value of lambda I find from this data will give me something that's certainly more interesting than just putting the value 0. But more importantly, rather than having to estimate seven parameters-- and, as a consequence, having to declare four of them equal to 0-- I have only one parameter to estimate, which is lambda. The catch is that lambda may not be something as simple to get as computing the average. In this case, it will be. But in many instances, it's not clear that, for the parametrization I chose, I'll be able to estimate the parameter just by computing the average of the numbers I get. Here it will be the case; but when it's not-- remember the example of the exponential we did in the last lecture-- we can use the delta method and things like that. All right, so here's modeling 101. The purpose of modeling is to restrict the space of possible distributions to a subspace that's still plausible but much simpler to estimate in. We went from all distributions described by seven parameters, which is a large space-- that's a lot of things-- to something which is just one positive number. Any question about the purpose of doing this? OK, so we're going to have to do a little bit of formalism now. This is a statistics class-- I'm not going to want to talk about the Poisson model specifically every single time. I'm going to want to talk about generic models, and then you're going to be able to plug in your favorite word-- Poisson, binomial, exponential, uniform-- all these words that you've seen, you'll be able to plug in there.
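Before the formalism, here is a minimal sketch, not from the lecture, of this one-parameter fit on the same made-up sample. As in the lecture, it glosses over the fact that the Poisson puts mass at 0 while siblings counts start at 1; the point is only that every count now gets positive probability. (That lambda hat equals the sample average is stated here on trust; the lecture proves it later via maximum likelihood.)

```python
import numpy as np
from math import exp, factorial

x = np.array([2, 3, 2, 1, 2, 2, 1, 1, 3, 2, 1, 2, 3, 1, 2])
lam_hat = x.mean()   # for the Poisson, the average turns out to estimate lambda

def poisson_pmf(k, lam):
    """P(X = k) = lam^k * e^(-lam) / k! for X ~ Poisson(lam)."""
    return lam ** k * exp(-lam) / factorial(k)

# Unlike the empirical PMF, the fitted Poisson puts positive
# (if small) probability on every count, including 4, 5, 6, ...
print([round(poisson_pmf(k, lam_hat), 4) for k in range(8)])
```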
But we're just going to have some generic notation and some generic terminology for a statistical model. So here is the formal definition, and I'm going to go through it with you. The definition is that of a statistical model-- sorry, a statistical experiment, I should say. A statistical experiment is just a pair: E, which is a set, and a family of distributions P theta, where theta ranges in some set capital Theta. I hope you're up to date on your Greek letters: the small theta lives in the capital Theta. And I don't have the best handwriting, so if you can't read something, just ask me. Each of these P thetas is a probability distribution. So, for example, this could be a Poisson with parameter theta, or a Bernoulli with parameter theta, or an exponential with parameter-- I don't know-- 1 over theta squared, if you want. They're just indexed by theta, and for each theta, this completely describes the distribution. It could be more complicated: theta could be a pair (mu, sigma squared), and that could give you some N(mu, sigma squared). So, anything where, rather than giving you a full distribution, I can compress it into a parameter. But it could be worse. It could be this guy here: theta could be (p1, ..., p-larger-than-or-equal-to-7), and my distribution could just be the one with that PMF. That's another parameter. This one is seven-dimensional, this one is two-dimensional, and all the other guys are just one-dimensional. All of these are parameters. Is that clear? What's important here is that once you're given theta, you know exactly all the probabilities associated with this random variable: you know its distribution perfectly. So this is the definition. Is that clear? Is there a question about this definition? All right. So really, the key thing is the statistical model associated to a statistical experiment. Let's just see some examples-- that's probably better, because the formalism is never really clear on its own. Actually, that's the next slide. OK, so there are two things we need to assume. The purpose of a statistical model is that once I estimate the parameter, I know exactly what distribution the data has. Now, it could be that several parameters give me the same distribution-- in principle, that would still be fine, because I could estimate one or the other and still recover the underlying distribution of my data. The problem is that this creates really annoying theoretical problems: things don't work, the algorithms won't work, the guarantees won't work. And so what we typically assume is that the model is so-called well-specified-- sorry, that's not well-specified; I'm jumping ahead of myself. OK, well-specified means that the distribution of your data is actually one of those guys. So, some vocabulary: well-specified means that, for my observations X, there exists a theta in capital Theta such that X follows P sub theta-- and I should put the "distributed as" symbol there. So that means the distribution of your actual data is one of those guys. This is a bit strong of an assumption.
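For reference, here is a compact restatement, not written out in the lecture, of the definition on the board, together with the examples just mentioned:

```latex
% A statistical experiment: a sample space together with a family of
% candidate distributions indexed by a parameter.
\[
  \bigl( E,\ (\mathbf{P}_\theta)_{\theta \in \Theta} \bigr)
\]
% Examples from the lecture:
\[
  \bigl(\{0,1\},\ (\mathrm{Ber}(p))_{p \in (0,1)}\bigr), \qquad
  \bigl(\mathbb{N},\ (\mathrm{Poiss}(\lambda))_{\lambda > 0}\bigr), \qquad
  \bigl(\mathbb{R},\ (\mathcal{N}(\mu, \sigma^2))_{(\mu,\sigma^2) \in \mathbb{R} \times (0,\infty)}\bigr)
\]
```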
It's strong in the sense that-- I don't know if you've heard this sentence; I can tell you who it's usually attributed to (George Box), though that probably means the person did not actually come up with it-- it is said that all models are wrong, but some of them are useful. "All models are wrong" means that maybe it's not true that this Poisson distribution I assume for the number of siblings of college students is perfectly correct. Maybe there's a spike at three. Maybe there's a spike at one, because maybe those are slightly more educated families and they have fewer children. Maybe this is not exactly perfect-- but it's probably good enough for our purposes. And when we make this assumption, we are assuming the data really comes from a Poisson model. There is a lot of research on misspecified models, which tells you how well you're doing relative to the model in the family that's closest to the actual distribution. So that's pretty much it. Yeah? AUDIENCE: [INAUDIBLE]. PHILIPPE RIGOLLET: So X-- that's always the way I denote one of the generic observations. My observations are X1, ..., Xn, and they're IID with distribution P-- always. So X is just one of those guys. I don't want to write X5 or X4: they're IID, so they all have the same distribution. They all have the same P theta. So I can pick any one of them; I just remove the index so we're clear. When I write X, just think of X1-- they're IID, so I can pick whichever I want, and it would be weird to write X1. Is that clear? OK. So this particular theta is called the true parameter. Sometimes, since we're going to want to use the variable theta, we might denote the true one by theta star-- as opposed to theta hat, which is always our estimator. But I'll keep it as theta for now. And so the aim of the statistical experiment is to estimate theta, so that once I plug theta into the form of my distribution-- for example, I could plug in theta here; theta here was actually lambda-- I would know the probability that my random variable takes any value, just by putting lambda hat here and lambda hat there. So my goal is to estimate this guy so that I can compute those probabilities. But actually, we'll see-- for example, when we talk about regression-- that this parameter often has a meaning in itself. And so knowing the parameter, more so than just letting us compute probabilities, will actually tell us something about the process. For example, we're going to run linear regression, and there are going to be coefficients in the linear regression. The value of a coefficient tells me the sensitivity of the response I'm looking at to that particular input. So just knowing whether that number is large or small is going to be useful in itself-- there will be instances where that's what matters. Sometimes we're going to want to know if this parameter is larger or smaller than something, or equal to something, or not equal to something. And those things are also important-- for example, if theta actually measures the true-- right?
So theta is the true unknown parameter-- say, the true efficacy of a drug. Let's say I want to know what the true efficacy of a drug is. Maybe what I want to know is whether theta is larger than 2. Or maybe theta is the average number of siblings-- is this true number larger than 2 or not? Maybe, from a sociological perspective, I'm interested in knowing whether college students come from households with more than two children. Those are the kinds of questions I may ask myself. Or I may want to know whether theta is equal to 1/2 or not-- for drug efficacy, or, completely standard, for elections: is the proportion of the population that is going to vote for this particular candidate equal to 0.5, or is it different from 0.5? And I can think of other examples. When we get to regression, I'm going to want to test whether a coefficient is 0 or not, because if it's 0, it means the variable in front of it drops out. So those are the things we're testing. Having this very specific yes/no answer is going to give me a huge amount of understanding of what's going on in the phenomenon I observe. And since the questions are so precise, I'm going to be much better at answering them than at giving you an estimate for theta with some confidence around it. It's sort of the same principle as before: what you're trying to do as a statistician is inject as much knowledge about the question and about the problem as you can, so that the data has to do a minimal job-- and hence you need less data. So from now on, we will always assume-- and this is because this is an intro stats class-- that capital Theta, the set of parameters, is a subset of R to the d. That means that theta is a vector with a finite number of coordinates. Why do I say this? Well, this is called a parametric model-- sometimes parametric statistics. Actually, we don't really talk about parametric statistics, but we talk a lot about non-parametric statistics, or non-parametric models. Can somebody think of a model which is non-parametric? For example, in the siblings example, if I did not cap the number of siblings at 7 but let the list go to infinity, I would have an infinite number of parameters to estimate. Very likely, the last ones would be 0-- but still, an infinite number of parameters. So that would not be a parametric model. But there are other classes that are infinite and cannot be represented by vectors at all-- for example, functions, right? Say I tell you my model P sub f is the set of probability distributions that have density f, where all I know about a density is that it's non-negative and integrates to 1. Well, f is not something you can describe with a finite number of values. The set of all possible functions is a huge set-- certainly not representable by 10 numbers. And non-parametric estimation is typically when you parametrize your model by a large class of functions.
And so, for example, histograms are the prime tool of non-parametric estimation: when you fit a histogram to data, you're trying to estimate the density of your data, but you're not trying to represent it by a finite number of parameters. I mean, effectively, you do have to represent it somehow-- you truncate somewhere and say those things are not going to matter. But the key point is that non-parametric means a potentially infinite number of parameters, whereas we're only going to talk about finite ones. And actually, "finite," in the overwhelming majority of cases, is going to be 1: capital Theta is going to be a subset of R^1. We're going to be interested in estimating one parameter, just like the parameter of a Poisson, or of an exponential, or of a Bernoulli. But, for example, we will also be interested in estimating mu and sigma squared for the normal. So here are some statistical models, and I'm going to go through them with you. Say I'm interested-- I'm still [INAUDIBLE]-- I'm interested in understanding the proportion of people who kiss by bending their head to the right. For that, I collected n observations, and I'm interested in making some inference using a statistical model. My question to you is: what is the statistical model? Well, if you want to write the statistical model, you're going to have to write this E-- oh, sorry, I never told you what E was. OK, well, let's just go to the examples, and then you'll know what E is. You're going to have to write for me an E and a P theta, OK? So let's start with the Bernoulli trials. This E here is called the sample space, and in normal people's words, it just means the space-- the set-- in which X lives; and, back to your question, X is just a generic observation. Hopefully, this is the smallest set you can think of. So, for Bernoulli trials, I'm going to observe a sequence of 0s and 1s. My sample space, as written on the board, is going to be {0, 1}. And then the family of probability distributions is just going to be the Bernoulli distributions indexed by p. Rather than writing P sub p, I'm going to write Bernoulli p, because it's clear what I mean when I write that. Is everybody happy? Actually, I need to tell you something more. This is a family of distributions, so I need to tell you which p's are allowed. And maybe I don't want a p that takes the value 0 or 1, right? It doesn't make sense-- I would probably not look at this problem if I anticipated that everybody kisses to the right, or that everybody kisses to the left. So I am going to assume that p is in (0, 1), excluding 0 and 1. And that's the statistical model for Bernoulli trials. OK, now the next one-- exponential. So, when I have exponential distributions, what is the support of the exponential distribution? What values can it take? 0 to infinity, right? So my sample space is the set of values my random variables can take-- well, actually, I can remove the 0 again-- (0, plus infinity). And then the family of distributions is the exponentials with parameter lambda. And again, maybe you've seen me switching from p, to lambda, to theta, to mu, to sigma squared. Honestly, you can do whatever you want; it's just customary to use this particular group of letters.
And so the parameters of an exponential are just the positive numbers. And that's my exponential model. What is the third one? Can somebody tell me? Poisson, OK? So, Poisson-- is a Poisson random variable discrete or continuous? Go back to your probability. All right, so the answer being the opposite of continuous-- good job. So what values can a Poisson take? All the natural integers, right-- 0, 1, 2, 3, all the way to infinity. We don't have any control over this. So I'm going to write this as N without 0-- I think in the slides it's N-star, maybe. Actually, no-- it can take the value 0. I'm sorry; a Poisson takes the value 0 quite a lot. In many instances, that's actually the mode. So it's N, and I'm going to write the family as Poisson with parameter lambda, where lambda can take any positive value. And that's where you can see the contrast with the model we had for the siblings. So let me just squeeze in the siblings model here-- that was the "bad" model I had in the first place. Let's say we just kept it at 7: forget about "larger than or equal to 7"; we just assume the values go up to 7. What was our sample space? We said it's {1, 2, ..., 7}, right? Those were the possible values this thing could take. And then, what is my parameter space? It's going to be a nightmare to write, but I'm going to write it. I'm going to write it as something like: the probability that X is equal to k is equal to p sub k-- and that's for all k, for k equals 1 to 7. And here the index is the set of parameters (p1, ..., p7). And I know a little more about those guys: I know they're going to be non-negative-- each p_j non-negative-- and I know that they sum to 1. So maybe, writing this out, you start to see why we like the Poisson and exponential shorthand notation: I don't have to write the PMF of a Poisson every time. The Poisson is really just this, but I call it Poisson so I don't have to rewrite it all the time. Whereas here, I did not use a particular named form, so I just have to write the whole thing: the set of parameters is the set of vectors (p1, ..., p7) with non-negative entries that sum to 1. So that's my parameter space-- that whole thing is my capital Theta. OK, and finally, we're going to end with the star of them all: the normal distribution. And for the normal distribution, you still have some flexibility in the choices, because the normal distribution is naturally parametrized by two parameters: mean and variance. So what values can a Gaussian random variable take? The entire real line, right? And the family of distributions is N(mu, sigma squared), where mu is going to be-- sorry, mu is going to be in R, and sigma squared is going to be positive. So again, that's the way you're supposed to write it. If you really want to identify what capital Theta is: formally, it's the set of pairs (mu, sigma squared) in R times (0, infinity). That's just to be formal, but the looser version does the job just fine. You don't have to be super formal.
OK, that's not three-- that's like five. Actually, I just want to write one more. Let's call it 5-bis. And 5-bis is just the Gaussian with known variance. This arises a lot in labs, when you have measurement error: when you receive your measurement device, the thing has been tested by the manufacturer so much that the standard deviation actually comes written on the side of the box. It says the standard deviation of your measurements is going to be 0.23. And the reason they do this is that they can brag about accuracy-- that's how they sell you this particular device. And so you actually know exactly what sigma squared is. Once you get your data in the lab, you only have to estimate mu, because sigma comes on the label. So now, what is your statistical model? Well, the numbers I collect still live in R. But the family I have is N(mu, sigma squared), where the parameter space is not "mu in R and sigma positive"-- it's just "mu in R." And to be a little more emphatic about this: that is enough to describe it, because sigma is the sigma specified by the manufacturer. You can write "sigma squared equals sigma squared sub manufacturer" if you want. Or maybe you don't want to write that subscript, and you just say: when I write sigma squared, what I mean is the sigma squared from the manufacturer. Yeah? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah-- for a particular measuring device? You know, you're in a lab, and you have some measuring device-- I don't know, something that measures the tensile strength of something. It's going to measure something, and it will naturally make errors. But it's been tested so much by the manufacturer, and calibrated by them-- they know it's not going to be perfect, but they know exactly what error it makes, because they've tried it on things for which they knew exactly what the tensile strength was. OK? Yeah. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: This? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Oh, that's pointing to-- 5 prime? OK. And we can come up with other examples, right? So here's another one-- the names don't really matter. I call it the siblings model, but you won't find the siblings model in a textbook, so I wouldn't worry too much. Let's call this one 6: you have-- I don't know-- something "truncated." That's a name I just came up with, and it's actually not exactly the right word for what I want. Let's say I observe X, which is the indicator that Y is larger than, say, 5, where Y follows some exponential with parameter lambda. This is what I get to observe: I only observe whether the waiting time was more than five minutes, because I see somebody coming out of the Kendall station really upset, and all I get to record is that they've been waiting for more than five minutes. That happens a lot. These are called censored data-- I should probably not have said truncated; this should be censored. You see a lot of censored data when you ask people how much they make: they say, well, more than five figures, and that's all they want to tell you. And you see a lot of censored data in survival analysis, where you're trying to understand how long patients are going to live after some surgery.
And maybe you're not going to follow people forever-- you're not going to be in touch with their family every day and ask them, is the guy still alive? What you can do is ask people to come in, say, five years after your study, and some will tell you, well, you know, the person is deceased. And all you will be able to record is that the person died within those five years-- you only know it happened at some point before that. So this is censored data. It happens all the time, just because you don't have the ability to do better than that. So what is my statistical experiment here? I should probably write it like this, because I just told you that my observations are going to be X, but there is some unknown Y behind it. I will never get to see this Y; I only get to see the X. What is my statistical experiment? Please help me. Is my sample space the real line? Sorry-- who does not know what this notation means? OK, so this is called an indicator. If I write it well, that would be a 1 with a double bar. You can also write I if you prefer, if you don't feel like writing a 1 with a double bar. And the definition is: the indicator of A is equal to 1 if A is true, and 0 if A is false. It's called an indicator function, and it's very useful for turning anything into a 0/1. So that means that if Y is larger than 5, this thing is 1; and if Y is not larger than 5, this thing is 0. OK. So now, what is my sample space? {0, 1}. Well, whatever values I told you this thing takes-- if I had ended up telling you it takes the values 6 or 7, then that would have been your sample space. OK, so it takes values in {0, 1}. And then, what is the probability distribution here? What should you write without even thinking? Yeah-- let's say there are two seconds before the end of the exam: you're going to write Bernoulli, and then you're going to start checking whether I'm going to give you extra time, OK? You write Bernoulli without thinking, because the variable takes values in {0, 1}. But you still have to tell me what the possible parameters are. So I could write it with a p, because I don't know it, and let p range-- OK, I could write it like that, and that would be perfectly valid. But actually, we can say more. It's not just any p: this p is the probability that an exponential with parameter lambda is larger than 5. And maybe I want to have lambda as the parameter. So what I need to compute is: what is the probability that Y is larger than 5, when Y is this exponential lambda? Which means I need to compute the integral from 5 to infinity of-- what is it? 1 over lambda...? How did I define it in this class? Did I change it-- what? AUDIENCE: [INAUDIBLE]. PHILIPPE RIGOLLET: Yeah, right, right, right. Lambda e to the minus lambda x, dx. So that's what I need to compute. What is the value of this integral? Can you compute it? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: OK? And you can cancel things here: when I integrate this guy, the lambdas cancel. I get 0 from the infinity end, and I get the value at 5 from this end.
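For reference, here is the integral being computed on the board, written out (a worked version of the step above, not spelled out in the lecture):

```latex
\[
  p \;=\; \mathbf{P}(Y > 5)
    \;=\; \int_5^{\infty} \lambda e^{-\lambda x}\, dx
    \;=\; \Bigl[-e^{-\lambda x}\Bigr]_5^{\infty}
    \;=\; 0 - \bigl(-e^{-5\lambda}\bigr)
    \;=\; e^{-5\lambda},
\]
% so the model for the censored observation X = 1{Y > 5} is
\[
  \bigl(\{0,1\},\ (\mathrm{Ber}(e^{-5\lambda}))_{\lambda > 0}\bigr).
\]
```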
And I know it's going to be a positive number, so I'm not going to bother too much with the signs, because I know what it should be. So I get e to the minus 5 lambda. And that means I can write the model like that, and parametrize it by lambda positive. So what I did here is change the parametrization from p to lambda. Why? Well, because maybe I'm actually interested in reporting lambda to the MBTA, for example. Maybe I'm trying to estimate lambda-- or 1 over lambda-- because lambda is the intensity of arrival of my Poisson process, right? I have a Poisson process; that's how my trains are coming in. So I'm interested in lambda, and I will parametrize things by lambda-- the thing I report is lambda. You can play with this: I could parametrize by 1 over lambda, and put 1 over lambda in there, if I wanted. The context of your problem will tell you exactly how to parametrize it. OK, so what else did I want to tell you? Let's do a final one. By the way, are you guys OK with Poisson, exponential, Bernoulli-- I don't know-- binomial, normal, all these things? I'm not going to go back over them, but I'm going to use them heavily. So just spend five minutes on Wikipedia if you forgot what those things are. You must have seen them in your probability class, so these should not be crazy names. And again, I'm not expecting you to memorize-- I don't remember what the density of an exponential is, so it would be pretty unfair of me to ask you to remember it. Even for the Gaussian, I don't expect you to remember the density. But I do want you to remember that if I add 5 to a Gaussian, I get a Gaussian with mean mu plus 5-- and similarly for what happens if I multiply it by something. You need to know how to operate with these things; knowing complicated densities by heart is definitely not part of the game. OK, so let's do a final one-- I don't know what number I'm at now. Uniform-- everybody knows what uniform is? So I'm going to have X-- my observations are going to be uniform on the interval [0, theta]. If I want to define a uniform distribution for a random variable, I have to tell you which interval, or which set, I want it to be uniform on. And here I'm telling you it's the interval [0, theta]. So what is going to be my sample space? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: I'm sorry? 0 to theta. And then, what is my family of distributions? Well, I can write it like this, right: uniform on [0, theta], where theta, let's say, is positive. Can somebody tell me what's wrong with what I wrote? This makes no sense-- tell me why. Yeah? Yes-- this set depends on theta. And why is that a problem? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: There is no theta! Right now, there's a whole family of thetas. Which one did you pick here? This family is just indexed by theta-- not being Greek for a second, I could very well have written it with a t rather than a theta. And then, what the hell is the theta in the sample space? There's no such thing as theta; we don't know what the parameter is. This parameter ranges over the whole family. So I am not allowed to use this theta-- for the simple reason that there is no single parameter to put on the left-hand side. There should not be, right?
You just said, well, there's a problem because the parameter appears on the left-hand side-- but there isn't even "a parameter." I'm describing the family of possible parameters; there is no single one that you can actually plug in there. So this should really be a 1-- the sample space should be [0, 1]-- and I'm going to go back to writing the parameter as theta, because that's pretty standard. I cannot just pick one value and put it in there, because, before I run my experiment, I could potentially get numbers all the way up to 1-- I don't know what theta is going to be ahead of time. Now, if somebody promised me that theta was going to be less than 0.5, that would be-- sorry, why do I put a 1 here? Because I'm letting theta range between 0 and 1. If somebody promises me that theta is going to be less than 1, then you would expect the sample space [0, 1]. All right? Is that clear? OK, so now you know how to answer the question: what is the statistical model? And again, within the scope of this class, you will not be asked to come up with a model from scratch-- I will just tell you: a Poisson would probably be a good idea here. And then you would just have to trust me that it is indeed a good idea. All right. So, back to what I started talking about 20 minutes ago-- I was definitely getting ahead of myself-- when I was talking about well-specified. Remember, well-specified says that the true distribution is one of the distributions in the parametric family. The true distribution of my siblings data is actually a Poisson with some parameter. But when I started saying that, I said, well, it could be that several parameters give me the same distribution. It could be the case that Poisson 5 and Poisson 17 are exactly the same distribution once I put those numbers into the formula that I erased, OK? It could be that two different numbers give exactly the same probabilities. And in that case, we say that the model-- I mean, the parameter-- is not identifiable. I cannot identify the parameter even if you gave me an infinite amount of data, which would mean I could estimate the PMF exactly. I might still not be able to go back, because there would be several candidates, and I would not be able to tell you which one it was in the first place. So what we want is that the map theta maps to P theta is injective. And that may sound fancy, but what I really mean is: if theta is different from theta prime, then P theta is different from P theta prime. Or, if you prefer the contrapositive: if P theta is the same distribution as P theta prime, then theta must be equal to theta prime. Those two statements are logically equivalent. In that case, we say that the parameter is identifiable-- or identified; it doesn't really matter-- in this model. And this is something we're going to want. In all the examples I gave you, the parameters are completely identified. I mean, if those things are in a probability textbook, it means they were probably thought through, right? So when I say exponential lambda, I'm really talking about one specific distribution-- there's no other lambda that's going to give you exactly the same distribution. So that was the case there.
And you can check that, but it's a little annoying, so I would probably not do it. Rather than doing that, let me give you an example where it would not be the case. Here's the example: I take X-- so I'm back to using the indicator function, but now for a Gaussian. What I observe is X, the indicator that Y is-- what did we say? Non-negative. So this is a Bernoulli random variable with some parameter p. But p is now going to depend on-- sorry, here Y is N(mu, sigma squared). So this p, the probability that the Gaussian is non-negative, will depend on mu and sigma. Because if I draw the 0, and I draw my Gaussian centered around mu, then the probability of this Bernoulli being 1 is the area under the curve to the right of 0. If mu is very large, this probability becomes very large; if mu is very small, it becomes very small. And if sigma changes, that also affects it. Is that clear for everyone? But we can actually compute this. The parameter p, as a function of mu and sigma, is simply the probability that Y is non-negative, which is the probability that Y minus mu, divided by sigma, is larger than minus mu divided by sigma. Now, when you studied probability, was that an operation you were used to making-- removing the mean and dividing by the standard deviation? What is the effect of doing that to a Gaussian random variable? Yeah-- you standardize it. You make it a standard Gaussian: you remove the mean, so the mean becomes 0, and you divide by the standard deviation, so the variance becomes 1. When you take a Gaussian, remove the mean, and divide by the standard deviation, it becomes a standard Gaussian. So this thing has an N(0, 1) distribution, which is the one whose quantiles you can read at the end of the book. And that's exactly what we did. So now I have the probability that a standard Gaussian exceeds minus mu over sigma, which I can write in terms of the cumulative distribution function, capital Phi-- like we did in the first lecture. So what is this probability in terms of Phi? [INAUDIBLE]? AUDIENCE: [INAUDIBLE]. PHILIPPE RIGOLLET: Well, that's what your name tag says. 1 minus-- AUDIENCE: [INAUDIBLE]. PHILIPPE RIGOLLET: 1 minus Phi of minus mu over sigma-- do you think I defined Phi for fun? Because this is 1 minus the probability that the standard Gaussian is less than minus mu over sigma, and that is exactly the definition of the cumulative distribution function. So, in particular, this thing depends only on mu over sigma. Agreed? In particular, if I replaced mu and sigma by 2 mu and 2 sigma, p would remain unchanged. If I took 12 mu and 12 sigma, it would still remain unchanged. So p does not change if I scale mu and sigma by the same factor. And so there's no way, just by observing X-- even infinitely many times, so that I know exactly what p is-- that I can ever get mu and sigma separately. All I'm ever going to be able to get is mu over sigma. So here, we say that the parameter (mu, sigma)-- or, actually, each of them individually-- is not identifiable. But the parameter mu over sigma is identifiable.
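A minimal sketch, not from the lecture, of this non-identifiability: scaling mu and sigma by the same factor leaves p unchanged. It uses only the standard error function, so no extra libraries are needed; the values are made up for illustration.

```python
from math import erf, sqrt

def gauss_cdf(t):
    """Standard normal CDF, Phi(t), via the error function."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def p_positive(mu, sigma):
    """P(Y >= 0) for Y ~ N(mu, sigma^2): equals 1 - Phi(-mu/sigma)."""
    return 1.0 - gauss_cdf(-mu / sigma)

print(p_positive(1.0, 2.0))     # ~0.6915
print(p_positive(2.0, 4.0))     # same value
print(p_positive(12.0, 24.0))   # same value again: only mu/sigma is identifiable
```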
So if I wanted to write a statistical model in which the parameter is identifiable-- I would write 0, 1, Bernoulli, and then I would write 1 minus phi of minus mu over sigma. And then I would take two parameters, which are mu in R and sigma squared positive. So let's write sigma positive. Right? No, this is not identifiable. I cannot write those two guys as being two different things. Instead, what I want to write is 0, 1, Bernoulli 1 minus-- and now my parameter-- I forgot this-- my parameter is mu over sigma. Can somebody tell me where mu over sigma lives? What values can this thing take? Any real value, right? OK, so now I've done this definitely out of convenience, right? Because that was the only thing I was able to identify-- the ratio mu over sigma. But it's still something that has some meaning. It's the normalized mean. It really tells me what the mean is compared to the standard deviation. So in some models, in some real applications, this actually might have a good meaning. It's just telling me how big the mean is compared to the standard fluctuations of this model. But I won't be able to get more than that. Agreed? All right? So now that we've set a parametric model, let's try to see what our goals are going to be. OK? So now we have a sample and a statistical model. And we want to estimate the parameter theta, and I could say, well, you know what? I don't have time for this analysis. Collecting data is going to take me a while. So I'm just going to eyeball it-- and I'm going to say that mu over sigma is 4. And I'm just going to give it to you. And maybe you will tell me, yeah, it's not very good, right? So we need some measure of performance of a given parameter. We need to be able to evaluate if eyeballing the problem is worse than actually collecting a large amount of data. We need to know, even if I come up with an estimator that actually sort of uses the data, does it make an efficient use of the data? Would I actually need 10 times more observations to achieve the same accuracy? To be able to answer these questions, well, I need to define what accuracy means. And accuracy is something that sort of makes sense. It says, well, I want theta hat to be close to theta. And theta hat is a random variable. So I'm going to have to understand what it means for a random variable to be close to a deterministic number. And so, what is an estimator of the parameter, right? So I have an estimator, and I said it's a random variable. And the formal definition-- an estimator is a measurable function of the data. So when I write theta hat, and that will typically be my notation for an estimator, I should really write theta hat of x1, ..., xn. OK? That's what an estimator is. If you want to know what an estimator is, it is a measurable function of the data. And it's actually also known as a statistic. And you know, every time I have a dinner with normal people and I say I'm a statistician, they say, oh, yeah, I really like baseball. And they talk to me about batting averages. That's not what I do. But for them, that's what it is, and that's because, in a way, that's what a statistic is. A batting average is a statistic. OK, and so here are some examples. You can take the average xn bar. You can take the maximum of your observations. That's a statistic. You can take the first one. You can take the first one plus log of 1 plus the absolute value of the last one. You can do whatever you want-- that will be an estimator.
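To make that concrete, here is a small sketch-- mine, with an arbitrary Bernoulli sample standing in for data-- computing exactly these statistics:

```python
# Sketch: an estimator is just a measurable function of the data; each of
# these is a statistic, even the clearly bad ones.
import numpy as np

rng = np.random.default_rng(1)
x = rng.binomial(1, 0.35, size=100)        # some Bernoulli(p) data

xbar = x.mean()                            # the average
xmax = x.max()                             # the maximum observation
first = x[0]                               # just the first observation
weird = x[0] + np.log(1 + abs(x[-1]))      # first plus log(1 + |last|)

print(xbar, xmax, first, weird)
```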
Some of them are clearly going to be bad. But that's still a statistic, and you can do this. Now, when I say measurable-- so, you know, graduate students sometimes ask me, yeah, how do I know if this estimator is measurable or not? And usually, my answer is, well, if I give you data, can you compute it? And they say, yeah. I'm like, well, then it's measurable. That's a very good rule to check if something is actually measurable. When is this thing non-measurable? It's when it's implicitly defined. OK, and in particular, the things that give you problems are-- sup or inf. Anybody knows what a sup or an inf is? It's like a max or a min. But it's not always attained. OK, so if I look at the infimum of the function f of x for x-- sorry, let's say x in 1 to infinity-- and f of x is equal to 1 over x, right? Then the infimum is the smallest value it can take, except that it doesn't really take the value 0, right? Because 1 over x is going to 0, but it's never really getting there. So we just call the inf 0. But it's not a value that it ever takes. And these things might actually be complicated to compute. And so that's when you actually have problems, right? When the limit is not-- you're not really quite reaching the limit. You won't have this problem in general, but just so you know, an estimator is not really anything. It has to actually be measurable. OK, so the first thing we want to know, I mentioned it-- so an estimator is a statistic which does not depend on theta, of course. So if I give you the data, you have to be able to compute it. And that should not require knowing any unknown parameters. OK, so an estimator is said to be consistent when, as I collect more and more data, this thing is getting closer and closer to the true parameter. All right? And we said that eyeballing and saying that it's going to be 4 is not really something that's probably going to be consistent. But you can have things that are consistent but that are converging to theta at different speeds. OK? And we know also that this is a random variable. It converges to something. And there might be some different notions of convergence that kick in. And actually there are. And we say that it's weakly consistent if it converges in probability and strongly consistent if it converges almost surely. OK? And this is just vocabulary. It won't make a big difference. OK? So we will typically say it's consistent with either of the two. AUDIENCE: [INAUDIBLE]. PHILIPPE RIGOLLET: Well, so in parametric statistics, it's actually a little difficult to come up with. But in non-parametric ones, I could just say, if I had xi, yi, and I know that yi is f of xi plus noise epsilon i. And I know that f belongs to some class of functions, let's say a [INAUDIBLE] class of smooth functions-- it's massive. And now, I'm going to actually find the following estimator. I'm going to take the average. So I'm going to do least squares, right? So I just check. I'm trying to minimize the distance of each of my f of xi to my yi. And now, I want to find the smallest of them. So if I look at the infimum here, then the question is-- so that could be-- well, that's not really an estimator for f. But it's an estimator for the smallest possible value. And so for example, this is actually an estimator for the variance sigma squared. This might not be attained, and this might not be measurable if f is massive.
All right, so that's the infimum over some class f of x. OK? So those are all things that are defined implicitly. If it's an average, for example, it's completely measurable. OK? Any other question? OK, so we know that the first thing we might want to check-- and that's definitely something we want of an estimator-- is that it's consistent, because all consistency tells us is just that, as I collect more and more data, my estimator is going to get closer and closer to the parameter. There are other things we can look at. For each possible value of n-- now, right now, I have a finite number of observations-- 25. And I want to know something about my estimator. The first thing I want to check is maybe, on average-- right? So this is a random variable. Is this random variable on average going to be close to theta or not? And so the difference-- how far I am from theta-- is actually called the bias. So the bias of an estimator is the expectation of theta hat minus the value that I hope it gets, which is theta. If this thing is equal to 0, we say that theta hat is unbiased. And unbiased estimators are things that people are looking for in general. The problem is that there's lots of unbiased estimators. And so it might be misleading to look for unbiasedness when that's not really the only thing you should be looking for. OK, so what does it mean to be unbiased? Maybe for this particular round of data you collected, you're actually pretty far from the true parameter. But what it means is that, if I redid this experiment over, and over, and over again, and I averaged all the values of my estimators that I got, then this would actually be the right-- the true parameter. OK. That's what it means. If I were to repeat this experiment, on average, I would actually get the right thing. But you don't get to repeat the experiment. OK, just a remark about estimators. Look at this estimator-- xn bar. Right? Think of the kiss example. I'm looking at the average of my observations. And I want to know what the expectation of this thing is. OK? Now, by linearity of the expectation, this guy is this, right? But my data is identically distributed. So in particular, all the xi's have the same expectation, right? Everybody agrees with this. When it's identically distributed, they all get the same expectation. So what it means is that these guys here-- they're all equal to the expectation of x1. Right? So what it means is that I have the average of the same number. So this is actually the expectation of x1. OK? And it's true. In the kiss example, this was p. And this is p-- the probability of turning your head right. OK? So those two things are the same. In particular, that means that xn bar and just x1 have the same bias. So that should probably illustrate to you that bias is not something that's telling you the entire picture, right? I can take only one of my observations-- Bernoulli 0, 1. This thing will have the same bias as if I average 1,000 of them. But the bias is really telling you where I am on average. It's really not telling me what fluctuations I'm getting. And so if you want to start having fluctuations coming into the picture, we actually have to look at the risk, or the quadratic risk, of the estimator. And so the quadratic risk is defined as the expectation of the square distance between theta hat and theta. OK? So let's look at this. So the quadratic risk-- sometimes people call it the L2 risk of theta hat, of course.
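A quick Monte Carlo check of this point-- my own sketch, with p, n, and the number of repetitions chosen arbitrarily-- shows that xbar and x1 have the same (zero) bias but very different fluctuations:

```python
# Sketch: repeat the experiment many times; average the estimators to see the
# bias, look at their variance to see the fluctuations.
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 0.35, 50, 200_000

samples = rng.binomial(1, p, size=(reps, n))
xbar = samples.mean(axis=1)     # one xbar per repeated experiment
x1 = samples[:, 0]              # one x1 per repeated experiment

print(xbar.mean() - p, x1.mean() - p)   # both biases are ~ 0
print(xbar.var(), x1.var())             # but x1 fluctuates far more
```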
I'm sorry for maintaining such an ugly board. [INAUDIBLE] this stuff. OK, so I look at the square distance between theta hat and theta. This is the function of a random variable. So it's a random variable as well. And now I'm looking at the expectation of this guy. That's the definition. I claim that when this thing goes to 0, then my estimator is actually going to be consistent. Everybody agrees with this? So if it goes to zero as n goes to infinity-- and here, I don't need to tell you what kind of convergence I have, because this is just a number, right? It's an expectation. So it's a regular, usual calculus-style convergence. Then that implies that theta hat is actually weakly consistent. What did I use to tell you this? Yeah, this is the convergence in L2. This actually is strictly equivalent. This is by definition saying that theta hat converges in L2 to theta. And we know that convergence in L2 implies convergence in probability to theta. That was the picture. We're going up. And this is actually equivalent to consistency by definition-- weak consistency. OK, so this is actually telling you a little more, because these guys here-- they are both unbiased. Xn bar is unbiased. X1 is unbiased. But x1 is certainly not consistent, because the more data I collect, I'm not even doing anything with it. I'm just taking the first data point you're giving to me. So they're both unbiased. But this one is not consistent. And this one, we'll see, is actually consistent. Xn bar is consistent. And actually, we've seen that last time. And that's because of the? What guarantees the fact that xn bar is consistent? AUDIENCE: The law of large numbers. PHILIPPE RIGOLLET: The law of large numbers, right? Actually, it's strongly consistent if you use the strong law of large numbers. OK, so just in the last two minutes, I want to tell you a little bit about how this risk decomposes: the quadratic risk is equal to the bias squared plus the variance. So let's see what I mean by this. So I'm going to forget about the absolute values, since I have a square. I don't really need them. If theta hat was unbiased, this thing would be the expectation of theta hat. It might not be the case. So let me see how I can actually put the bias in there. Well, one way to do this is to see that this is equal to the expectation of theta hat, minus the expectation of theta hat, plus the expectation of theta hat, minus theta. OK? I just removed and added the same thing. So I didn't change anything. Now, this guy is my bias, right? So now let me expand the square. So what I get is the expectation of the square of theta hat minus its expectation. I should put some square brackets-- plus two times the cross-product. So the cross-product is the expectation of theta hat minus the expectation of theta hat, times the expectation of theta hat minus theta. And then I have the last square-- expectation of theta hat minus theta, squared. OK? So square, cross-product, square. Everybody is with me? Now this guy here-- if you pay attention, this thing is the expectation of some random variable. So it's a deterministic number. Theta is the true parameter. It's a deterministic number. So what I can do is pull this entire thing out of the expectation, like this, and compute the expectation only with respect to that part. But what is the expectation of this thing? It's zero, right? The expectation of theta hat minus the expectation of theta hat is 0. So this entire thing is equal to 0.
So now when I actually collect back my quadratic terms-- my two squared terms in this expansion-- what I get is that the expectation of theta hat minus theta squared is equal to the expectation of theta hat minus expectation of theta hat squared, plus the square of expectation of theta hat minus theta. Right? So those are just the first and the last term of the previous equality. Now, here I have the expectation of the square of the difference between a random variable and its expectation. This is otherwise known as the variance, right? So this is actually equal to the variance of theta hat. And well, this was the bias. We already said that's there. So this whole thing is the bias squared. OK? And hence the quadratic risk is the sum of the variance and the squared bias. Why squared bias? Well, because otherwise, you would be adding dollars and dollars squared. So you need to add dollars squared and dollars squared, so that this thing is actually homogeneous. So if x is in dollars, then the bias is in dollars, but the variance is in dollars squared. OK, and the square here forces you to put everything on the squared scale. All right, so what's nice is that, if the quadratic risk goes to 0, then, since I have the sum of two positive terms, both of them have to go to 0. That means that my variance is going to 0-- very little fluctuations. And my bias is also going to 0, which means that I'm actually going to be on target once I reduce my fluctuations, because it's one thing to reduce the fluctuations, but if I'm not on target, it's an issue, right? For example, the estimator that always says 4 has no variance. Every time I'm going to repeat the experiment, I'm going to get 4, 4, 4, 4-- the variance is 0. But the bias is bad. The bias is 4 minus theta. And if theta is far from 4, that's not doing very well. OK, so next week, we'll talk about what a good estimator is-- how estimators change if they have high variance or low variance, or high bias and low bias. And we'll talk about confidence intervals as well.
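Numerically, the decomposition is easy to check. Here is a sketch of mine, using an N(theta, 1) sample with theta = 2, comparing xbar with the silly constant estimator that always says 4:

```python
# Sketch: quadratic risk = variance + bias^2, checked by Monte Carlo.
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 2.0, 25, 200_000

samples = rng.normal(theta, 1.0, size=(reps, n))
for est in [samples.mean(axis=1), np.full(reps, 4.0)]:
    risk = ((est - theta) ** 2).mean()
    bias = est.mean() - theta
    var = est.var()
    print(risk, var + bias ** 2)   # the two columns agree up to Monte Carlo error
# the constant estimator has zero variance but bias 4 - theta = 2, so risk ~ 4
```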
MIT_18650_Statistics_for_Applications_Fall_2016
24_Generalized_Linear_Models_cont.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: So I apologize. My voice is not 100%. So if you don't understand what I'm saying, please ask me. So we're going to be analyzing-- actually, not really analyzing. We described a second-order method to optimize the log likelihood in a generalized linear model, when the parameter of interest was beta. So here, I'm going to rewrite the whole thing as a beta. So that's the equation you see. But we really have this beta. And at iteration k plus 1, beta is given by beta k. And then I have a plus sign. And the plus, if you think of the Fisher information at beta k as being some number-- if you were to say whether it's a positive or a negative number, it's actually going to be a positive number, because it's a positive semi-definite matrix. So since we're doing gradient ascent, we have a plus sign here. And then the direction is basically gradient ln at beta k. OK? So these are the iterations that we're trying to implement. And we could just do this. At each iteration, we compute the Fisher information, and then we do it again and again. All right. That's called the Fisher-scoring algorithm. And I told you that this was going to converge. And what we're going to try to do in this lecture is to show how we can re-implement this, using iteratively re-weighted least squares, so that each step of this algorithm consists simply of solving a weighted least squares problem. All right. So let's go back quickly and remind ourselves that we are in the Gaussian-- sorry, we're in the exponential family. So if I look at the log likelihood, so here it's ln-- sorry. This is the sum from i equal 1 to n of yi times theta i minus b of theta i, divided by the dispersion parameter. And then I have plus c of yi, phi. OK. So just the exponential went away when I took the log of the likelihood. And I have n observations, so I'm summing over all n observations. All right. Then we had a bunch of formulas that we came up with. So if I look at the expectation of yi-- so that's really the conditional expectation of yi, given xi. But like here, it really doesn't matter. It's just going to be different for each i. This is denoted by mu i. And we showed that this was b prime of theta i. Then the other equation that we found was that. And so what we want to model is this thing. We want g of this thing to be equal to xi transpose beta. All right. So that's our model. And then we had that the variance was also given by the second derivative. I'm not going to go into it. What's actually interesting is to see, if we want to express theta i as a function of xi, what we get: going from xi transpose beta to mu i by g inverse, and then to theta i by b prime inverse, we get that theta i is equal to h of xi transpose beta, where h is-- so which order is this? It's b prime inverse composed with g inverse. OK? So remember, those are all computations that we made last time, but they're going to be useful in our derivation.
And the first thing we did last time is to show that, if I look now at the derivative of the log likelihood with respect to one coordinate of beta, which is going to give me the gradient if I do that for all the coordinates, what we ended up finding is that we can rewrite it in this form: a sum of yi tilde minus mu i tilde. So let's remind ourselves that-- so y tilde is just y divided-- well, OK, y tilde i is yi-- is it times or divided-- times g prime of mu i. Mu tilde i is mu i times g prime of mu i. And then that was just an artificial thing, so that we could actually divide the weights by g prime. But the real thing that built the weights is this h prime. And there's this normalization factor. And so if we read it like that-- so if I also write that wi is h prime of xi transpose beta, divided by g prime of mu i times phi, then I could actually rewrite my gradient, which is a vector, in the following matrix form: the gradient of ln at beta. So the gradient of my log likelihood at beta took the following form. It was x transpose w, and then y tilde minus mu tilde. And here, w was just the matrix with w1, w2, all the way to wn on the diagonal, and 0 off the diagonal. OK? So that was just taking the derivative and doing slight manipulations that said, well, let's just divide whatever is here by g prime and multiply whatever is here by g prime. So today, we'll see why we make this division and multiplication by g prime, which seems to make no sense, but it actually comes from the Hessian computations. So the Hessian computations are going to be a little more annoying. Actually, let me start directly with the coordinate-wise derivative, right? So to build this gradient, what we used, in the end, was that the partial derivative of ln with respect to the jth coordinate of beta was equal to the sum over i of yi tilde minus mu i tilde, times wi, times the jth coordinate of xi. OK? So now, let's just take another derivative, and that's going to give us the entries of the Hessian. OK, so we're going to the second derivative. So what I want to compute is the derivative with respect to beta j and beta k. OK. So where does beta j-- so here, I already took the derivative with respect to beta j. So this is just the derivative with respect to beta k of the derivative with respect to beta j. So what I need to do is to take the derivative of this guy with respect to beta k. Where does beta k show up here? It's sitting in two places. AUDIENCE: In the y's? PHILIPPE RIGOLLET: No, it's not in the y's. The y's are my data, right? But I mean, it's in the y tildes. Yeah, because it's in mu, right? Mu depends on beta. Mu is g inverse of xi transpose beta. And it's also in the wi's. Actually, everything that you see-- well, OK, w depends on mu and on beta explicitly. But the rest depends only on mu. And so we might want to be a little-- well, we can actually use the-- did I use the chain rule already? Yeah, it's here. But OK, well, let's go for it. Oh yeah, OK. Sorry, I should not write it like that, because that was actually-- right, so I make my life miserable by just multiplying and dividing by this g prime of mu. I should not do this, right? So what I should just write-- I'm actually going to remove the g prime of mu, because I just make something that depends on theta appear when it really should not. So let's just look at the last-but-one equality. OK. So that's the one over there, and then I have xi j.
OK, so here, it makes my life much simpler, because yi does not depend on beta, but this guy depends on beta, and this guy depends on beta. All right. So when I take the derivative, I'm going to have to be a little more careful now. But I just have the derivative of a product, nothing more complicated. So this is what? Well, the sum is going to be linear, so it's going to come out. Then I'm going to have to take the derivative of this term. So it's just going to be 1 over phi, then the derivative of mu i with respect to beta k, which I will just write like this, times h prime of xi transpose beta, times xi j. And then I'm going to have the other one, which is yi minus mu i over phi, times the second derivative of h of xi transpose beta. And then I'm going to take the derivative of xi transpose beta with respect to beta k, which is just xi k. So I have xi j times xi k. OK. So I still need to compute this guy. So what is the partial derivative with respect to beta k of-- so mu is g of-- sorry, it's g inverse of xi transpose beta. OK? So what do I get? Well, I'm going to get definitely the second derivative of g. Well, OK, that's actually not a bad idea. Well, no, that's OK. What makes my life easier, actually? Give me one second. Well, there's no option that actually makes my life so much easier. Let's just write it. Let's go with this guy. So it's going to be g prime prime of xi transpose beta, times xi k. OK? So now, what do I have if I collect my terms? I have that this whole thing here, the second derivative, is-- well, I have the sum from i equal 1 to n. Then I have terms that I can factor out, right? Both of these guys have xi j, and this guy pulls out an xi k. And it's also here, xi j times xi k, right? So everybody here is xi j xi k. And now, I just have to take the terms that I have here. The 1 over phi, I can actually pull out in front. And I'm left with the second derivative of g times the first derivative of h, both taken at xi transpose beta. And then, I have this yi minus mu i, times the second derivative of h, taken at xi transpose beta. OK. But here, I'm looking at Fisher scoring. I'm not looking at Newton's method, which means that I can actually take the expectation of the second derivative. So when I start taking the expectation, what's going to happen-- so if I take the expectation of this whole thing here, well-- and when I say expectation, it's always conditionally on the xi's. So let's write it-- x1, ..., xn. So I take it conditional. This is just deterministic. But what is the conditional expectation of yi minus mu i times this guy, conditionally on xi? 0, right? Because this is just the conditional expectation of yi, and everything else depends on xi only, so I can pull it out of the conditional expectation. So I'm left only with this term. OK. So now-- sorry, and I have xi j xi k. OK. So now, I want to go to something that's slightly more convenient for me. So maybe we can skip that part here, because this is not going to be convenient for me anyway. So I just want to go back to something that looks eventually like this. OK, that's what I'm going to want. So I need to have my xi show up with some weight somehow. And the weight should involve h prime divided by g prime. Again, the reason why I want to see g prime coming back is because I had g prime coming in the original w. This is actually the same definition as the w that I used when I was computing the gradient. Those are exactly these w's, those guys.
So I need to have g prime that shows up. And that's where I'm going to have to make a little bit of computation here. And it's coming from this kind of consideration. OK? So this thing here-- well, actually, I'm missing the phi over there, right? So there should be a phi here. OK. So we have exactly this thing, because this tells me that, if I look at the Hessian-- so this was entry-wise, and this is exactly of the form of the j, kth entry of xi xi transpose. Right? We've used that before. So if I want to write this in matrix form, this is just going to be the sum of something that depends on i, times xi xi transpose. So this is 1 over phi, sum from i equal 1 to n, of g prime prime of xi transpose beta, h prime of xi transpose beta, xi xi transpose. OK? And that's for the entire matrix. Here, that was just the j, kth entry of this matrix. And you can just check that, if I take this matrix, the j, kth entry is just the product of the jth coordinate and the kth coordinate of xi. All right. So now I need to do my rewriting. Can I write this? So I'm missing something here, right? Oh, I know where it's coming from. Mu is not g prime of x beta. Mu is g inverse of x beta, right? So the derivative of this is not g prime prime. It's-- no, 1 over this, right? Yeah. OK? The derivative of g inverse is 1 over g prime of g inverse. I need you guys, OK? All right. So now, I'm going to have to rewrite this. This guy is still going to go away. It doesn't matter, but now this thing is becoming h prime over g prime of g inverse of xi transpose beta, which is the same here, which is the same here. OK? Everybody approves? All right. Well, now, it's actually much nicer. What is g inverse of xi transpose beta? Well, that was exactly the mistake that I just made, right? It's mu i itself. So this guy is really g prime of mu i. Sorry, just the bottom, right? So now, I have something which looks like a sum from i equal 1 to n of h prime of xi transpose beta, divided by g prime of mu i, phi, times xi xi transpose, which I can certainly write in matrix form as x transpose wx, where w is exactly the same as before. So it's w1, ..., wn. And wi is h prime of xi transpose beta, divided by g prime of mu i-- there's a prime here-- times phi, which is the same that we had here. And it's supposed to be the same that we have here, except the phi is in white. That's why it's not there. OK. All right? So it's actually simpler than what's on the slides, I guess. All right. So now, if you pay attention, I actually never forced this g prime of mu i to be here. Actually, I even tried to make a mistake to not have it. And so this g prime of mu i shows up completely naturally. If I had started with this, you would have never questioned why I actually didn't multiply by g prime and divided by g prime completely artificially here. It just shows up naturally in the weights. But it's just more natural for me to compute the first derivative first and the second derivative second, OK? And so we just did it the other way around. But now, let's assume we forgot about everything. We have this. This is a natural way of writing it, x transpose wx. If I want something that involves some weights, I have to force them in by dividing by g prime of mu i, and therefore multiplying yi and mu i by this g prime. OK? So now, if we recap what we've actually found, we got that-- let me write it here. We also have that the expectation of the Hessian of ln at beta is x transpose wx.
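As a sanity check on these two matrix formulas, here is a small numerical sketch-- my own, not the course's code-- specialized to Poisson regression with the log link, which is canonical, so h is the identity, phi is 1, and g prime of mu is 1 over mu:

```python
# Sketch: gradient = X^T W (y_tilde - mu_tilde), and X^T W X is the matrix
# showing up in the expected Hessian, for Poisson regression with log link.
# Here w_i = mu_i, y_tilde_i = y_i / mu_i, mu_tilde_i = 1.
import numpy as np

rng = np.random.default_rng(6)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = rng.poisson(np.exp(X @ np.array([0.5, -0.3, 0.2])))

beta = np.zeros(p)
mu = np.exp(X @ beta)          # mu_i = g^{-1}(x_i^T beta)
W = np.diag(mu)                # w_i = h'(.)/(g'(mu_i) * phi) = mu_i here

grad = X.T @ W @ (y / mu - np.ones(n))   # X^T W (y_tilde - mu_tilde)
fisher = X.T @ W @ X                      # positive semi-definite

# for the canonical link the gradient collapses to the familiar X^T (y - mu):
print(np.allclose(grad, X.T @ (y - mu)))  # True
print(fisher.shape)                       # (3, 3)
```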
So if I go back to my iterations over there, I should actually update beta k plus 1 to be equal to beta k plus the inverse. So that's actually equal to negative i of beta k-- well, yeah. That's negative i of beta, I guess. Oh, and beta here shows up in w, right? w depends on beta. So that's going to be beta k. So let me call it wk. So that's the diagonal of h prime of xi transpose beta k, this time, divided by g prime of mu i k, times phi. OK? So this beta k induces a mu by looking at g inverse of xi transpose beta k. All right. So mu i k is g inverse of xi transpose beta k. So that's the mu at iteration k. And so now, if I actually write these things together, I get minus x transpose wk x, inverse. And then I have my gradient here that I have to apply at k, which is x transpose wk, and then I have y tilde k minus mu tilde k, where, again, the superscripts k are pretty natural. y tilde k just means-- so that's just yi times g prime of mu i k. And mu tilde k is, if I look at the ith coordinate, it's just going to be mu i k times g prime of mu i k. OK? So I just add superscripts k to everything. So I know that those things get updated every time, right? Every time I make one iteration, I get a new value for beta, I get a new value for mu, and therefore, I get a new value for w. Yes? AUDIENCE: [INAUDIBLE] the Fisher equation [INAUDIBLE]? PHILIPPE RIGOLLET: Yeah, that's a good point. So that's definitely a plus, because this is a positive semi-definite matrix. So this is a plus. And well, that's probably where I erased it. OK. Let's see where I made my mistake. So there should be a minus here. There should be a minus here. There should be a minus even at the beginning, I believe. So that means that what is my-- oh, yeah, yeah. So you see, when we go back to the first-- so what I erased was basically this thing here, yi minus mu i. And when I took the first derivative-- so it was the derivative of this h prime. So the derivative of the second term-- I mean, the derivative of the second term was actually killed, because we took the expectation of this guy. But when we took the derivative of the first term, which is the only one that stayed, this guy went away. But there was a negative sign from this guy, because that's the thing we took the negative of. So it's really, when I take my second derivative, I should carry out the minus signs everywhere. OK? So I just forgot this minus throughout. You see, the first term went away, on the first line there. The first term went away, because the conditional expectation of yi minus mu i, given xi, is 0. And then I had this minus sign in front of everyone, and I forgot it. All right. Any other mistake that I made? We're good? All right. So now, this is what we have, that beta k plus 1 is equal to beta k plus this thing. OK? And if you look at this thing, it sort of reminds us of something. Remember the least squares estimator. So here, I'm going to actually deviate slightly from the slides. And I will tell you how. The slides take beta k and put it in here, which is one way to go-- you just think of this as one big least squares solution. Or you can keep the beta k, solve another least squares, and then add it to the beta k that you have. It's the same thing. So I will take a different route. So you have the two options, all right? OK. So when we did least squares-- so, parenthesis, least squares-- we had y equals x beta plus epsilon.
And our estimator beta hat was x transpose x, inverse, x transpose y, right? And that was just solving the first-order condition, and that's what we found. Now look at this-- x transpose something x, inverse, x transpose something something. OK? So this looks like the same as the left board, if wk is equal to the identity matrix, meaning we don't see it, and y is equal to y tilde k minus mu tilde k. So those are the similarities-- the fact that the response variable is different is really not a problem. We just have to pretend that this is equal to y tilde minus mu tilde. I mean, that's just least squares. When you call a software that does least squares for you, you just tell it what y is, you tell it what x is, and it makes the computation. So you would just lie to it and say the actual y I want is this thing. And then we need to somehow incorporate those weights. And so the question is, is that easy to do? And the answer is yes, because this is a setup where this would actually arise. So one of the things that's very specific to what we did here with least squares: when we did at least the inference, we assumed that epsilon was normal with mean 0 and covariance matrix the identity, right? What if the covariance matrix is not the identity? If the covariance matrix is not the identity, then your maximum likelihood is not exactly least squares. If the covariance matrix is any matrix, you have another solution, which involves the inverse of the covariance matrix that you have. But if your covariance matrix, in particular, is diagonal-- which would mean that each observation that you get in this system of equations is still independent, but the variances can change from one line to another, from one observation to another-- then it's called heteroscedastic. "Hetero" means "not the same." "Scedastic" is "scale." And in the heteroscedastic case, you would have something slightly different. And it makes sense that, if you know that some observations have much less variance than others, you might want to give them more weight. OK? So if you think about your usual drawing, and maybe you have something like this, but the actual line is really-- OK, let's say you have this guy as well, so just a few here. If you start drawing this thing, if you do least squares, you're going to see something that looks like this on those points. But now, if I tell you that, on this side, the variance is equal to 100, meaning that those points are actually really far from the true line, and here on this side, the variance is equal to 1, meaning that those points are actually close to the line you're looking for, then the line you should be fitting is probably this guy, meaning do not trust the guys that have a lot of variance. And so you need somehow to incorporate that. If you know that those things have much more variance than these guys, you want to weight this. And the way you do it is by using weighted least squares. OK. So we're going to open a parenthesis on weighted least squares. It's not a fundamental statistical question, but it's useful for us, because this is exactly what's going to spit out something that looks like this, with this matrix w in there. OK. So let's go back in time for a second. Assume we're still covering least squares regression. So now, I'm going to assume that y is x beta plus epsilon, but this time, epsilon is a multivariate Gaussian in, say, p dimensions with mean 0.
And the covariance matrix, I will write as w inverse, because w is going to be the one that's going to show up. OK? So this is the so-called heteroscedastic case. That's how it's spelled-- and yet another name that you can pick for your soccer team or a cappella group. All right. So the maximum likelihood, in this case-- so actually, let's compute the maximum likelihood for this problem, right? So the log likelihood is what? Well, we're going to have the term that tells us that it's going to be-- so OK. What is the density of a multivariate Gaussian? So it's going to be a multivariate Gaussian in p dimensions with mean x beta and covariance matrix w inverse, right? So that's the density that we want. Well, it's of the form 1 over the square root of the determinant of w inverse, times 2 pi to the p/2. OK? Times exponential, and now, what I have is minus x minus x beta, transpose, w-- so that's the inverse of w inverse-- x minus x beta, divided by 2. OK? So in general this is exponential of minus x minus mu, transpose, sigma inverse, x minus mu, divided by 2. And if you want a sanity check, just assume that sigma-- yeah? AUDIENCE: Is it x minus x beta or y? PHILIPPE RIGOLLET: Well, you know, if you want this to be y, then this is y, right? Sure. Yeah, maybe it's less confusing. So if you do p is equal to 1, then what does it mean? It means that you have this mean here. So let's forget about what it is. But this guy is going to be just 1 over sigma squared, right? So what you see here is the inverse of sigma squared. So that's going to be divided by 2 sigma squared, like we usually see it. The determinant of w inverse is just the product of the entries of the 1 by 1 matrix, which is just sigma squared. OK? So that should be actually-- yeah, that's sigma squared. And then I have this 2 pi. So taking the square root of this, because p is equal to 1, I get sigma square root 2 pi, which is the normalization that I get. This is not going to matter, because, when I look at the log likelihood as a function of beta-- so I'm assuming that w is known-- what I get is something which is a constant. So it's minus 1/2 times the log of the determinant of w inverse times 2 pi to the p. OK? So this is just going to be a constant. It won't matter when I do the maximum likelihood. And then I'm going to have what? I'm going to have plus 1/2 of y minus x beta, transpose, w, y minus x beta-- sorry, there's a minus here. So if I want to take the maximum of this guy, I'm going to have to take the minimum of this thing. And the minimum of this thing, if you take the derivative, you get to see-- so that's what we have, right? We need to compute the minimum of y minus x beta, transpose, w, y minus x beta. And the solution that you get-- I mean, you can actually check this for yourself. The way you can see this is by doing the following. If you're lazy and you don't want to redo the entire thing-- maybe I should keep that guy. W is diagonal, right? I'm going to assume that, so w inverse is diagonal, and I'm going to assume that no variance is equal to 0 and no variance is equal to infinity, so that both w inverse and w have only positive entries on the diagonal. All right? So in particular, I can talk about the square root of w, which is just the diagonal matrix with the square roots on the diagonal. OK? And so I want to minimize in beta, y minus x beta, transpose, w, y minus x beta. So I'm going to write w as square root of w times square root of w, which I can, because w-- and it's just the simplest thing, right?
If w is w1, ..., wn-- so that's my w-- then the square root of w is just square root of w1, ..., square root of wn, and then 0s elsewhere. OK? So the product of those two matrices gives me definitely back what I want, and that's the usual matrix product. Now, what I'm going to do is I'm going to push one on one side and push the other one on the other side. So that gives me that this is really the minimum over beta of-- well, here I have this transposed, so I have to put it on the other side. w is clearly symmetric, and so is square root of w. So the transpose doesn't matter. And so what I'm left with is square root of w y minus square root of w x beta, transpose, and then times itself. So that's square root of w y minus square root of w-- oh, I don't have enough space-- x beta. OK, and that stops here. But this is the same thing that we've been doing before. This is a new y. Let's call it y prime. This is a new x. Let's call it x prime. And now, this is just the least squares estimator associated to a response y prime and a design matrix x prime. So I know that the solution is x prime transpose x prime, inverse, x prime transpose y prime. And now, I'm just going to substitute again what my x prime is in terms of x and what my y prime is in terms of y. And that gives me exactly x transpose square root w square root w x, inverse. And then I have x transpose square root w for this guy. And then I have square root w y for that guy. And that's exactly what I wanted. I'm left with x transpose wx, inverse, x transpose wy. OK? So that's a simple way to take into account the w that we had before. And you could actually do it with any matrix that's positive semi-definite, because you can actually talk about the square root of those matrices. The square root of a matrix is just a matrix such that, when you multiply it by itself, it gives you the original matrix. OK? So here, that was just a shortcut that consisted in saying, OK, maybe I don't want to recompute the gradient of this quantity, set it equal to 0, and see what beta hat should be. Instead, I am going to assume that I already know that, if I did not have the w, I would know how to solve it. And that's exactly what I did. I said, well, I know that this is the minimum of something that looks like this, when I have the primes. And then I just substitute back my w in there. All right. So that's just the lazy computation. But again, if you don't like it, you can always take the gradient of this guy. Yes? AUDIENCE: Why is the solution written in the slides different? PHILIPPE RIGOLLET: Because there's a mistake. Yeah, there's a mistake on the slides. How did I make that one? I'm actually trying to parse it back. I mean, it's clearly wrong, right? Oh, no, it's not. No, it is. So it's not clearly wrong. Actually, it is clearly wrong. Because if I put the identity here, those are still associative, right? So this product is actually not compatible. So it's wrong, but there's just this extra thing that I probably copy-pasted from some place. Since this is one of my latest slides, I'll just color it in white. But yeah, sorry, this parenthesis is not supposed to be here. Thank you. AUDIENCE: [INAUDIBLE]. PHILIPPE RIGOLLET: Yeah. OK? AUDIENCE: So why not square root [INAUDIBLE]? PHILIPPE RIGOLLET: Because I have two of them. I have one that comes from the x prime that's here, this guy. And then I have one that comes from this guy here.
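Here is that square-root-of-w trick as a short sketch-- mine, with made-up data and weights-- checking that ordinary least squares on sqrt(w) y and sqrt(w) x matches the closed form:

```python
# Sketch: weighted least squares via OLS on rescaled data.
import numpy as np

rng = np.random.default_rng(7)
n, p = 100, 3
X = rng.normal(size=(n, p))
w = rng.uniform(0.5, 2.0, size=n)      # positive diagonal weights
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n) / np.sqrt(w)

sw = np.sqrt(w)
beta_trick, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)

W = np.diag(w)
beta_closed = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print(np.allclose(beta_trick, beta_closed))   # True
```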
OK, so the solution-- let's write it in some place that's actually legible-- the correction for this thing is x transpose wx, inverse, x transpose wy. OK? So you just squeeze in this w in there. And that's exactly what we had before: x transpose wx, inverse, x transpose w, some y. OK? And what I claim is that this is routinely implemented. As you can imagine, heteroscedastic linear regression is something that's very common. So every time you have a least squares solver, you also have a way to put in some weights. You don't have to put diagonal weights, but here, that's all we need. So here on the slides, again, I took the beta k, and I put it in there, so that I have only one least squares solution to formulate. But let's do it slightly differently. What I'm going to do here now is I'm going to say, OK, let's feed it to some least squares. So let's do weighted least squares on a response y being y tilde k minus mu tilde k, and design matrix being, well, just the x itself. So that doesn't change. And the weights-- so the weights are what? The weights are the wk that I had here. So wki is h prime of xi transpose beta k, divided by g prime of mu i at time k, times phi. OK, and so this, if I solve it, will spit out something that I will call a solution. I will call it u hat k plus 1. And to get beta hat k plus 1, all I need to do is to do beta k plus u hat k plus 1. OK? And that's because-- so here, that's not clear. But I started from there, remember? I started from this guy here. So I'm just solving a weighted least squares that's going to give me this thing. That's what I called u hat k plus 1. And then I add it to beta k, and that gives me beta k plus 1. So I just have this intermediate step, which is removed in the slides. OK? So then you can repeat until convergence. What does it mean to repeat until convergence? AUDIENCE: [INAUDIBLE]? PHILIPPE RIGOLLET: Yeah, exactly. So you just set some threshold, and you say-- I promise you that this will converge, right? So you know that at some point, you're going to be there. You're going to go there, but you're never going to be exactly there. And so you just say, OK, I want this accuracy on my data. Actually, machine precision is a little strong, especially if you have 10 observations to start with. You know you're going to have something that's going to have some statistical error. So that should actually guide you into what kind of error you want to be making. So for example, a good rule of thumb is that, if you have n observations-- if you want the L2 distance between the two consecutive betas to be less than 1/n, you should be good enough. It doesn't have to be at machine precision. And so it's clear how we do this, right? So here, I just have to maintain a bunch of things, right? So remember, at every step, I have to recompute a bunch of things. So I have to recompute the weights. But if I want to recompute the weights, not only do I need the previous iterate, but I need to know how the previous iterate impacts my means. So at each step, I have to recalculate mu i k by doing g inverse, right? Remember, mu i k was just g inverse of xi transpose beta k, right? So I have to recompute that. And then I use this to compute my weights. I also use this to compute my y, right? So my y depends also on g prime of mu i k. I feed that to my weighted least squares engine. It spits out the u hat k plus 1, that I add to my previous beta k.
And that gives me my new beta k plus 1. OK. So here's the pseudocode, if you want to take some time to parse it. All right. So here again, the trick is not much. It's just saying, if you don't feel like implementing Fisher scoring or inverting your Hessian at every step, then a weighted least squares is actually going to do it for you automatically. All right. That's just a numerical trick. There's nothing really statistical about this, except the fact that the solution called for at each of the steps reminded us of least squares, except that there were some extra weights. OK. So to conclude, we'll need to know, of course, x, y, the link function. Why do we need the variance function? I'm not sure we actually need the variance function. No, I don't know why I say that. You need phi, not the variance function. So where do you start, actually, right? So clearly, if you start very close to your solution, you're actually going to do much better. And one good way to start-- so for the beta itself, it's not clear what it's going to be. But you can actually get a good idea of what beta is by just having a good idea of what mu is. Because mu is g inverse of xi transpose beta. And so what you could do is to try to set mu to be the actual observations that you have, because that's the best guess that you have for their expected value. And then you just say, OK, once I have my mu, I know that my mu is a function of this thing. So I can write g of mu and solve it, using your least squares estimator, right? So g of mu is of the form x beta. So once you have your mu, you pass it through g, and then you solve for the beta that you want. And then that's the beta that you initialize with. OK? And actually, this was your question from last time. As soon as I use the canonical link, Fisher scoring and Newton-Raphson are the same thing, because the Hessian is actually deterministic in that case, just because, when you use the canonical link, h is the identity, which means that its second derivative is equal to 0. So this term goes away even without taking the expectation. So remember, the term that went away was of the form yi minus mu i, divided by phi, times h prime prime of xi transpose beta, right? That's the term where we said, oh, the conditional expectation of this guy is 0. But if h prime prime is already equal to 0, then there's nothing that changes. There's nothing that goes away. It was already equal to 0. And that always happens when you have the canonical link, because h is b prime inverse composed with g inverse. And the canonical link is g equals b prime inverse, so this thing is the identity. So the second derivative of the function f of x equals x is 0. OK. My screen says end of show. So we can start with some questions. AUDIENCE: I just wanted to clarify. So iterative-- what does it say for iterative-- PHILIPPE RIGOLLET: Reweighted least squares. AUDIENCE: Reweighted least squares is an implementation of the Fisher scoring [INAUDIBLE]? PHILIPPE RIGOLLET: That's an implementation that's just making calls to weighted least squares oracles. It's called an oracle sometimes. An oracle is what you assume the machine can do easily for you. So if you assume that your machine is very good at multiplying by the inverse of a matrix, you might as well just do Fisher scoring yourself, right? It's just a way so that you don't have to actually do it. And usually, those things are implemented-- and I just said routinely-- in statistical software. But they're implemented very efficiently in statistical software.
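For concreteness, here is a compact sketch of the whole loop-- my own illustration of the procedure described above, not code from the course-- for logistic regression, where the link is canonical, so Fisher scoring and Newton-Raphson coincide:

```python
# Sketch: iteratively reweighted least squares for logistic regression.
# Canonical link: h is the identity, phi = 1, g'(mu) = 1/(mu(1-mu)),
# so w_i = mu_i (1 - mu_i) and y_tilde - mu_tilde = (y - mu) * g'(mu).
import numpy as np

rng = np.random.default_rng(8)
n, p = 500, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.25])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

beta = np.zeros(p)                       # initialize
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))     # mu_i = g^{-1}(x_i^T beta)
    w = mu * (1 - mu)                    # the weights w_i^(k)
    z = (y - mu) / w                     # y_tilde^(k) - mu_tilde^(k)
    # weighted least squares step, via the sqrt(w) trick from before:
    u, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * X, np.sqrt(w) * z,
                            rcond=None)
    beta = beta + u                      # beta^(k+1) = beta^(k) + u_hat^(k+1)
    if np.linalg.norm(u) < 1e-8:         # "repeat until convergence"
        break

print(beta)   # close to beta_true at this sample size
```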
So this is going to be one of the fastest ways you're going to have to do this step, especially for large-scale problems. AUDIENCE: So the thing that computers can do well is the multiply [INAUDIBLE]. What's the thing that the computers can do fast and what's the thing that [INAUDIBLE]? PHILIPPE RIGOLLET: So if you were to do this in the simplest possible way, your iteration for, say, Fisher scoring is just to multiply by the inverse of the Fisher information, right? AUDIENCE: So finding that inverse is slow? PHILIPPE RIGOLLET: Yeah, so it takes a bit of time. Whereas, since you know you're going to multiply directly by something-- those things are not as optimized as solving least squares. Actually, the way it's typically done is by solving some least squares. So you might as well just do the least squares that you like. And there's also less-- well, no, there's no-- well, there is less recalculation, right? Here, for Fisher, you would have to recompute the entire Fisher information matrix. Whereas here, you don't have to. Right? You really just have to compute some vectors and the vector of weights, right? So the Fisher information matrix has, say, n choose 2 entries that you need to compute, right? It's symmetric, so it's order n squared entries. But here, the only things you update, if you think about it, are this weight matrix-- so there are only the diagonal elements that you need to update-- and these vectors in there also. So that's much less to actually compute than the n squared entries. It does it for you somehow. Any other question? Yeah? AUDIENCE: So if I have a data set [INAUDIBLE], then I can always try to model it with least squares, right? PHILIPPE RIGOLLET: Yeah, you can. AUDIENCE: And so this is like setting my weight equal to 1-- the identity, essentially, right? PHILIPPE RIGOLLET: Well, not exactly, because the g also shows up in this correction that you have here, right? AUDIENCE: Yeah. PHILIPPE RIGOLLET: I mean, I don't know what you mean by-- AUDIENCE: I'm just trying to say, are there ever situations where I'm trying to model a data set and I would want to pick my weights in a particular way? PHILIPPE RIGOLLET: Yeah. AUDIENCE: OK. PHILIPPE RIGOLLET: I mean-- AUDIENCE: [INAUDIBLE] example [INAUDIBLE]. PHILIPPE RIGOLLET: Well, OK, there's the heteroscedastic case for sure. So if you're going to actually compute those things-- and more generally, I don't think you should think of those as being weights. You should really think of those as being matrices that you invert. And don't think of it as being diagonal, but really think of them as being full matrices. So when we wrote weighted least squares here, the w, I said, is diagonal. But all the computations never really used the fact that it's diagonal. So what shows up here is just the inverse of your covariance matrix. And so if you have data that's correlated, this is where it's going to show up.
MIT_18650_Statistics_for_Applications_Fall_2016
7_Parametric_Hypothesis_Testing.txt
PROFESSOR: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. So welcome back. So we are now moving to a new chapter, which is going to have a little more of a statistical flavor when it comes to designing methods, all right? Because if you think about it, OK-- some of you have probably attempted problem number two in the problem set. And you realize that maximum likelihood does not give you super trivial estimators, right? I mean, when you have an N theta, theta, then the thing you get is not something you could have guessed before you actually attempted to solve that problem. And so, in a way, we've seen already sophisticated methods. However, in many instances, the maximum likelihood estimator was just an average. And in a way, even if we had this confirmation from maximum likelihood that indeed that was the estimator that maximum likelihood would spit out, and that our intuition was therefore pretty good, most of the statistical analysis, or use of the central limit theorem-- all these things actually did not come in the building of the estimator, in the design of the estimator, but really in the analysis of the estimator. And you could say, well, if I know already that the best estimator is the average, I'm just going to use the average. I don't have to, basically, quantify how good it is. I just know it's the best I can do. We're going to talk about tests. And we're going to talk about parametric hypothesis testing. So you should view this as-- parametric means, well, it's about a parameter, like we did before. And hypothesis testing is on the same level as estimation. And on the same level as estimator will be the word "test," OK? And when we're going to devise a test, we're going to actually need to understand random fluctuations that arise from the central limit theorem better, OK? It's not just going to be in the analysis. It's also going to be in the design. And everything we've been doing before in understanding the behavior of an estimator is actually going to come in and be extremely useful in the actual design of tests, OK? So as an example, I want to talk to you about some real data. I will not study this data. But this data actually exists. You can find it in R. And so, it's the data from the so-called Credit Union Cherry Blossom Run, which is a 10 mile race. It takes place every year in D.C. It seems that some of the years are pretty nice. In 2009, there were about 15,000 participants. Pretty big race. And the average running time was 103.5 minutes, all right? So about an hour and a half or a little bit more. And so, you can ask the following question, right? This is actual data, right? 103.5 is actually the average running time for all 15,000. Now, this, in practice, may not be something very suitable. And you might want to just sample a few runners and try to understand how they're behaving every year without having to collect the entire data set. And so, you could ask the question-- well, let's say my budget is to ask maybe 10 runners what their running time was. I still want to be able to determine whether they were running faster in 2012 than in 2009. Why do I put 2012, and not 2016? Well, because the data set for 2012 is also available. So if you are interested and you know how to use R, just go and have fun with it.
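If you do go play with it, a minimal sketch of the modeling check described next might look like this-- the variable names and the stand-in sample are mine; with the real data you would load the 2009 running times instead:

```python
# Sketch: superimpose a fitted Gaussian PDF on the histogram of running times.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# times_2009 = ...  # the ~15,000 running times in minutes, from the data set
times_2009 = np.random.default_rng(9).normal(103.5, np.sqrt(373), 15000)  # stand-in

mu_hat, var_hat = times_2009.mean(), times_2009.var()
grid = np.linspace(times_2009.min(), times_2009.max(), 400)

plt.hist(times_2009, bins=60, density=True)
plt.plot(grid, norm.pdf(grid, loc=mu_hat, scale=np.sqrt(var_hat)))
plt.xlabel("running time (minutes)")
plt.show()
```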
So to answer this question, what we do is we select n runners, right? So n is a moderate number that's more manageable than 15,000, and we select them from the 2012 race at random. That's where the random variable is going to come from, right? That's where we actually inject randomness into our problem. So remember, this is an experiment. So really, in a way, the runners are the omegas. And I'm interested in measurements on those guys. So this is how I get a random variable. And this random variable here is measuring their running time. OK. If you look at the data set, you have all sorts of random variables you could measure about those random runners. Country of origin. I don't know, height, age, a bunch of things. OK. Here, the random variable of interest is the running time. OK. Everybody understand what the process is? OK. So now I'm going to have to make some modeling assumptions. And here, I'm actually pretty lucky. I actually have all the data from a past year. I mean, this is not the data from 2012, which I also have, but I don't use. But I can actually use past data to try to understand what distribution I have, right? I mean, after all, running time is going to be rounded to something. Maybe I can think of it as a discrete random variable. Maybe I can think of it as an exponential random variable. Those are positive numbers. I mean, there are many kinds of distributions I could think of for this modeling part. But it turns out that if you actually plot the histogram of those running times for all 15,000 runners in 2009, you are actually pretty happy to see that it really looks like a bell-shaped curve, which suggests that this should be a Gaussian. So what you go on to do is you estimate the mean from past observations, which was actually 103.5, as we said. You estimate the variance, which was 373. And you just try to superimpose the curve with this one, which is a Gaussian PDF with mean 103.5 and variance 373. And you see that they actually look very much alike. And so here, you're pretty comfortable saying that the running time actually has a Gaussian distribution. All right? So now, for the x1 to xn, I'm going to say they're Gaussian, OK? I still need to specify two parameters. So what I want to know is, is the distribution the same as in past years, right? So I want to know, if I look at the random variable of interest-- if I, say, pick one, say x1-- does it have the same distribution in 2012 that it did in 2009? OK. And so, the question is, does x1 have a Gaussian distribution with mean 103.5 and variance 373? Is that clear? OK. So this question that calls for a yes or no answer is a hypothesis testing problem. I am testing a hypothesis. And this is the basis of basically all of data-driven scientific inquiry. You just ask questions. You formulate a scientific hypothesis. Knocking down this gene is going to cure melanoma-- is this true? I'm going to observe some patients in whom I knock down this gene. I'm going to collect some measurements. And I'm going to try to answer this yes/no question, OK? It's different from the question, what is the mean running time for this year? OK. So hypothesis testing is testing if this hypothesis is true. The hypothesis, in plain English, is what we just said: were runners running faster? All right? Anybody could formulate this hypothesis.
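The histogram check just described takes only a few lines. This is a hedged sketch in Python with simulated stand-in data (the real 2009 times come from the R data set mentioned above):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    times = rng.normal(103.5, np.sqrt(373), size=15000)   # stand-in for the 2009 times

    mu_hat = times.mean()                                 # about 103.5
    var_hat = times.var()                                 # about 373

    plt.hist(times, bins=50, density=True, alpha=0.5)     # normalized histogram
    grid = np.linspace(times.min(), times.max(), 400)
    # Superimpose the Gaussian PDF with the estimated mean and variance
    plt.plot(grid, norm.pdf(grid, loc=mu_hat, scale=np.sqrt(var_hat)))
    plt.xlabel("running time (minutes)")
    plt.show()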
Now, you go to a statistician. And he's like, oh, what you're really asking me is whether x1 has a Gaussian distribution with mean less than 103.5 and variance 373, right? That's really the question that you ask in statistical terms. And so, if you're asking if this is the same as before, there are many ways it could not be the same as before. There are basically three ways it could not be the same as before. It could be the case that the expectation of x1 is no longer 103.5. So the expectation has changed. Or the variance has changed. Or the distribution has changed. I mean, who knows? Maybe runners are now all running holding hands, and it's now a point mass at one given time. OK. So you never know what could [INAUDIBLE]. Now of course, if you allow for any change, you will find change. And so what you have to do is to factor in as much knowledge as you can. Make as many modeling assumptions as you can, so that you can let the data speak about your particular question. Here, your particular question is, are they running faster? So you're only really asking a question about the expectation. You really want to know if the expectation has changed. So as far as you're concerned, you're happy to make the assumption that the rest has been unchanged. OK. And so, this is the question we're asking: is the expectation now less than 103.5? Because you specifically asked whether runners were going faster this year, right? Whether they tend to go faster rather than slower, all right? OK. So this is the question we're asking in mathematical terms. So first, to do that, I need to basically fix the rest. And fixing the rest is actually part of the modeling assumptions. So I fix my variance to be 373, OK? I assume that the variance has not changed between 2009 and 2012. Now, this is an assumption. It turns out it's wrong. So if you look at the data from 2012, this is not the correct assumption. But I'm just going to make it right now for the sake of argument, OK? And also the fact that it's Gaussian. Now, this one is going to be hard to violate, right? I mean, where did this bell-shaped curve come from? Well, it's just natural when you measure a bunch of things. The central limit theorem appears in the small things of nature. I mean, that's the bedtime story you get about the central limit theorem. And that's why the bell-shaped curve is everywhere in nature. It's the sum of little independent things that are going on. And this Gaussian assumption, even if I wanted to relax it, there's not much else I can do. It is pretty robust across the years. All right. So the only thing that we did not fix is the expectation of x1, and now I want to know what it is. And since I don't know what it is, I'm going to call it mu. And it's going to be the variable of interest, all right? So it's just a number mu. Whatever it is, I can try to estimate it, maybe using maximum likelihood estimation. Probably using the average, because this is Gaussian, and we know that the maximum likelihood estimator for a Gaussian is just the average. And now we only want to test if mu is equal to 103.5, like it was in 2009. Or, on the contrary, if mu is not equal to 103.5. And more specifically, if mu is actually strictly less than 103.5. That's the question you ask. Now, why am I writing "mu equal to 103.5 versus mu less than 103.5," and not "mu equal to 103.5 versus mu not equal to 103.5"? It's because, since you asked me a more precise question, I'm going to be able to give you a more precise answer.
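In symbols, the modeling assumptions and the two competing claims just described read as follows (this is only a transcription into formulas of what was said above):

    \[
    X_1, \dots, X_n \overset{\text{iid}}{\sim} \mathcal{N}(\mu, 373),
    \qquad \text{test } \mu = 103.5 \text{ against } \mu < 103.5 .
    \]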
And so, if your question is very specific-- are they running faster?-- I'm going to factor that into what I write. If you just ask me, is it the same? then I'm going to have to write, is it different from 103.5? And that's less information about what you're looking for, OK? So by making all these modeling assumptions-- the fact that the variance doesn't change, the fact that it's still Gaussian-- I've actually reduced the "number" of possible distributions. And I put number in quotes, because there are still infinitely many of them. But I'm limiting the number of ways the hypothesis can be violated. The number of possible alternative realities for this hypothesis, all right? For example, I'm saying there's no way mu can be larger than 103.5. I've already factored that in, OK? It could be. But I'm actually just going to say that if it's larger, all I'm going to be able to tell you is that it's not smaller. I'm not going to be able to tell you that it's actually larger, OK? And the only way my hypothesis can be rejected now is if x belongs to a very specific family of distributions: if it has a distribution which is Gaussian with mean mu and variance 373 for a mu which is less than 103.5. All right? So basically we started with-- so that's the reality-- x1 follows N(103.5, 373), OK? And this is everything else, right? So for example, here is x follows some exponential(0.1), OK? This is just another distribution here. Those are all the possible distributions. What we said is, OK, first of all, let's just keep only the Gaussian distributions, right? And second, we said, well, among those Gaussian distributions-- well, maybe this one should be at the boundary-- let's only look at the Gaussians here. So these guys here are all the Gaussians with mean mu and variance 373 for mu less than 103.5, OK? So when you're going to give me data, I'm going to be able to say, well, am I this guy? Or am I one of those guys? Rather than searching through everything. And the more you search, the easier it is for you to find something that fits the data better, right? And so, if I allow everything possible, then there's going to be something that just by pure randomness is actually going to look better for the data, OK? So for example, if I draw 10 random variables, right? If n is equal to 10. And let's say they take 10 different values. Then it's actually more likely that those guys come from a discrete distribution that takes each of these values with probability 1 over 10 than from some Gaussian random variable, right? That would fit perfectly. I can actually explain it. If the numbers I got were, say-- let's say I collect three numbers: 91, 95, and 102. Then the most likely distribution for those guys is the discrete distribution that takes three values: 91 with probability 1/3, 95 with probability 1/3, and 102 with probability 1/3, right? That's definitely the most likely distribution for this. So if I allowed this, I would say, oh no. This is not distributed according to that. It's distributed according to this very specific distribution, which is somewhere in the realm of all possible distributions, OK? So now we're just going to try to carve out all this stuff by making our assumptions. OK. So here in this particular example, just make a mental note that a little birdie told me that the reference number is 103.5, OK? That was the thing I was actually comparing to. In practice, it's actually seldom the case that you have this reference for yourself to think of, right?
Maybe here, I just happen to have a full data set of all the runners of 2009. But if I really just asked you, were runners faster in 2012 than in 2009? Here's $10 to perform your statistical analysis. What you're probably going to do is call maybe 10 runners from 2012 and maybe 15 runners from 2009, ask them, and try to compare their means. There's no standard reference. You would not be able to come up with this 103.5, because this data may be expensive to get or something. OK. So this is really more the standard case, all right? Where you really compare two things with each other, but there's no actual ground truth number that you're comparing to. OK. So we'll come back to that in a second. I'll tell you what the other example looks like. So let's just stick to this example. I tell you it's 103.5, OK? Let's try to have our intuition work the same way. We said, well, averages work well. The average over these 10 guys should tell me what the mean should be. So I can just say, well, x bar is going to be close to the true mean by the law of large numbers. So I'm going to check whether x bar is less than 103.5, and conclude that in this case, indeed, mu is less than 103.5, because those two quantities are close, right? I could do that. The problem is that this could go pretty wrong. Because if n is small, then I know that xn bar is not equal to mu. I know that xn bar is close to mu. But I also know that there's a pretty high chance that it's not equal to mu. In particular, I know it's going to be somewhere around 1 over root n away from mu, right? 1 over root n being the rate coming from what? The CLT, right? That's the root n that comes from the CLT. In blunt words, the CLT tells me the average is at distance about 1 over root n from the expectation, pretty much. That's what it's telling me. So 1 over root n. If I have 10 people in there, 1 over root 10 is not a tiny number, right? It's like 1/3, pretty much. So think 103.5 plus or minus something like 1/3. If the true mean was actually 103.4, but my average was telling me it's 103.4 plus 1/3, I would actually come to the opposite conclusion, right? So let's say that mu is equal to 103.4, OK? You're not supposed to know this, right? That's the hidden truth. OK. Now I have n equal to 10. So I know that x bar n minus 103.4 is something of the order of 1 over the square root of 10, which is of the order of, say, 0.3. OK. So here, this is all hand wavy, OK? But that's what the central limit theorem tells me. What it means is that it is possible that x bar n is actually equal to 103.4 plus 0.3, which is equal to 103.7. Which means that while the truth is that mu is less than 103.5, I would conclude that mu is larger than 103.5, OK? And that's because I have not been very cautious, OK? So what we want to do is to have a little buffer to account for the fact that xn bar is not a precise value for the true mu. It's something that's about 1 over root n away from mu. And so, what we want is a better heuristic that says, well, if I want to conclude that I'm less than 103.5, maybe I need to be less than 103.5 minus a little buffer that goes to 0 as my sample size goes to infinity. The law of large numbers tells me the buffer should indeed go to 0 as n goes to infinity, and the central limit theorem tells me the rate: 1 over root n, right? So to make this intuition more precise, we need to understand those fluctuations.
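To make the buffer concrete, here is a minimal Python sketch of the resulting one-sided rule. The 5% level and the sample below are made up for illustration; the rule itself is the one just described: reject only when the average falls below 103.5 by a margin of order sigma over root n.

    import numpy as np
    from scipy.stats import norm

    mu0 = 103.5                  # the 2009 reference mean
    sigma = np.sqrt(373)         # variance assumed unchanged since 2009
    alpha = 0.05                 # error level we are willing to tolerate

    x = np.array([101.2, 97.4, 110.3, 99.8, 104.1,
                  95.6, 102.9, 100.4, 98.7, 103.0])   # made-up sample, n = 10
    n = len(x)

    # Reject "mu = 103.5" in favor of "mu < 103.5" only when the average
    # falls below mu0 minus a buffer of order sigma / sqrt(n).
    buffer = norm.ppf(1 - alpha) * sigma / np.sqrt(n)
    print(x.mean(), mu0 - buffer, x.mean() < mu0 - buffer)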
We need to actually put in something that's more precise than these little wiggles here, OK? We need to actually have the central limit theorem come in. So here is the example of comparing two groups. Pharmaceutical companies use hypothesis testing to test if a drug is effective, right? That's what they do. They want to know, does my new drug work? And that's what the Food and Drug Administration is doing on a daily basis. They ask for extremely well regulated clinical trials on a thousand people, and check: does this drug make a difference? Did everybody die? Does it make no difference? Should people pay $200 for a pill of sugar, right? So that's what people are actually asking. Now, to do so-- of course, there is no ground truth here, because there's actually a placebo effect. It's not like giving a drug that does not work is going to have no effect on patients. It will have a small effect, but it's very hard to quantify. We know that it's there, but we don't know what it is. And so rather than saying, oh, the ground truth is no improvement, the ground truth is the placebo effect. And we need to measure what the placebo effect is. So what we're going to do is split our patients into two groups. And there's going to be what's called a test group and a control group. So the word "test" here is used in a different way than in hypothesis testing. So we'll typically just call it the drug group. And so, I will refer to mu drug for this guy, OK? Now, let's say this is a cough syrup, OK? And when you have a cough syrup, the way you measure its efficacy is to measure how many times you cough per hour, OK? And so, I define mu control as the expected number of expectorations per hour for the control group. This is a number I don't know, because I don't have access to the entire population of people that will ever take this cough syrup. So mu control is for the control group-- those are the people who have actually been given just, like, sugar, like maple syrup. And mu drug is for those people who are given the actual syrup, OK? And you can imagine that maybe maple syrup will have an effect on expectorations per hour just because, well, it's sweet and it helps, OK? And so, we don't know what this effect is going to be. We just want to measure if the drug is actually having a better impact on expectorations per hour than just pure maple syrup, OK? So what we want to know is if mu drug is less than mu control. That would be enough. If we had access to all the populations that will ever take the syrup, for all ages, then we would just measure: did this have an impact? And even if it's an ever so small impact, then it's good to release this cough syrup-- assuming that it has no side effects or anything like that-- because it's just better than maple syrup, OK? The problem is that we don't have access to this. And we're going to have to make this decision based on samples that give me imprecise knowledge about mu drug and mu control. So in this case, unlike the first case, where we compared an unknown expected value to a fixed number, the 103.5, here we're comparing two unknown numbers with each other, OK? So there are two sources of randomness: trying to estimate the first one, and trying to estimate the second one. Before I move on, I just wanted to tell you I apologize. One of the graders was not able to finish grading his problem sets for today.
So for those of you who are here just to pick up your homework, feel free to leave now. Even if you have a name tag, I will pretend I did not read it. OK. So I'm sorry. You'll get it on Tuesday. And this will not happen again. OK. So for the clinical trial, now I'm going to collect information. I'm going to collect data from the control group. And I'm going to collect data from the test group, all right? So my control group here. I don't have to collect the same number of people in the control group as in the drug group. Actually, for a cough syrup, maybe it's not that important. But you can imagine that if you think you have the cure to a really nasty disease, it's actually hard to tell half of the people, you will get a pill of nothing, OK? People tend to want to try the drug. They're desperate. And so, you have this sort of imbalance between who is getting the drug and who's not getting the drug. And people have to qualify for the clinical trial. There are lots of fluctuations that affect the final numbers of people who are actually going to get the drug and who are going to get the control. And so, it's not easy for you to make those two numbers equal. You'd like to have those numbers equal if you can, but not necessarily. And by the way, this is all part of some mystical science called "design of experiments." And in particular, you can imagine that if one of the groups had higher variance, you would want more people in that group than in the other group. Yeah? STUDENT: So when we're subtracting [INAUDIBLE] something that [INAUDIBLE] 0 [INAUDIBLE] to be satisfied. So that's on purpose [INAUDIBLE]. PROFESSOR: Yeah, that's on purpose. And I'll come to that in a second, all right? So basically, if your question is, is this true? we're going to make it as hard as possible, but no harder, for you to say yes to this question. Because, well, we'll see why. OK, so now we have two sets of data, the x's and the y's. The x's are the ones for the drug. And the y's are the data that I collected from the people who were just given a placebo, OK? And they're all IID random variables. And here, since it's a number of expectorations, I'm making a blunt modeling assumption. I'm just going to say it's Poisson. And it's characterized only by the mean, mu drug or mu control, OK? I've just made an assumption here. It could be something different. But let's say it's a Poisson distribution. So now what I want is to test whether mu drug is less than mu control. We said that already. But the way we said it before was not as mathematical as it is now. Now we're actually making a test on the parameters of a Poisson distribution, whereas before, we were just making a test on expected values, OK? So the heuristic-- again, let's try to apply the heuristic now. Rather than comparing x bar drug to some fixed number, I'm actually comparing x bar drug to x bar control. But now, I need to have something that accounts not only for the fluctuations of x bar drug, but also for the fluctuations of x bar control, OK? And so, now I need something that goes to 0 when both sample sizes go to infinity. And typically, it should go to 0 like 1 over the square root of n drug and 1 over the square root of n control, OK? That's what the central limit theorems for x bar drug and x bar control-- two central limit theorems-- are actually telling me. OK. And then we can conclude that this happens.
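A hedged sketch of this two-group heuristic in Python (the counts and the 5% level are made up; the one statistical ingredient is the buffer just described, using the fact that a Poisson's variance equals its mean):

    import numpy as np
    from scipy.stats import norm

    x_drug = np.array([3, 5, 2, 4, 6, 3, 2, 5, 4, 3])     # expectorations/hour, drug group
    y_ctrl = np.array([6, 7, 5, 8, 6, 9, 7, 5, 6, 8, 7])  # control (placebo) group

    n_d, n_c = len(x_drug), len(y_ctrl)

    # For a Poisson the variance equals the mean, so the standard error of the
    # difference of the two averages can be estimated from the averages themselves.
    se = np.sqrt(x_drug.mean() / n_d + y_ctrl.mean() / n_c)

    # Conclude "mu_drug < mu_control" only if the difference of averages clears
    # a buffer that shrinks like 1/sqrt(n_d) and 1/sqrt(n_c).
    stat = (x_drug.mean() - y_ctrl.mean()) / se
    print(stat, stat < -norm.ppf(1 - 0.05))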
And as you said, we're trying to make it a bit harder to conclude this. Because, let's face it, suppose we were just using the simple heuristic, right? So I can rewrite "x bar drug less than x bar control minus this something that goes to 0" as "x bar drug minus x bar control less than something negative," OK? This little something, OK? So now let's look at those guys. This is the difference of two random variables. From the central limit theorem, each of them should be approximately Gaussian. And actually, we're going to think of them as being independent. There's no reason why the people in the control group should have any effect on what's happening to the people in the test group. Those people probably don't even know each other. And so, when I look at this difference, it should look like a Gaussian with some mean and some variance-- let's say I don't know what they are, OK? The mean I actually know. It's mu drug minus mu control, OK? So if I were to plot the PDF of this guy, I would have something which is centered at mu drug minus mu control. And it would look like this, OK? Now let's say that mu drug is actually equal to mu control-- that this pharmaceutical company is a huge scam, and they really are trying to sell bottled corn syrup for $200 a pop, OK? So this is a huge scam, and the true difference is actually equal to 0. So this thing is really centered about 0, OK? Now, if we were not to do this, then basically half of the time I would come up with a difference that's above this value, and half of the time I would have something that's below this value-- which would mean that half of the scams would actually go through the FDA if I did not do this. So what I'm trying to do is to say, well, OK. You have to be here, so that there is actually a very low probability that just by chance you end up being here. And we'll make all these statements extremely precise later on. But I think the drug example makes it interesting to see why you're making it hard: because you don't want to allow people to sell a thing like that. Before we go more into the statistical thinking associated with tests, let's just see how we would do this quantification, right? I mean, after all, this is what we are probably the most comfortable with at this point. So let's just try to understand this. And I'm going to use the statistician's favorite test-- the thing that obviously you do at home every time you get a new quarter-- testing whether it's a fair coin or not. All right? So this test, of course, exists only in textbooks. And I actually did not write this slide. I was too lazy to replace all this stuff with the Cherry Blossom Run. So you have a coin. Now you have 80 observations, x1 to x80. So n is equal to 80. I have x1 to xn, IID, Bernoulli p. And I want to know if I have a fair coin. So in mathematical language, I want to know if p is equal to 1/2. Let's say p is just the probability of heads, OK? And a biased coin? Well, maybe you would potentially be interested in whether it's biased in one direction or the other. But not being a fair coin is already somewhat of a discovery, OK? And so, you just want to know whether p is equal to 1/2 or p is not equal to 1/2, OK? Now, if I were to apply the very naive first heuristic, to not reject this hypothesis when I run this thing 80 times I would need to see exactly 40 heads and 40 tails. Now, this is very unlikely to happen exactly. You're going to have close to 40 heads and close to 40 tails, but how close should those things be? OK?
And so, the little something is going to be quantified by exactly this, OK? So now here, let's say that my experiment gave me 54 heads. That's 54? Yeah. Which means that my xn bar is 54 over 80, which is about 0.68. All right? So I have this estimator. Looks pretty large, right? It's much larger than 0.5, so it does look like-- and my mom would certainly conclude-- that this is a biased coin for sure, because she thinks I'm tricky. All right. So the question is, can this be due to chance? Can this be due to chance alone? Like, what is the likelihood that a fair coin would actually end up on heads 54 times rather than 40? OK? And so, what we do is we say, OK, I need to understand, what is the distribution of the number of times it comes up heads? And this is going to be a binomial, but it's a little annoying to play with. So we're going to use the central limit theorem, which tells me that square root of n times (xn bar minus p), divided by the square root of p(1 minus p), is approximately distributed as an N(0,1). And here, since n is equal to 80, I'm pretty safe that this is actually going to work. And I can actually use Slutsky, and put xn bar here. Slutsky tells me that this is OK to do. All right. So now I'm actually going to compute this. So here, I know this. This is square root of 80. This is 0.68. What is this value here? We'll talk about it. Well, we're trying to understand what happens if it is a fair coin, right? So if the coin is fair, then p is equal to 0.5, right? So what I want to know is, what is the likelihood that a fair coin would give me 0.68? If the coin is fair, I'm actually allowed to plug in p equal to 0.5 here. Now, your question is, why do I not plug in p equal to 0.5 in the denominator as well? But you can. All right. I just want to plug in p at one specific spot for now, but you're absolutely right. OK. Let's forget about your question for one second. So now I'm going to look at square root of n times (xn bar minus 0.5), divided by the square root of xn bar (1 minus xn bar). Then this thing is approximately N(0,1) if the coin is fair. Otherwise, I'm going to have a mean which is not zero here. If the coin is something else, what do I get here, right? Let's just write it for one second. Let's do it. So what is the distribution of this if p-- so that was p equal to 0.5, OK? Now if p is equal to 0.6, then this thing is just-- well, I know that this is equal to square root of n times (xn bar minus 0.6), divided by the square root of xn bar (1 minus xn bar), plus-- well, now the difference-- square root of n times (0.6 minus 0.5), divided by the square root of xn bar (1 minus xn bar), right? Now if p is equal to 0.6, then this first guy is N(0,1), but this second guy is something different. It's just a number that depends on square root of n. It's actually pretty large. So if I want to use the fact that this guy has a normal distribution, I need to plug in the true value here. Now, the implicit question that I got was the following. It says, well, if you know what p is, then what's actually true is also this. If p is equal to 0.5, then since I know that square root of n times (xn bar minus p), divided by the square root of p(1 minus p), is some N(0,1), it's also true that square root of n times (xn bar minus 0.5), divided by the square root of 0.5 times (1 minus 0.5), is N(0,1), right? I know what p is. I'm just going to make it appear. OK.
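To verify the arithmetic in this example (nothing assumed beyond the numbers above; the 3.22 value shows up again a bit further down, when a student raises exactly this point):

    import numpy as np

    n = 80
    xbar = 0.68                # 54/80 = 0.675, rounded to 0.68 as on the slide
    # Statistic with xn bar plugged into the variance (the Slutsky version):
    t_xbar = np.sqrt(n) * (xbar - 0.5) / np.sqrt(xbar * (1 - xbar))
    # Statistic with the null value p = 0.5 plugged into the variance:
    t_null = np.sqrt(n) * (xbar - 0.5) / 0.5
    print(round(t_xbar, 2), round(t_null, 2))   # 3.45 and 3.22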
And so, what's actually nice about this particular [INAUDIBLE] experiment is that I can check if my assumption is valid by checking whether-- so what I'm going to do right now is check whether this is likely to be a Gaussian or not, right? And there are two ways I can violate it: by violating the mean, but also by violating the variance. And here, what I did in the first case, I said, well, I'm not allowing you to check whether you've violated the variance. I'm just plugging in whatever variance you're getting. Whereas here, I'm saying, well, there are two ways you can violate it, and I'm just going to factor everything in. So now I can plug in this number. So this is 80. This is 0.68. So I can compute all this stuff. I can compute all this stuff here as well. And what I get in this case, if I plug in the xn bar, is 3.45, OK? And now I claim that this makes it reasonable to reject the hypothesis that p is equal to 0.5. Can somebody tell me why? STUDENT: It's pretty big. PROFESSOR: Yeah, 3 is pretty big. So it's very unlikely. This number that I see should look like the number I would get if I asked a computer to draw one random Gaussian for me. When I draw one random Gaussian, with probability 99.7% this number will be between negative 3 and 3. With 95% it's going to be between negative 2 and 2. And with 68% it's between minus 1 and 1. So getting a 3.45 when you do this is extremely unlikely to happen, which means that you would have to be extremely unlucky for this to ever happen. Now, it can happen, right? It could be the case that you flip 80 coins and 80 of them are heads. With what probability does this happen? 1 over 2 to the 80, right? You're probably better off playing the lottery with those kinds of odds, right? I mean, this is just not going to happen-- but it might happen. So we cannot completely remove the uncertainty, right? It's still possible that this is due to noise. But we're just trying to make all the cases that are very unlikely go away, OK? And so, now I claim that 3.45 is very unlikely for a Gaussian. So if I were to draw the PDF of a standard Gaussian, right? So that's the PDF of N(0,1). 3.45 is basically here, OK? So it's just too far in the tails. Understood? Now, I cannot say that the probability that the Gaussian is equal to 3.45 is small, right? I just cannot say that, because it's 0. And it's also 0 for the probability that it's equal to 0, even though the most likely values are around 0. It's a continuous random variable. Any value you give me is going to happen with probability zero. So what we're going to say is, well, the fluctuations are larger than this number. The probability that I get anything worse than this is actually extremely small, right? Anything worse than this just means farther out than 3.45. And this is going to be what we control. All right? So in this case, I claim that it's quite reasonable to reject the hypothesis. Is everybody OK with this? Does everybody find this shocking? Or does everybody have no idea what's going on? Do you have any questions?
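That tail statement is easy to quantify: the probability that a standard Gaussian lands farther out than 3.45 in absolute value is a one-line check.

    from scipy.stats import norm

    t = 3.45
    # Probability that a standard Gaussian falls farther out than t in absolute value
    print(2 * (1 - norm.cdf(t)))   # about 0.00056, roughly 6 in 10,000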
Yeah? STUDENT: Regarding the choice between p(1 minus p) and xn bar (1 minus xn bar): if you use p(1 minus p) with p as 0.5, then you're dividing by a larger number than you would if you used xn bar. So it feels like our true number is not 3.45. It's something a little bit smaller than 3.45, if the truth were actually 1/2. Because it seems like we're adding an unnecessary extra error by using xn bar. And we're adding an error that makes it seem that our result was less likely than it actually was. PROFESSOR: That's correct. And you're right. I didn't want to plug in the p everywhere, but you should plug it in everywhere you can. That's for sure, OK? So let's agree on that. And it's true that using xn bar makes the number a little bigger. Let's compute how much we would get if we used 0.5 there. Well, I don't know what the square root of 80 is. Can somebody compute quickly? What I want is 2 times the square root of 80 times 0.18. 3.22? OK. I can make the same cartoon picture with 3.22. But you're right. This is definitely more accurate, and I should have done this. I just didn't want the message to get confusing, OK? All right. So now here's a second example that you can think of. So now I toss the coin 30 times. Still in the realm of the central limit theorem. I get 13 heads rather than 15. So I'm actually much closer to being exactly at half. So let's see if this is actually going to give me a plausible value. So I get about 0.43 on average. If the truth was 0.5, I would get something like 0.77 for the statistic. And now I claim that 0.77 is a plausible realization for a standard Gaussian, OK? Now, 0.77 is going to look like it's here. So that could very well be something that just comes about because of randomness. And again, if you think about it: if you were expecting 15 and you saw 13, you're happy to put that on the account of randomness. Now of course, the question is going to be, where do I draw the line? Right? Is 12 the right number? Is 11? Is 10? What is it? So basically, the answer is, it's wherever you want it to be. The problem is that it's hard to think on this scale, right? What does it mean to think on this scale? If I think on this scale, I'm going to have to think on the scale of 80 coin flips one day, and on the scale of 100 coin flips the next. And so, this scale is a moving target all the time. Every time you have a new problem, you have to have a new scale in mind. And it's very difficult. The purpose of statistical analysis-- and in particular of this process that takes your x bar and turns it into something that should be standard Gaussian-- is to map the value of x bar onto a scale that is the standard scale of the Gaussian. All right? Now, all you need to have in mind is, what is a large number, or an unusually large number, for a Gaussian? That's all you need to know. So here, by the way, 0.77 is not this one, because it was actually negative 0.77. So it's this one, OK. So I can be on the right or I can be on the left of zero. But both are still plausible. So the idea is you could have in mind all the values that are plausible for a Gaussian and those that are not plausible, and draw the line based on what you think is the right number. So how large should a positive value of a Gaussian be before it becomes unreasonable for you? Is it 1? Is it 1.5? Is it 2? Stop me when I get there. Is it 2.5? Is it 3? STUDENT: I think 2.5 is definitely too big. PROFESSOR: What? STUDENT: Doesn't it depend on our prior? Let's say we already have really good evidence at this point [INAUDIBLE] PROFESSOR: Yeah, so this is not Bayesian statistics. So there's no such thing as a prior right now. We'll get there. You'll have your moment during one short chapter. So there's no prior here, right? It's really a matter of whether you think a value is large for a Gaussian or not. It's not a matter of coins. It's not a matter of anything. I've just reduced it to one question.
So forget about everything we just said. And I'm asking you, when do you decide that a number is too large to be reasonably drawn from a Gaussian? And this number is 2-- or 1.96. And that's basically the number that you get from this quantile. We've seen the 1.96 before, right? It's actually q alpha over 2, where alpha is equal to 5%. That's a quantile of a Gaussian. So actually, what we do is we map it again. We are now on the Gaussian scale, and then we map it again into probabilities-- the probability of being farther out than this thing. And probabilities, we can think about. A probability is something that quantifies my error. And the question is, what percentage of error am I willing to tolerate? And if I tell you 5%, that's something you can really envision. What it means is that if I were to do this test a million times, 5% of the time I would expose myself to making a mistake. All right. That's all it would say. If you said, well, I don't want to allow for 5%, maybe I want 1%, then you have to move from 1.96 to about 2.58. And then if you say, I want 0.01%, then you have to move to an even larger number. So it depends. But stating this number-- 1%, 5%, 10%-- is much easier than stating those numbers 1.96, 2.58, et cetera. So we're just putting everything back on a scale we can think about. All right.
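The correspondence between levels and thresholds quoted here is just the Gaussian quantile function; for instance (scipy's ppf is the inverse CDF):

    from scipy.stats import norm

    for alpha in [0.10, 0.05, 0.01, 0.0001]:
        # q_{alpha/2}: reject when the statistic exceeds this in absolute value
        print(alpha, round(norm.ppf(1 - alpha / 2), 3))
    # prints 1.645, 1.96, 2.576, and 3.891 respectively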
To conclude: this, again, as we said, does not suggest that the coin is unfair. Now, it might be that the coin is unfair. We just don't have enough evidence to say that. And that goes back to your question about why we're siding with making it harder to conclude that the runners were faster. And this is the same thing. We're making it harder to conclude that the coin is biased. Because there is a status quo, and we're trying to see if we have evidence against the status quo. The status quo for the runners is that they ran the same speed. The status quo for the coin, we can probably all agree, is that the coin is fair. The status quo for a drug? I mean, again, unless you prove to me that you're actually not a scammer, the status quo is that this is maple syrup. There's nothing in there. Why would there be? I mean, if I let you get away with it, you would put corn syrup in there. It's cheaper. OK. So now let's move on to math. All right. So now, to start doing mathematics, I'm going to have to talk about random variables and statistical models. And here, there is actually a very simple thing, which goes back to this picture. A test is really asking me if my parameter is in some region of the parameter set or in another region of the parameter set, right? Yes/no. And so, what I'm going to be given is a sample, x1 to xn. I have a model. And again, those can be braces depending on the day. And so, now I'm going to give myself theta 0 and theta 1, two disjoint subsets. OK. So capital theta here is the space in which my parameter can live. To make two disjoint subsets, I could just split this guy in half, right? I'm going to say, well, maybe it's this guy and this guy. OK. So this is theta 0. And this is theta 1. When I split those two guys, in a test I'm actually going to focus only on theta 0 and theta 1. And so, it means that a priori I've already removed all the possibilities of theta being in this other region. What does it mean? Go back to the example of the runners. This region here, for the Cherry Blossom Run, is the set of parameters where mu is larger than 103.5, right? We removed that. We didn't even consider this possibility. We said either it's less-- sorry. That's mu equal to 103.5. And this was mu less than 103.5, OK? But these guys here were, like, if it happens, it happens. I'm not making any statement about that case. All right? So now I take those two subsets. And now I'm going to give them two different names, because they're going to have an asymmetric role. h0 is the null hypothesis. And h1 is the alternative hypothesis. h0 is the status quo. h1 is what is typically considered the scientific discovery. So if you're a regulator, you're going to push towards h0. If you're a scientist, you're going to push towards h1. If you're a pharmaceutical company, you're going to push towards h1. OK? And so, it depends on whether you want to be conservative. Oh, I can find evidence in a lot of data-- as soon as you give me three data points, I'm going to be able to find evidence. That means I'm going to tend to say, oh, it's h1. But if you say you need a lot of data before you can actually move away from the status quo, that's h0, OK? So think of h0 as being the status quo, and h1 as being some discovery that goes against the status quo. All right? So if we believe that the true theta is in one of those two, what we say is we want to test h0 against h1. OK. This is actual wording. So remember it, because this is how your questions are going to be formulated, and this is how you want to communicate as a statistician. You're going to say, I have the null and I have an alternative. I want to test h0 against h1. I want to test the null hypothesis against the alternative hypothesis, OK? Now, the two hypotheses-- I forgot to say-- are actually this: h0 is that theta belongs to theta 0, and h1 is that theta belongs to theta 1. OK. So here, for example, theta was mu. And h0 was mu equal to 103.5, and h1 was mu less than 103.5, OK? So typically, they're not going to look like thetas and things like that. They're going to look like very simple things, where you take your usual notation for your usual parameter and you just say in mathematical terms what relationship it should satisfy, right? For example, in the drug example, h0 would be mu drug is equal to mu control, and h1 would be mu drug less than mu control. The expected number of expectorations for people who take the cough syrup is less than the expected number of expectorations for people who take the corn syrup, OK? So now, what do we want to do? We've set up our hypothesis testing problem. You're a scientist. You've set up your problem. Now what you're going to do is collect data. And what you're going to try to find in this data is evidence against h0. And the alternative is going to guide you in which direction you should be looking for evidence against this guy. All right? And so, of course, the narrower the alternative, the easier it is for you, because you just have to look at the one possible candidate, right? But typically, h1 is a big group, like "less than." Nobody tells you it's either 103.5 or 103. People tell you it's either 103.5 or less than 103.5. OK. And so, what we want to do is to decide whether we reject h0. So we look for evidence against h0 in the data, OK? So as I said, h0 and h1 do not play a symmetric role. It's very important to know which one you're going to place as h0 and which one you're going to place as h1. If it's a close call, you're always going to side with h0, OK? So you have to be careful about those. Keep in mind that if it's a close call, if the data does not carry a lot of evidence, you're going to side with h0.
And so, you're actually never saying that h0 is true. You're just saying, I did not find evidence against h0. You don't say, I accept h0. You say, I fail to reject h0. OK. And one of the things that you want to keep in mind when you're doing this is "innocent until proven guilty." If you come from a country like America, there's such a thing. And in particular, lack of evidence does not mean that you are not guilty, all right? OJ Simpson was found not guilty. He was not found innocent, OK? And this is basically what happens: the prosecutor brings their evidence, and then the jury has to decide whether they were convinced that this person was guilty of anything. And the question is, do you have enough evidence? But if you don't have evidence, it's not the burden of the defendant to prove that they're innocent. Nobody's proving they're innocent. I mean, sometimes it helps. But you just have to make sure that there's not enough evidence against you, OK? And that's basically what this is doing. You're h0 until proven h1. So how are we going to do this? Well, as I said, the role of estimators in hypothesis testing is played by something called tests. And a test is a statistic. Can somebody remind me what a statistic is? Yep? STUDENT: The measure [INAUDIBLE] PROFESSOR: Yeah, that's actually just one step more. So it's a function of the observations. And we require it to be measurable. And as a rule of thumb, measurable means if I give you data, you can actually compute it, OK? If you don't see a [INAUDIBLE] or an [INAUDIBLE], you don't have to think about it. All right. And so, what we do is we just have this test. But now I'm actually asking from this test only a yes/no answer, which I can code as 0 or 1, right? So as a rule of thumb: if the test is equal to 0, then h0; if the test is equal to 1, then h1. And as we said, if the test is equal to 0, it doesn't mean that h0 is the truth. It means that I fail to reject h0. And if the test is equal to 1, I reject h0. So I have two possibilities. I look at my data. I turn it into a yes/no answer. And the yes/no answer is really h0 or h1-- which one is the most likely, basically. All right. So in the coin flip example, our test statistic is actually something that takes the values 0 and 1. And any function that takes the values 0 and 1 is an indicator function, OK? So an indicator function is just a function. There are many ways you can write it. So it's a 1 with a double bar. If you aren't comfortable with this, it's totally OK to write i of something, like i of a. OK. And what is a? So a, here, is a statement, like an inequality, an equality, some mathematical statement, OK? Or not mathematical. I mean, a can be, you know, "my grandma is 20 years old," OK? And so, this is basically 1 if a is true, and 0 if a is false. That's the way you want to think about it. This function takes only two values, and that's it. So here's the example that we had. We looked at whether the standardized xn bar-- the one that actually is approximately N(0,1)-- was larger than something in absolute value: either very large, or very small but negative. I'm going back to this picture. We wanted to know if this guy was either to the left of something or to the right of something, right? Was it in these regions? Now, this indicator I can view as a function of x bar. What it does is really split the possible values of x bar, which is just a real number, into two groups: the values that lead to a 1, and the values that lead to a 0.
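Written out, such a test is literally an indicator function of the data. A minimal sketch for the fair-coin example (the cutoff c = 1.96 is the 5% choice discussed above; this assumes xn bar lands strictly between 0 and 1):

    import numpy as np

    def psi(x, c=1.96):
        # Test of H0: p = 1/2 from Bernoulli data x; returns 1 (reject) or 0.
        n = len(x)
        xbar = np.mean(x)
        t = np.sqrt(n) * (xbar - 0.5) / np.sqrt(xbar * (1 - xbar))
        return int(abs(t) > c)       # the indicator of the rejection region

    flips = np.array([1] * 54 + [0] * 26)   # the 80 tosses with 54 heads
    print(psi(flips))                        # 1: reject the fair-coin hypothesis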
So I can actually think of it as the real line of possible x bar values. And there are basically some values here where I'm going to get a 1. Maybe I'm going to get a 0 here. Maybe a 0 there. Maybe a 1 there. I'm just splitting all possible values of x bar, and I see whether it spits out a 0 or a 1. In this case, it's not clear, right? I mean, the function is very nonlinear. It's square root of n times (x bar minus 0.5), divided by the square root of x bar (1 minus x bar). If we put the p in the denominator, it would be clear. It would just be exactly something that looks like this. The function would be like this: it would be 1 if x bar is smaller than some value, 0 if it's in between two values, and then 1 again. So that's psi, OK? So this is 1, right? This is 1. And this is 0. So if x bar is too small or if x bar is too large, then I'm getting the value 1. But if it's somewhere in between, I'm getting the value 0. Now, if I have this weird function, it's not clear how this happens. So the picture here that I get is that I have a weird nonlinear function, right? So that's x bar. That's square root of n times (x bar n minus 0.5), divided by the square root of x bar n (1 minus x bar n), right? That's this function. A priori, I have no idea what this function looks like. We could probably analyze this function, but let's pretend we don't know. So it's like some crazy stuff like this. And all I'm asking is whether in absolute value it's larger than c, which means: is this function larger than c or less than minus c? The intervals on which I'm going to say 1 are this guy, this guy, this guy, and this guy. OK. And everywhere else, I'm saying 0. Everybody agree with this? This is what I'm doing. Now of course, it's probably easier for you to just package it into this nice thing, which is larger than c in absolute value. I don't want to have to plot this function, and in practice you don't have to. So here, I actually defined a test for you. And I started this lecture by promising, oh, now we're going to do something better than computing averages-- and now I'm telling you it's just computing an average. But the thing is, the test is not just the specification of this x bar. It's also the specification of this constant c. All right? And the constant c is exactly where our belief about what a large value for a Gaussian is comes in. So this choice of c is basically a threshold: above this threshold, we decide this is unlikely to have come from a Gaussian; below this threshold, we decide that it's likely to have come from a Gaussian. So we have to choose what this threshold is based on what we think "likely" means. Just a little bit more of those things. So now we're going to have to characterize what makes a good test, right? Well, I'll come back to it in a second. But you could have a test that says reject all the time. And that's going to be a bad test, right? The FDA is not implementing a test that says, yes, all drugs work, now let's just go to Aruba, OK? People are trying to have something that gets it right as often as possible. And the FDA is not saying either, let's just say that no drugs work, and let's go to Aruba, all right? They're just trying to say the right thing as often as possible. And so, we're going to have to measure this. So the first thing that is associated with a test is the rejection region.
And if you look at the set of x in E to the n such that psi of x is equal to 1, this is exactly this guy that I drew. So here, I summarized the values of the sample into their average; in general, it's the set of values of the sample that lead the test to say 1. All right? So this is the rejection region. So technically, I have E to the n, which is a big space like this. So that's E to the n. Think of it as being the space where the samples live. And I have a function that takes only the values 0 and 1. So I can decompose the space into the part where it takes the value 0 and the part where it takes the value 1. And those can be super complicated, right? It can be a thing like this. There can be some weird little islands where it takes the value 1, and some islands where it takes the value 0. I can have some weird stuff going on. But I can always partition the space into the region where the function takes the value 0 and the region where it takes the value 1. And the region where it takes the value 1-- where psi is equal to 1-- is called the rejection region of the test, OK? So it's just the samples that would lead me to rejecting. And notice that the test is the indicator of the rejection region. So there are two ways you can make an error with a test. Either the truth is h0, and you say it's h1. Or the truth is h1, and you say it's h0. And that's where we build in the asymmetry between h0 and h1: we control only one of the two errors, and we hope for the best for the second one. So the type 1 error is the one that says, well, if it is actually the status quo, but I claim that there is a discovery-- if it's actually h0, but I claim that I'm in h1-- then I commit a type 1 error. And so, the probability of type 1 error is this function alpha of psi, which is the probability of saying that psi is equal to 1 when theta is in theta 0. Now, the problem is that this is not just a number, because theta is moving all over theta 0, right? There are many values that theta can take, right? So theta is somewhere here. I erased it, OK. All right. For simplicity, we're going to think of theta as being mu, and theta 0 as being the single point 103.5, OK? And so, I know that this is theta 1, and just this point here was theta 0, OK? Agreed? This is the Cherry Blossom Run again. Now, in this case, it's actually easy. I need to compute this function alpha of psi, which maps theta in theta 0 to p theta of psi equals 1. So that's the probability that I reject when theta is in theta 0. And there's only one of them to compute, because theta can only take this one value, which is really 103.5. OK. So that's the probability that I reject when the true mean was 103.5. Now, if h0 were this entire guy here-- all the values larger than 103.5-- then I would have to compute this function for all possible values of theta in there. And guess what? The worst case is when theta is going to be here. Because it's so close to the alternative that that's where I'm making the most error possible. And then there's the type 2 error, which is defined in a symmetric way: the function that maps theta to the probability that psi is equal to 0. So that's the probability of a type 2 error-- the probability that I fail to reject h0, right? If psi is equal to 0, I fail to reject h0, but theta actually came from theta 1, OK? So in this example, to be clear: if I'm here, like if the true mean was 100, I'm looking at the probability that, while the true mean is actually 100, I'm saying it's not less than 103.5. Yeah?
STUDENT: I'm just still confused by the notation. When you say that [INAUDIBLE] theta sub 1 arrow r, I'm not sure what that notation means. PROFESSOR: Well, this just means it's a function that maps theta 1 to r. You've seen functions, right? OK. So that's just the way you write it. So that would mean, say, a function f that goes from r to r and that maps x to x squared. OK. So here, I'm just saying I don't have to consider all possible values. I'm only considering the values in theta 1. I put r, actually. I could restrict myself to the interval 0, 1, because those are probabilities. So it's just telling me where my function comes from and where my function goes to. And beta is a function, right? So beta psi of theta is just the probability that psi is equal to 0, in this case. And I could define that for all thetas. But the only ones that lead to an error are the thetas that are in theta 1. I mean, I can define this function everywhere. It's just not going to correspond to an error, OK? And the power of a test is the smallest-- so the power is basically 1 minus an error. 1 minus the probability of an error. So it's the probability of making a correct decision, OK? It's the probability of making a correct decision under h1-- that's what the power is. But again, this could be a function, because there are many ways theta can be in theta 1 if theta 1 is an entire set of numbers. For example, all the numbers that are less than 103.5. And so, what I'm doing here when I define the power of a test is looking at the smallest possible of those values, OK? So I'm looking at this function. Maybe I should actually expand a little more on this. OK. So beta psi of theta is the probability under theta that psi is equal to 0, right? That's the probability, for theta in theta 1-- which means under the alternative-- that I fail to reject. And I really should have rejected, because theta was actually in theta 1, OK? So this thing here is the probability of type 2 error. Now, this is 1 minus the probability that I did reject, when I should have rejected. That's just the complement: if psi is not equal to 0, then it's equal to 1. So now if I rearrange this, it tells me that the probability that psi is equal to 1 is actually 1 minus beta psi of theta. And that's true for all thetas in theta 1. And what I'm saying is, well, this is now a good thing, right? This number being large is a good thing. It means I should have rejected, and I rejected. I want this to happen with large probability. And so, what I'm going to look at is the most conservative choice of this number, right? Rather than being super optimistic and saying, oh, but indeed if theta was actually equal to zero-- I mean, if mu is equal to 0, everybody runs in 0 minutes, then with high probability I'm actually going to make no mistake-- I should really look at the worst possible case, OK? So the smallest value this can take on theta 1 is called the power of psi-- the power of the test psi, OK? So that's the smallest possible value it can take. All right. So I'm sorry, this is a lot of definitions that you have to let sink in. And it's not super pleasant. But that's what testing is. There's a lot of jargon, but those are actually fairly simple things. Maybe you should just make a sheet for yourself and say, these are the new terms that I learned: test, rejection region, probability of type 1 error, probability of type 2 error, and power.
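In symbols, the definitions just given are (a compact summary, using the notation above):

    \[
    \alpha_\psi : \Theta_0 \to [0,1], \qquad \alpha_\psi(\theta) = \mathbb{P}_\theta[\psi = 1] \quad \text{(probability of type 1 error)},
    \]
    \[
    \beta_\psi : \Theta_1 \to [0,1], \qquad \beta_\psi(\theta) = \mathbb{P}_\theta[\psi = 0] \quad \text{(probability of type 2 error)},
    \]
    \[
    \pi_\psi = \inf_{\theta \in \Theta_1} \left( 1 - \beta_\psi(\theta) \right) \quad \text{(power)}.
    \]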
Just make sure you know what those guys are. Oh, and null and alternative hypothesis, OK? And once you know all these things, you know what I'm talking about. You know what I'm referring to. And this is just jargon. In the end, those are just probabilities. I mean, these are natural quantities. Just, for some reason, people have gotten used to using particular terminology. So just to illustrate: when do I make a type 1 error? And when do I not make a type 1 error? So I make a type 1 error if h0 is true and I reject h0, right? So the off-diagonal blocks are when I make an error. When I'm on the diagonal-- h1 is true and I reject h0, that's a correct decision; h0 is true and I fail to reject h0, that's also the correct decision to make. So I only make errors when I'm in one of the red blocks. And one block is the type 1 error and the other block is the type 2 error. That's all it means, OK? You just have to remember which one we call type 1. I mean, this was chosen in a pretty ad hoc way. So to conclude this lecture, let me ask you a few questions. In a US court, the defendant is found either, let's just say for the sake of discussion, innocent or guilty. All right? It's really guilty or not guilty, but let's say innocent or guilty. When does the jury make a type 1 error? Yep? When they conclude he's guilty, and he's actually innocent, right? The status quo: everybody is innocent until proven guilty. So our h0 is that the person is innocent. And so, we're looking at the probability of type 1 error-- that's when we reject the hypothesis that the person is innocent and conclude that this person is guilty, OK? So a type 1 error is when the person is innocent and we conclude they're guilty. What is the type 2 error? Letting a guilty person go free-- which actually, according to the constitution, is the better of the two. All right? So what we're going to try to do is control the first one, and hope for the best for the second one. How could the jury make sure that they never make a type 1 error? Always let the guy go free, right? What is the effect on the type 2 error? Yeah, it's the worst possible, right? I mean, basically, for every guy that's guilty, you let them go. That's the worst you can do. And same thing, right? How can the jury make sure that there's no type 2 error? Always convict. What is the effect on the American budget? What is the effect on the type 1 error? Right. The effect is that basically the type 1 error is maximized. So there's this inherent trade-off between type 1 and type 2 errors. And that's why we have this sort of multi-objective thing: we're trying to minimize two things at the same time. And I can find many ad hoc ways to do that, right? So if you've taken any optimization, trying to optimize two things when one is going up while the other one is going down, the only thing you can do is make ad hoc heuristics. Maybe you try to minimize the sum of those two guys. Maybe you try to minimize 1/3 of the first guy plus 2/3 of the second guy. Maybe you try to minimize the first guy plus the square of the second guy. You can think of many ways, but none of them is more justified than the others. However, for statistical hypothesis testing, there's one that's very well justified, which is: constrain your type 1 error to be at most at a level that you deem acceptable-- 5%. I want to convict at most 5% of innocent people. That's what I deem reasonable. And subject to that, I'm going to try to convict as many guilty people as I can, all right?
So that's called the Neyman-Pearson paradigm, and we'll talk about it next time. All right. Thank you.
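To make all of this jargon concrete, here is a minimal simulation sketch. Everything in it is illustrative and assumed, not from the lecture: a one-sided test of h0: mu = 0 against h1: mu > 0 from n Gaussian observations with known variance, with the threshold chosen so the type 1 error is 5%, and the power at one alternative estimated by Monte Carlo.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, sigma, alpha = 50, 1.0, 0.05          # sample size, known std dev, level
z = norm.ppf(1 - alpha)                   # rejection threshold for sqrt(n)*xbar/sigma

def test(x):
    """psi = 1 (reject h0) iff the standardized mean exceeds z."""
    return np.sqrt(n) * x.mean() / sigma > z

# Type 1 error: simulate under h0 (mu = 0) and count how often we reject.
type1 = np.mean([test(rng.normal(0.0, sigma, n)) for _ in range(10_000)])

# Power at a specific alternative mu = 0.5: probability of (correctly) rejecting.
power = np.mean([test(rng.normal(0.5, sigma, n)) for _ in range(10_000)])

print(f"estimated type 1 error: {type1:.3f} (target {alpha})")
print(f"estimated power at mu=0.5: {power:.3f}")
# Raising the threshold z drives the type 1 error down but the type 2 error
# (1 - power) up: the inherent trade-off discussed above.
```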
MIT_18650_Statistics_for_Applications_Fall_2016
14_Regression_cont.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: [INAUDIBLE] minus xi transpose t. I just pick whatever notation I want for the variable. And let's say it's t. So that's the least squares estimator. And it turns out that, as I said last time, it's going to be convenient to think of those things as matrices. So here, I already have vectors. I've already gone from one dimension, just real-valued random variables, to random vectors when I think of each xi, but if I start stacking them together, I'm going to have vectors and matrices that show up. So the first vector I'm getting is y, which is just a vector where I have y1 to yn. Then I have-- so that's a boldface vector. Then I have x, which is a matrix where I have-- well, the first coordinate is always 1. So I have 1, and then x1 up to xp minus 1, and that's for observation 1. And then I have the same thing all the way down for observation n. OK, everybody understands what this is? So I'm just basically stacking up all the xi's. So the i-th row is xi transpose. I am just stacking them up. And so if I want to write all these things to be true for each of them, all I need to do is to write a vector epsilon, which is epsilon 1 to epsilon n. And what I'm going to have is that y, the boldface vector, now is equal to the matrix x times the vector beta plus the vector epsilon. And it's really just exactly saying what's there, because-- so this is a vector, right? This is a vector. And what is the dimension of this vector? n, so this is n observations. And for two vectors to be equal, I need to have all the coordinates to be equal, and that's exactly the same thing as saying that this holds for i equal 1 to n. But now, when I have this, I can actually rewrite the sum for i equals 1 to n of yi minus xi transpose beta squared. This turns out to be equal to the Euclidean norm of the vector y minus the matrix x times beta, squared. And I'm going to put a 2 here so we know we're talking about the Euclidean norm. This just means this is the Euclidean norm. That's the one we've seen before when we talked about chi squared-- the square norm is the sum of the squares of the coefficients, and then I take a square root, but here I have an extra square. So it's really just the sum of the squares of the coefficients, which is this. And here are the coefficients. So then, once I write this thing like that, my goal here, now, is going to be to solve the minimum over t in Rp of the norm of y minus x times t, squared. And just like we did for one dimension, we can actually write optimality conditions for this. I mean, this is a function. So this is a function from Rp to R. And if I want to minimize it, all I have to do is to take its gradient and set it equal to 0. So: minimum, set gradient to 0. So that's where it becomes a little complicated. Now I'm going to have to take the gradient of this norm. It might be a little annoying to do. But actually, what's nice about those things-- I mean, I remember that it was a bit annoying to learn. I mean, it's just basically rules of calculus that you don't use that much. But essentially, you can actually expand this norm.
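Here is a minimal NumPy sketch of the vectorized setup just written on the board. The data and dimensions are made up for illustration; the only point is that stacking observations into y and X turns the sum of squares into a squared Euclidean norm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 4                              # n observations, p coefficients (incl. intercept)
raw = rng.normal(size=(n, p - 1))          # raw features x^(1), ..., x^(p-1)
X = np.column_stack([np.ones(n), raw])     # i-th row is x_i^T = (1, x_i^(1), ..., x_i^(p-1))

beta_true = np.array([2.0, 1.0, -0.5, 0.3])
eps = rng.normal(scale=0.5, size=n)        # noise vector epsilon
y = X @ beta_true + eps                    # the model y = X beta + epsilon, all at once

t = rng.normal(size=p)                     # any candidate coefficient vector t
sum_of_squares = np.sum((y - X @ t) ** 2)  # sum_i (y_i - x_i^T t)^2
norm_form = np.linalg.norm(y - X @ t) ** 2 # ||y - X t||_2^2
assert np.isclose(sum_of_squares, norm_form)
```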
And you will see that the rules are basically the same as in one dimension, you just have to be careful about the fact that matrices do not commute. So let's expand this thing. y minus xt squared-- well, this is equal to the norm of y squared plus the norm of xt squared plus 2 times y transpose xt. That's just expanding the square in more dimensions. And this, I'm actually going to write as y squared plus-- so here, for the norm squared of this guy, I always have that the norm of x squared is equal to x transpose x. So I'm going to write this as x transpose x, so it's t transpose x transpose xt, plus 2 times y transpose xt. So now, if I'm going to take the gradient with respect to t, I have basically three terms, and each of them has some sort of a different nature. This term is linear in t, and it's going to differentiate the same way that I differentiate a times x. I'm just going to keep the a. This guy is quadratic. t appears twice. And this guy, I'm going to pick up a 2, and it's going to differentiate just like when I differentiate a times x squared. It's 2 times ax. And this guy is a constant with respect to t, so it's going to differentiate to 0. So when I compute the gradient-- now, of course, all of these rules that I give you, you can check by looking at the partial derivative with respect to each coordinate. But arguably, it's much faster to know the rules of differentiation. It's like if I gave you the function exponential x and I said, what is the derivative, and you started writing, well, I'm going to write exponential x plus h minus exponential x, divided by h, and let h go to 0. That's a bit painful. AUDIENCE: Why did you transpose your-- why does x have to be [INAUDIBLE]? PHILIPPE RIGOLLET: I'm sorry? AUDIENCE: I was wondering why you have t times the [INAUDIBLE]? PHILIPPE RIGOLLET: The transpose of ab is b transpose a transpose. If you're not sure about this, just make a and b have different sizes, and then you will see that there's some incompatibility. I mean, there's basically only one way to not screw that one up, so that's easy to remember. So if I take the gradient, then it's going to be equal to what? It's going to be 0 plus-- we said here, this is going to differentiate like-- so think a times x squared. So I'm going to have 2ax. So here, basically, this guy is going to go to 2 times x transpose xt. Now, I could have made this one go away, but that's the same thing as saying that my gradient is-- I can think of my gradient as being either a horizontal vector or a vertical vector. So if I remove this guy, I'm thinking of my gradient as being horizontal. If I remove that guy, I'm thinking of my gradient as being vertical. And that's what I want to think of, typically-- vertical vectors, column vectors. And then this guy, well, for these guys just think a times x. So the derivative is just a, so I'm going to keep only that part here. Sorry, I forgot a minus somewhere-- yeah, here. Minus 2y transpose x. And what I want is this thing to be equal to 0. So t, the optimal t, is called beta hat and satisfies-- well, I can cancel the 2's and put the minus on the other side, and what I get is that x transpose xt is equal to y transpose x. Yeah, that's not working for me. Yeah, that's because when I took the derivative, I still need to make sure-- so it's the same question of whether I want things to be columns or rows. So this is not a column. If I remove that guy, y transpose x is a row.
So I'm just going to take the transpose of this guy to make things work, and this is just going to be x transpose y. And this guy is x transpose y, so that I have columns. So this is just a linear equation in t. And I have to solve it, so it's of the form some matrix times t is equal to another vector. And so that's basically a linear system. And the way to solve it, at least formally, is to just take the inverse of the matrix on the left. So if x transpose x is invertible, then-- sorry, beta hat is the t I want-- I get that beta hat is equal to x transpose x inverse x transpose y. And that's the least squares estimator. So here, I use this condition. I want it to be invertible so I can actually write its inverse. Here, I wrote, rank of x is equal to p. What is the difference? Well, there's basically no difference. Basically, here, I have to assume-- what is the size of the matrix x transpose x? [INTERPOSING VOICES] PHILIPPE RIGOLLET: Yeah, so what is the size? AUDIENCE: p by p. PHILIPPE RIGOLLET: p by p. So this matrix is invertible if it's of rank p, if you know what rank means. If you don't, just know that rank p means that it's invertible. So it's full rank and it's invertible. And the rank of x transpose x is actually just the rank of x, because this is the same matrix that you apply twice. And that's all it's saying. So if you're not comfortable with the notion of rank that you see here, just think of this condition as being the condition that x transpose x is invertible. And that's all it says. What it means for it to be invertible-- so this was true. We made no assumption up to this point. If x transpose x is not invertible, it means that there might be multiple solutions to this equation. In particular, for a matrix to not be invertible, it means that there's some vector v. So if x transpose x is not invertible, then this is equivalent to: there exists a vector v, which is not 0, such that x transpose xv is equal to 0. That's what it means to not be invertible. So in particular, if beta hat is a solution-- so this equation is sometimes called the score equations, because the gradient is called the score, and so you're just checking if the gradient is equal to 0. So if beta hat satisfies star, then so does beta hat plus lambda v for all lambda in the real line. And the reason is because, well, if I start looking at-- what is x transpose x times beta hat plus lambda v? Well, by linearity, this is just x transpose x beta hat plus lambda x transpose x times v. But this guy is what? It's 0, just because that's what we assumed. We assumed that x transpose xv was equal to 0, so we're left only with this part, which, by star, is just x transpose y. So that means that x transpose x times beta hat plus lambda v is actually equal to x transpose y, which means that there's another solution, which is not just beta hat, but any move of beta hat along this direction v by any size. So that's going to be an issue, because you're looking for one estimator. And there's not just one, in this case, there's many. And so this is not going to be well-defined and you're going to have some issues. So if you want to talk about the least squares estimator, you have to make this assumption. What does it imply? Can I think of p being, say, 2n, for example, in this case? What happens if p is equal to 2n? AUDIENCE: Well, then the rank of your matrix is only p/2. PHILIPPE RIGOLLET: So the rank of your matrix is only p/2, so that means that this is actually not going to happen. I mean, it's not only p/2, it's at most p/2.
It's at most the smallest of the two dimensions of your matrix. So if your matrix is n times 2n, it's at most n, which means that it's not going to be full rank, so it's not going to be invertible. So every time the dimension p is larger than the sample size, your matrix is not invertible, and you cannot talk about the least squares estimator. So that's something to keep in mind. And it's actually a very simple thing. It's essentially saying, well, if p is larger than n, it means that you have more parameters to estimate than you have equations to estimate them. So you have this linear system. There's one equation per observation. Each row, which was each observation, was giving me one equation. But then the number of unknowns in this linear system is p, and so I cannot solve linear systems that have more unknowns than they have equations. And so that's basically what's happening. Now, in practice, if you think about what data sets look like these days: for example, people are trying to explain some phenotype. A phenotype is something you can measure on people-- maybe the color of your eyes, or your height, or whether you have diabetes or not, things like this, so things that are macroscopic. And then they want to use the genotype to do that. They want to sequence your genome and try to use this to predict whether you're going to be responsive to a drug or whether your eyes are going to be blue, or something like this. Now, the data sets that you can have-- for a given study about some sort of disease, maybe you will sequence the genome of maybe 100 people. n is equal to 100. p is basically the number of genes they're sequencing. This is of the order of 100,000. So you can imagine that this is a case where n is much, much smaller than p, and you cannot talk about the least squares estimator. There's plenty of them. There's not just one line like that, lambda times v, that you can move along. There's basically an entire space in which you can move, and so it's not well-defined. So at the end of this class, I will give you a short introduction on how you do this. This actually happens more and more. It becomes a more and more preponderant part of the data sets you have to deal with, because people just collect data. When I do the sequencing, the machine allows me to sequence 100,000 genes. I'm not going to stop at 100 because doctors are never going to have cohorts of more than 100 patients. So you just collect everything you can collect. And this is true for everything. Cars have sensors all over the place, and they gather much more data than they actually use. We're creating, we're recording everything we can. And so we need some new techniques for that, and that's what high-dimensional statistics is trying to answer. So this is way beyond the scope of this class, but towards the end, I will give you some hints about what can be done in this framework because, well, this is the new reality we have to deal with. So here, we're in a case where p is less than n and typically much smaller than n. So the kind of orders of magnitude you want to have is maybe p's of order 10 and n's of order 100, something like this. So you can scale that, maybe 10 times larger. So maybe you cannot solve for beta hat, but actually, you can talk about x times beta hat, even if p is larger than n. And the reason is that x times beta hat is actually something that's very well-defined. So what is x times beta hat? Remember, I started with the model.
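Putting the pieces so far together numerically -- this is a hedged sketch with invented data, not the lecture's example. First the normal equations X^T X beta_hat = X^T y are solved in the nice case rank(X) = p; then, with p > n, the matrix X^T X is seen to be rank deficient, so no unique least squares estimator exists.

```python
import numpy as np

rng = np.random.default_rng(2)

# Case 1: n > p and X has full column rank, so X^T X is invertible.
n, p = 100, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([2.0, 1.0, -0.5, 0.3]) + rng.normal(scale=0.5, size=n)

# Solve the normal equations X^T X beta = X^T y.
# (np.linalg.solve is preferred over forming the inverse explicitly.)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(np.linalg.matrix_rank(X))            # p: unique least squares solution
print(beta_hat)                            # close to the true coefficients

# Case 2: p > n, so rank(X) <= n < p and the p-by-p matrix X^T X is singular.
n2, p2 = 5, 10
X2 = rng.normal(size=(n2, p2))
print(np.linalg.matrix_rank(X2.T @ X2))    # at most n2 = 5 < p2 = 10
# Any v in the null space of X^T X generates infinitely many solutions
# beta_hat + lambda * v, exactly as argued above.
```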
So if I look at this definition, essentially, what I had as the original thing was that the vector y was equal to x times beta plus the vector epsilon. That was my model. So beta is actually giving me something. Beta is actually some parameter, some coefficients that are interesting. But a good estimator for-- so here, it means that the observations that I have are of the form x times beta plus some noise. So if I want to get rid of the noise, remove the noise, a good candidate to denoise is x times beta hat. x times beta hat is something that should actually be useful to me, which should be close to x times beta. So in the one-dimensional case, what it means is that if I have-- let's say this is the true line, and these are my x's, so I have-- these are the true points on the real line, and then I have my little epsilons that just give me my observations that move around this line. So this is one of the epsilons, say epsilon i. Then I can actually either talk-- to say that I recovered the line, I can actually talk about recovering the right intercept or recovering the right slope for this line. Those are the two parameters that I need to recover. But I can also say that I've actually found a set of points that's closer to being on the line, that are closer to this set of points right here than the original crosses that I observed. So if we go back to the picture here, for example, what I could do is say, well, for this point here-- there was an x here-- rather than looking at this dot, which was my observation, I can say, well, now that I've estimated the red line, I can actually just say, well, this point should really be here. And actually, I can move all these dots so that they're actually on the red line. And this should be a better value, something that has less noise than the original y value that I did see. It should be close to the true value that I should be seeing without the extra noise. So that's definitely something that could be of interest. For example, in imaging, you're not trying to understand-- so when you do imaging, y is basically an image. So think of a pixel image, and you just stack it into one long vector. And what you see is something that should look like some linear combination of some feature vectors, maybe. So people have created a bunch of features. They're called, for example, Gabor frames or wavelet transforms-- so just well-known libraries of variables x such that when you take linear combinations of those guys, it should look like a bunch of images. And what you want for your image-- you don't care what the coefficients of the image are in these bases that you came up with. What you care about is the noise in the image. And so you really want to get x beta. So if you want to estimate x beta, well, you can use x beta hat. What is x beta hat? Well, since beta hat is x transpose x inverse x transpose y, this is x times x transpose x inverse x transpose y. That's my estimator for x beta. Now, this thing, actually, I can define even if x is not full rank. So why is this thing interesting? Well, there's a formula for this estimator, but actually, I can visualize what this thing is. So let's assume, for the sake of illustration, that n is equal to 3. So that means that y lives in a three-dimensional space. And so let's say it's here. And so I have my, let's say, y's here. And I also have a plane that's given by the vectors x1 transpose, x2 transpose, which is, by the way, 1-- sorry, that's not what I want to do. I'm going to say that n is equal to 3 and that p is equal to 2.
So I basically have two vectors: 1, 1, 1 and another one, let's assume that it's, for example, abc. So those are my two vectors. This is x1, and this is x2. And those are my three observations for this guy. So what I want, when I minimize this, is: I'm looking at the point which can be formed as a linear combination of the columns of x, and I'm trying to find the guy that's the closest to y. So what does it look like? Well, the first of the two points, 1, 1, 1, is going to be, say, here. That's the point 1, 1, 1. And let's say that abc is this point. So now I have a line that goes through those two guys. That's not really-- let's say it's going through those two guys. And this is the line which can be formed by looking only at linear combinations. So this is the line of x times t for t in R2. That's this entire line that you can get. Why is it-- yeah, sorry, it's not just a line, I also have to take all the multiples, the whole thing through 0. So that actually creates an entire plane, which is going to be really hard for me to represent. I don't know. I mean, maybe I shouldn't do it in these dimensions. So I'm going to do it like that. So this plane here is the set of xt for t in R2. So that's a two-dimensional plane, it definitely goes through 0, and those are all these things. So think of a sheet of paper in three dimensions. Those are the things I can get. So now, what I'm going to have as y is not necessarily in this plane. y is actually something in this plane, x beta, plus some epsilon. y is x beta plus epsilon. So I start from this plane, and then I have this epsilon that pushes me, maybe, outside of this plane. And what least squares is doing is saying, well, I know that epsilon should be fairly small, so the only thing I'm going to be doing that actually makes sense is to take y and find the point that's on this plane that's the closest to it. And that corresponds to doing an orthogonal projection of y onto this thing, and that's actually exactly x beta hat. So in one dimension, just because this is actually a little hard-- in one dimension, so that's if p is equal to 1. So let's say this is my point. And then I have y, which is in two dimensions, so this is all on the plane. What it does, this is my-- the point that's right here is actually x beta hat. That's how you find x beta hat. You take your point y and you project it on the linear span of the columns of x. And that's x beta hat. This does not tell you exactly what beta should be. And if you know a little bit of linear algebra, it's pretty clear, because if you want to find beta hat, that means that you should be able to find the coordinates of a point in the system of columns of x. And if those guys are redundant, there are not going to be unique coordinates for these guys, so that's why it's actually not easy to find. But x beta hat is uniquely defined. It's a projection. Yeah? AUDIENCE: And epsilon is the distance between the y and the-- PHILIPPE RIGOLLET: No, epsilon is the vector that goes from-- so there's a true x beta. That's the true one. It's not clear. I mean, x beta hat is unlikely to be exactly equal to x beta. And then the epsilon is the one that starts from this line. It's the vector that pushes you away. So really, this is this vector. That's epsilon. So it's not a length. The length of epsilon is the distance, but epsilon is just the actual vector that takes you from one to the other. So this is all in two dimensions, and it's probably much clearer than what's here.
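The projection picture can be checked numerically. A hedged sketch (invented data): the matrix H = X(X^T X)^{-1} X^T -- the operator discussed next -- is symmetric and idempotent, its eigenvalues are p ones and n - p zeros, X beta_hat is exactly H y, and Pythagoras holds at the right angle of the projection.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T       # projection onto the column span of X
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

assert np.allclose(H @ y, X @ beta_hat)    # X beta_hat is the projection of y
assert np.allclose(H, H.T)                 # symmetric (orthogonal projection)
assert np.allclose(H @ H, H)               # idempotent: projecting twice = once

eig = np.linalg.eigvalsh(H)                # eigenvalues, sorted ascending
assert np.allclose(eig[-p:], 1) and np.allclose(eig[:-p], 0)

# Pythagoras: ||y||^2 = ||H y||^2 + ||y - H y||^2.
lhs = np.linalg.norm(y) ** 2
rhs = np.linalg.norm(H @ y) ** 2 + np.linalg.norm(y - H @ y) ** 2
assert np.isclose(lhs, rhs)
```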
And so here, I claim that this x beta hat-- so from this picture, I implicitly claim that forming this operator that takes y and maps it into this vector x times x transpose x inverse x transpose y, this should actually be equal to the projection of y onto the linear span of the columns of x. That's what I just drew for you. And what it means is that this matrix must be a projection matrix. So of course, anybody-- who knows linear algebra here? OK, wow. So what are the conditions that a projection matrix should be satisfying? AUDIENCE: Squares to itself. PHILIPPE RIGOLLET: Squares to itself, right. If I project twice, I'm not moving. If I keep on iterating the projection, once I'm in the space I'm projecting onto, I'm not moving. What else? Does it have to be symmetric, maybe? AUDIENCE: If it's an orthogonal projection. PHILIPPE RIGOLLET: Yeah, so this is an orthogonal projection. It has to be symmetric. And that's pretty much it. So from those things, you can actually get quite a bit of things. But what's interesting is that if you actually look at the eigenvalues of this matrix, they should be either 0 or 1, essentially. And they are 1 if the associated eigenvector is within this space, and 0 otherwise. And so that's basically what you can check. This is not an exercise in linear algebra, so I'm not going to go too much into those details. But this is essentially what you want to keep in mind. What's associated to orthogonal projections is Pythagoras' theorem. And that's something that's going to be useful for us. What it's essentially telling me is that if I look at this norm squared-- sorry, this norm squared plus this norm squared is equal to this norm squared. And that one is the norm of y squared. So Pythagoras tells me that the norm of y squared is equal to the norm of x beta hat squared plus the norm of y minus x beta hat squared. Agreed? It's just because I have a right angle here. So that's: this plus this is equal to this. So now, to define this, I made no assumption. Epsilon could be as wild as it wants. I was just crossing my fingers that epsilon was actually small enough that it would make sense to project onto the linear span, because I implicitly assumed that epsilon did not take me all the way there, so that actually, it makes sense to project back. And so for that, I need to somehow make assumptions that epsilon is well-behaved, that it's not completely wild, that it's moving uniformly in all directions of the space. There's no privileged direction where it's always going, otherwise, I'm going to make a systematic error. And I need that those epsilons are going to average out somehow. So here are the assumptions we're going to be making so that we can actually do some statistical inference. The first one is that the design matrix is deterministic. So I started by saying-- I have xi, yi, and maybe they're independent. Here, they are, but the xi's, I want to think of as deterministic. If they're not deterministic, I can condition on them, but otherwise, it's very difficult to think about this thing if I think of those entries as being random, because then I have the inverse of a random matrix, and things become very, very complicated. So we're going to think of those guys as being deterministic. We're going to think of the model as being homoscedastic. And actually, let me come back to this in a second. Homoscedastic-- well, I mean, if you're trying to find the etymology of this word, "homo" means the same, "scedastic" means scaling.
So what I want to say is that the epsilons have the same scaling. And since my third assumption is that epsilon is Gaussian, then essentially, what I'm going to want is that they all share the same sigma squared. They're independent, so the covariance matrix is sigma squared times the identity. And I want them to be centered, as well. That means that there's no direction that I'm always privileging when I'm moving away from my plane there. So these are important conditions. It depends on how much inference you want to do. If you want to write t-tests, you need all these assumptions. But if you only want to write, for example, the fact that your least squares estimator is consistent, you really just need the fact that epsilon has variance sigma squared. The fact that it's Gaussian won't matter, just like Gaussianity doesn't matter for the law of large numbers. Yeah? AUDIENCE: So the first assumption is that x has to be deterministic, but I just made up this x1, x2-- PHILIPPE RIGOLLET: x is the matrix. AUDIENCE: Yeah. So those are random variables, right? PHILIPPE RIGOLLET: No, that's the assumption. AUDIENCE: OK. So I mean, once we collect the data and put it in the matrix, it becomes deterministic. So maybe I'm missing something. PHILIPPE RIGOLLET: Yeah. So this is for the purpose of the analysis. I can actually assume that-- I look at my data, and I think of it like this. So what is the difference between thinking of data as deterministic or thinking of it as random? When I talked about random data, the only assumptions that I made were about the distribution. I said, well, if my x is a random variable, I want it to have this variance and I want it to have, maybe, this distribution, things like this. Here, I'm actually making an assumption on the values that I see. I'm saying that the value that you give me is-- that the matrix is actually invertible, that x transpose x will be invertible. I've never done that before, assuming that some random variable-- assuming that some Gaussian random variable was positive, for example. We don't do that, because there's always some probability that things don't happen if you make things at random. And so here, I'm just going to say, OK, forget about it-- here, it's basically a little stronger. I start my assumption by saying, the data that's given to me will actually satisfy those assumptions. And that means that I don't actually need to make some modeling assumption on this thing, because I'm actually putting directly the assumption I want to see. So here, either I know sigma squared or I don't know sigma squared. So is that clear? So essentially, I'm assuming that I have this model, where this guy, now, is deterministic, and this is some multivariate Gaussian with mean 0 and covariance matrix sigma squared times the identity of Rn. That's the model I'm assuming. And I'm observing this, and I'm given this matrix x. Where does this make sense? You could say, well, if I think of my rows as being people and I'm collecting genes, it's a little intense to assume that I actually know, ahead of time, what I'm going to be seeing, and that those things are deterministic. That's true, but it still does not prevent the analysis from going through, for one. And second, a better example might be this imaging example that I described, where those x's are actually libraries. Those are libraries of patterns that people have created, maybe from deep learning nets, or something like this. But they've created patterns, and they say that all images should be representable as a linear combination of those patterns.
And those patterns are somewhere in books, so they're certainly deterministic. Everything that's actually written down in a book is as deterministic as it gets. Any questions about those assumptions? Those are the things we're going to be working with. There's only three of them. One is about x. Actually, there's really two of them. I mean, this guy already appears here. So there's two-- one on the noise, one on the x's. That's it. Those things allow us to do quite a bit. They will allow us to-- well, that's actually-- they allow me to write the distribution of beta hat, which is great, because when I know the distribution of my estimator, I know its fluctuations. If it's centered around the true parameter, I know that it's going to be fluctuating around the true parameter. And it should tell me what kind of distribution the fluctuations have. I actually know how to build confidence intervals. I know how to build tests. I know how to build everything. It's just like when I told you that asymptotically, the empirical average was Gaussian with mean theta and a standard deviation that depended on n, et cetera-- that's basically the only thing I needed. And this is what I'm actually getting here. So let me start with this statement. So remember, beta hat satisfied this, so I'm going to rewrite it here. So beta hat was equal to x transpose x inverse x transpose y. That was the definition that we found. And now, I also know that y was equal to x beta plus epsilon. So let me just replace y by x beta plus epsilon here. Yeah? AUDIENCE: Isn't it x transpose x inverse x transpose y? PHILIPPE RIGOLLET: Yes, x transpose. Thank you. So I'm going to replace y by x beta plus epsilon. So that's-- and here comes the magic. I have an inverse of a matrix, and then I have the original matrix. So this is actually the identity, times beta. And now this guy, well, this is a Gaussian, because this is a Gaussian random vector, and I just multiply it by a deterministic matrix. So we're going to use the rule that if I have, say, epsilon, which is N 0 sigma, then b times epsilon is N 0-- can somebody tell me what the covariance matrix of b epsilon is? AUDIENCE: What is capital B in this case? PHILIPPE RIGOLLET: It's just a matrix. And for any matrix, I mean any matrix that I can premultiply-- that I can postmultiply with epsilon. Yeah? AUDIENCE: b transpose b. PHILIPPE RIGOLLET: b transpose? AUDIENCE: Times b. PHILIPPE RIGOLLET: And sigma is gone. AUDIENCE: Oh, times sigma, sorry. PHILIPPE RIGOLLET: That's the matrix, right? AUDIENCE: b transpose sigma b. PHILIPPE RIGOLLET: Almost. Anybody want to take a guess at the last one? I think we've removed all other possibilities. It's b sigma b transpose. So if you ever answered yes to the question, do you know Gaussian random vectors, but you did not know that, there's a gap in your knowledge that you need to fill, because that's probably the most important property of Gaussian vectors. When you multiply them by matrices, you have a simple rule on how to update the covariance matrix. So here, sigma is the identity. And here, this is the matrix b that I had here. So what this is, basically, is some multivariate Gaussian, of course. Then I'm going to have mean 0. And so what I need to do is b times the identity times b transpose, which is just b b transpose. And what is it going to tell me? It's x transpose x-- sorry, that's inverse-- inverse x transpose, and then the transpose of this guy, which is x times x transpose x inverse, transposed.
But this matrix is symmetric, so I'm actually not going to take the transpose of this guy. And again, magic shows up. The inverse times the matrix-- those two guys cancel, and so this is actually equal to beta plus some N 0, x transpose x inverse. Yeah? AUDIENCE: I'm a little lost on the [INAUDIBLE]. So you define that as the b matrix, and what happens? PHILIPPE RIGOLLET: So I just apply this rule, right? AUDIENCE: Yeah. PHILIPPE RIGOLLET: So if I multiply a matrix by a Gaussian-- then let's say this Gaussian had mean 0, which is the case of epsilon here-- then the covariance matrix that I get is b times the original covariance matrix times b transpose. So all I did is write this matrix, times the identity, times this matrix transposed. And the identity, of course, doesn't play any role, so I can remove it. It's just this matrix, then the matrix transposed. And what happened? So what is the transpose of this matrix? So I used the fact that if I look at x transpose x inverse x transpose, and now I look at the transpose of this whole thing, that's actually equal to-- and I use the rule that ab transpose is b transpose a transpose-- let me finish-- it's x times x transpose x inverse. Yes? AUDIENCE: I thought, for epsilon, it was sigma squared. PHILIPPE RIGOLLET: Oh, thank you. There's a sigma squared somewhere. So this was sigma squared times the identity, so I can just pick up a sigma squared anywhere. So here, in our case, for epsilon, this is sigma: sigma squared times the identity, that's my covariance matrix. You seem perplexed. AUDIENCE: It's just a new idea for me to think of a maximum likelihood estimator as a random variable. PHILIPPE RIGOLLET: Oh, it should not be. Any estimator is a random variable. AUDIENCE: Oh, yeah, that's a good point. PHILIPPE RIGOLLET: [LAUGHS] And I have not told you that this was the maximum likelihood estimator just yet. The estimator is a random variable. There's a word-- some people use "estimate" just to differentiate: the estimator while you're doing the analysis with random variables, and the estimate for the values when you plug in the numbers. But then, of course, people use "estimate" because it's shorter, so then it's confusing. So any questions about this computation? Did I forget any other Greek letter along the way? All right, I think we're good. So one thing that it says-- and actually, thank you for pointing this out-- I said there's actually a little hidden statement there. By the way, this answers this question. Beta hat is of the form beta plus something that's centered, so it's indeed of the form Gaussian with mean beta and covariance matrix sigma squared x transpose x inverse. So that's very nice. As long as x transpose x is not huge, I'm going to have something that is close to what I want. Oh, sorry-- as long as x transpose x inverse is not huge. So there's a hidden claim in there, which is that the least squares estimator is equal to the maximum likelihood estimator. Why does the maximum likelihood estimator just enter the picture now? We've been talking about regression for the past 18 slides. And we've been talking about estimators. And I just dumped on you the least squares estimator, but I never really came back to the things that we know-- maybe the method of moments, or maybe the maximum likelihood estimator. It turns out that those two things are the same. But if I want to talk about a maximum likelihood estimator, I need to have a likelihood. In particular, I need to have a density.
And so if I want a density, I have to make those assumptions, such as: the epsilons have this Gaussian distribution. So why is this the maximum likelihood estimator? Well, remember, y is x beta plus epsilon. So I actually have a bunch of data. So what is my model here? Well, it's the family of Gaussians on n observations with mean x beta, variance sigma squared times the identity, and beta lives in Rp. Here's my family of distributions. That's the possible distributions for y. And so in particular, I can write the density of y. Well, what is it? It's something that looks like p of x-- well, p of y, let's say-- is equal to 1 over-- so now it's going to be a little more complicated, but it's sigma squared times 2 pi, to the p/2, times the exponential of minus norm of y minus x beta squared, divided by 2 sigma squared. So that's just the multivariate Gaussian density. I just wrote it. That's the density of a multivariate Gaussian with mean x beta and covariance matrix sigma squared times the identity. That's what it is. So you don't have to learn this by heart, but if you are familiar with the case where the dimension is equal to 1, you can check that you recover what you're familiar with, and this makes sense as an extension. So now, I can actually write my log likelihood. How many observations do I have of this vector y? Do I have n observations of y? I have just one, right? Oh, sorry, I shouldn't have said p, this is n. Everything is in dimension n. So I can think of either having n independent observations of each coordinate, or I can think of having just one observation of the vector y. So when I write my log likelihood, it's just the log of the density at y. And that's the vector y, which I can write as minus n/2 log of sigma squared 2 pi, minus 1 over 2 sigma squared, times norm of y minus x beta squared. And that's, again, my boldface y. And what is my maximum likelihood estimator? Well, this guy does not depend on beta. And this is just a constant factor in front of this guy. So it's the same thing as just minimizing, because I have a minus sign, over all beta in Rp, the norm of y minus x beta squared. And that's my least squares estimator. Is there anything that's unclear on this board? Any question? So all I used was-- so I wrote my log likelihood, which is just the log of this expression where y is my observation. And that's indeed the observation that I have here. And that was just some constant minus some constant times this quantity that depends on beta. So maximizing this whole thing is the same thing as minimizing only this thing. The minimizers are the same. And so that tells me that I actually just have to minimize the squared norm to get my maximum likelihood estimator. But this used, heavily, the fact that I could actually write exactly what my density was, and that when I took the log of this thing, I had exactly the square norm that showed up. If I had a different density-- if, for example, I assumed that my coordinates of epsilon were, say, iid double exponential random variables, so it's half of an exponential on the positives and half of an exponential on the negatives-- if I said that, then this would not have the square norm that shows up. This is really idiosyncratic to Gaussians. If I had something else, I would have, maybe, a different norm here, or something different that measures the difference between y and x beta.
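For reference, here is the board computation just described, written out in display form (this is only a transcription of the lecture's own argument into LaTeX):

```latex
% Gaussian model: y = X\beta + \varepsilon, \ \varepsilon \sim \mathcal{N}_n(0, \sigma^2 I_n)
\[
p_\beta(y) \;=\; \frac{1}{(2\pi\sigma^2)^{n/2}}
\exp\!\Big(-\frac{\|y - X\beta\|_2^2}{2\sigma^2}\Big),
\]
\[
\ell(\beta) \;=\; \log p_\beta(y)
\;=\; -\frac{n}{2}\log(2\pi\sigma^2) \;-\; \frac{1}{2\sigma^2}\,\|y - X\beta\|_2^2,
\]
\[
\hat\beta^{\mathrm{MLE}}
\;=\; \operatorname*{arg\,max}_{\beta \in \mathbb{R}^p} \ell(\beta)
\;=\; \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \|y - X\beta\|_2^2
\;=\; \hat\beta^{\mathrm{LS}} .
\]
```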
And that's how you come up with other maximum likelihood estimators, ones that are not the least squares-- maybe the least absolute deviation, for example, or this fourth moment, for example, that you suggested last time. So I can come up with a bunch of different things, and they might be tied together-- maybe I can come at them from the same perspective that I came at the least squares estimator from. I said, let's just do something smart, and check, then, that it's indeed the maximum likelihood estimator. Or I could just start with the modeling, and check, then, what happens-- what was the implicit assumption that I put on my noise. Or I could start with the assumption on the noise, compute the maximum likelihood estimator, and see what it turns into. So that was the first thing. I've just proved to you the first line. And from there, you can get what you want. So all the other lines are going to follow. So what is beta hat-- so for example, let's look at the second line, the quadratic risk. Beta hat minus beta, from this formula, has a distribution which is some N 0, and then I have x transpose x inverse. AUDIENCE: Wouldn't the dimension be p on the board? PHILIPPE RIGOLLET: Sorry, the dimension of what? AUDIENCE: Oh, beta hat minus beta. Isn't beta only p dimensional? PHILIPPE RIGOLLET: Oh, yeah, you're right, you're right. That was all p dimensional there. Yeah. So if b here, the matrix that I'm actually applying, has dimension p times n-- so even if epsilon was an n dimensional Gaussian vector, then b times epsilon is a p dimensional Gaussian vector now. So that's how I switch from p to n-- sorry, from n to p. Thank you. So you're right, beta hat minus beta is this guy. And so in particular, if I look at the expectation of the norm of beta hat minus beta squared, what is it? It's the expectation of the norm of some Gaussian vector. And so it turns out-- so maybe we don't have-- well, that's just also a property of a Gaussian vector. So if epsilon is N 0 sigma, then the expectation of the norm of epsilon squared is just the trace of sigma. Actually, we can probably check this by saying that this is the sum from j equal 1 to p of the expectation of beta hat j minus beta j squared. Since beta j is the expectation of beta hat j, this is actually equal to the sum from j equal 1 to p of the variance of beta hat j. And how do I read the variances in a covariance matrix? They are just the diagonal elements. So that's really just sigma jj. And so that's really equal to-- so that's the sum of the diagonal elements of this matrix. Let's call it sigma. So that's equal to the trace of x transpose x inverse. The trace is the sum of the diagonal elements of a matrix. And I still had something else. I'm sorry, this was sigma squared. I forget it all the time. So the sigma squared comes out. It's there. And the sigma squared comes out because the trace is a linear operator. If I multiply all the entries of my matrix by the same number, then all the diagonal elements are multiplied by the same number, so when I sum them, the sum is multiplied by the same number. So that's for the quadratic risk of beta hat. And now I need to tell you about x beta hat. x beta hat was something that was actually telling me-- that was the point that I projected onto the red line that I estimated. That was my x beta hat. That was my y minus the noise.
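Both statements -- the Gaussian distribution of beta_hat and the quadratic risk sigma^2 tr((X^T X)^{-1}) -- are easy to sanity check by simulation. A hedged sketch with an invented, frozen design matrix and repeated draws of the noise:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, sigma = 50, 3, 0.7
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # built once, then deterministic
beta = np.array([1.0, -2.0, 0.5])
XtX_inv = np.linalg.inv(X.T @ X)

reps = 20_000
beta_hats = np.empty((reps, p))
for r in range(reps):
    y = X @ beta + rng.normal(scale=sigma, size=n)   # fresh noise, same design
    beta_hats[r] = np.linalg.solve(X.T @ X, X.T @ y)

# Empirical mean and covariance vs. the theoretical N(beta, sigma^2 (X^T X)^{-1}).
print(beta_hats.mean(axis=0))                        # ~ beta (unbiased)
print(np.cov(beta_hats.T) / sigma**2)                # ~ (X^T X)^{-1}

# Quadratic risk E||beta_hat - beta||^2 vs. sigma^2 * trace((X^T X)^{-1}).
emp_risk = np.mean(np.sum((beta_hats - beta) ** 2, axis=1))
print(emp_risk, sigma**2 * np.trace(XtX_inv))        # should be close
```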
Now, this thing here-- so remember, we had this line, and I had my observation. And here, I'm really trying to measure this distance squared. This distance is actually quite important for me because it actually shows up in the Pythagoras theorem. And so you could actually try to estimate this thing. So what is the prediction error? So we said we have y minus x beta hat, and it's the norm of this thing we're trying to compute. But let's write this for what it is for one second. So we said that beta hat was x transpose x inverse x transpose y, and we know that y is x beta plus epsilon. So let's write this-- x beta plus epsilon. And actually, maybe I should not write it. Let me keep the y for what it is now. So that means that I have, essentially, the identity of Rn times y, minus this matrix times y. So I can factor y out, and that's the identity of Rn minus x x transpose x inverse x transpose, the whole thing times y. We called this matrix p because this was the projection matrix onto the linear span of the x's. So that means that if I take a point x and I apply p times x, I'm projecting onto the linear span of the columns of x. What happens if I do i minus p, times x? It's x minus px. So if I look at this picture-- so this is the point onto which I project. This is x. I project orthogonally to get p times x. And so what it means is that this operator i minus p is actually giving me this guy, this vector here-- x minus p times x. Let's say this is 0. This means that this vector, I can put it here. It's this vector here. And that's actually the orthogonal projection of x onto the orthogonal complement of the span of the columns of x. So if I project x, or if I look at x minus its projection, I'm basically projecting onto two orthogonal spaces. What I'm trying to say here is that this here is another projection matrix, p prime. That is just the projection matrix onto the orthogonal-- the projection onto the orthogonal of the column span of x. Orthogonal means the set of vectors that's orthogonal to everything in this linear space. So now, when I'm doing this, this is exactly what-- I mean, in a way, this is illustrating this Pythagoras theorem. And so when I want to compute the norm of this guy, the norm squared of this guy, I'm really computing-- if this is my y now, this is px of y, I'm really controlling the norm squared of this thing. So if I want to compute the norm squared-- so I'm almost there. So what am I projecting here with the orthogonal projector? So here, y, now-- I know that y is equal to x beta plus epsilon. So when I look at this matrix p prime times y, it's actually p prime times x beta plus p prime times epsilon. What's happening to p prime times x beta? Let's look at this picture. So we know that p prime takes any point here and projects it orthogonally on this guy. But x beta is actually a point that lives here. It's something that's on the linear span. So where do all the points that are on this line get projected to? AUDIENCE: The origin. PHILIPPE RIGOLLET: The origin, to 0. They all get projected to 0. And that's because I'm basically projecting something that's on the column span of x onto its orthogonal. So it's always 0 that I'm getting here. So when I apply p prime to y, I'm really just applying p prime to epsilon. So I know that now, this, actually, is equal to the norm of some multivariate Gaussian. What is the size of this Gaussian? What is the size of this matrix? Well, I actually had it there. It's i n, so it's n dimensional.
So it's some n dimensional Gaussian with mean 0. And what is the covariance matrix of p prime times epsilon? AUDIENCE: p p transpose. PHILIPPE RIGOLLET: Yeah, p prime p prime transpose, and we just said p prime transpose is p prime, so that's p prime squared. And we see that when we project twice, it's as if we projected only once. So here, this is N 0, p prime p prime transpose. That's the formula for the covariance matrix. But this guy is actually equal to p prime times p prime, which is equal to p prime. So now, what I'm looking for is the expectation of the norm squared. So that means that this whole thing here is actually equal to the trace. Oh, did I forget again a sigma squared? Yeah, I forgot it only here, which is good news. Otherwise I should assume that sigma squared is equal to 1. So the sigma squared is here. And then what I'm left with is sigma squared times the trace of p prime. At some point, I mentioned that the eigenvalues of a projection matrix were actually 0 or 1. The trace is the sum of the eigenvalues. So that means that the trace is going to be an integer: the number of non-zero eigenvalues. And the non-zero eigenvalues are just the dimension of the space onto which I'm projecting. Now, I'm projecting from something of dimension n onto the orthogonal of a space of dimension p. What is the dimension of the orthogonal of a space of dimension p when it sits in a space of dimension n? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: n minus p-- that's the so-called rank theorem, I guess, as a name. And so that's how I get this n minus p here. This is really just equal to n minus p. Yeah? AUDIENCE: Here, we're taking the expectation of the whole thing. PHILIPPE RIGOLLET: Yes, you're right. So that's actually the expectation of this thing that's equal to that. Absolutely. But I actually have much better. I know, even, what the norm that I'm looking at is going to be. What is going to be the distribution of this guy? The norm squared of a Gaussian: chi squared. So there's going to be some chi squared that shows up. And the number of degrees of freedom is actually going to be also n minus p. And maybe it's actually somewhere-- yeah, right here-- n minus p times sigma hat squared over sigma squared. This is my sigma hat squared. If I multiply by n minus p, I'm left only with this thing, and so that means that I get sigma squared times-- because I always forget my sigma squared-- I get sigma squared times this thing. And it turns out that the squared norm of this guy is actually exactly chi squared with n minus p degrees of freedom. So in particular, we know that the expectation of this thing is equal to sigma squared times n minus p. So if I divide both sides by n minus p, I'm going to have something whose expectation is sigma squared. And this something, I can actually compute. It depends on y and x, which I know, and beta hat, which I've just estimated. I know what n is. And p and n are the dimensions of my matrix x. So I'm actually given an estimator whose expectation is sigma squared. And so now, I actually have an unbiased estimator of sigma squared. That's this guy right here. And it's actually super useful. So those are called the-- this is the normalized sum of squared residuals. These are called the residuals. Those are whatever is residual when I project my points onto the line that I've estimated. And so in a way, those guys-- if you go back to this picture, this was yi and this was xi transpose beta hat.
So if beta hat is close to beta, the difference between yi and xi transpose beta hat should be close to my epsilon i. It's some sort of epsilon i hat. Agreed? And so that means that if I think of those as being epsilon i hat, they should be close to epsilon i, and so their norm should be giving me something that looks like sigma squared. And so that's why it actually makes sense. It's just magical that everything works out together, because I'm not projecting on the right line, I'm actually projecting on the wrong line. But in the end, things actually work out pretty well. There's one thing-- so here, the theorem is that this thing not only has the right expectation, but also has a chi squared distribution. That's what we just discussed. So here, I'm just telling you this. But it's not too hard to believe, because it's actually the norm of some vector. You could make this obvious, but again, I didn't want to bring in too much linear algebra. So to prove this, you actually have to diagonalize the matrix p. So you have to invoke the eigenvalue decomposition and the fact that the norm is invariant by rotation. So for those who are familiar with it, what I can do is just look at the decomposition of p prime into u d u transpose, where this is an orthogonal matrix and this is a diagonal matrix of eigenvalues. And when I look at the norm squared of this thing-- I mean, I have, basically, the norm squared of p prime times some epsilon. It's the norm of u d u transpose epsilon, squared. The norm of a rotation of a vector is the same as the norm of the vector, so this guy goes away. This is not actually-- I mean, you don't have to care about this if you don't understand what I'm saying, so don't freak out. This is really for those who follow. What is the distribution of u transpose epsilon? I take a Gaussian vector that has covariance matrix sigma squared times the identity, and I basically rotate it. What is its distribution? Yeah? AUDIENCE: The same. PHILIPPE RIGOLLET: It's the same. It's completely invariant, because the Gaussian thinks of all directions as being the same. So it doesn't really matter if I take a Gaussian or a rotated Gaussian. So this is also a Gaussian, so I'm going to call it epsilon prime. And I am left with just the norm of epsilon prime. So this is the sum of the dj's squared times epsilon prime j squared. And we just said that the eigenvalues of p are either 0 or 1, because it's a projector. And so here, I'm going to get only 0's and 1's. So I'm really just summing a certain number of the epsilon prime i squared. So squares of standard Gaussians-- sorry, with a sigma squared somewhere. And basically, how many am I summing? Well, n minus p of them-- the number of non-zero eigenvalues of p prime. So that's how it shows up. When you see this, what theorem am I using here? Cochran's theorem. This is this magic box. I'm actually going to dump everything that I'm not going to prove to you in there and say, oh, this is actually Cochran's. No, Cochran's theorem is really just telling me something about orthogonality of things, and therefore, independence of things. And Cochran's theorem was something that I used when I wanted to use what? That's something I used just one slide before. The student t-test, right? I used Cochran's theorem to see that the numerator and the denominator of the student statistic were independent of each other. And this is exactly what I'm going to do here. I'm going to actually write a test to test, maybe, whether the beta j's are equal to 0.
I'm going to form a numerator, which is beta hat j minus beta j. This is normal. And we know that beta hat has a Gaussian distribution. I'm going to standardize by something that makes sense to me. And I'm not going to go into details, because we're out of time. But there's the sigma hat that shows up. And then there's a gamma j, which takes into account the fact that my x's-- if I look at the distribution of beta hat, which is gone, I think-- yeah, beta hat is gone. Oh, yeah, that's where it is. The covariance matrix depends on this matrix x transpose x. So this will show up in the variance. In particular, the diagonal elements are going to play a role here. And so that's what my gammas are. The gamma j is the j-th diagonal element of this matrix. So we'll resume that on Tuesday, so don't worry too much if this is going too fast. I'm not supposed to cover it, but just so you get a hint of why Cochran's theorem actually was useful. So I don't know if we actually ended up recording. I have your homework. And as usual, I will give it to you outside.
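As a wrap-up of this lecture's two punchlines, here is a hedged end-to-end sketch (invented data; the true beta and sigma appear only so the output can be compared against them): the unbiased variance estimator sigma_hat^2 = ||y - X beta_hat||^2 / (n - p), and the standardized coordinates beta_hat_j / (sigma_hat * sqrt(gamma_j)) with gamma_j the j-th diagonal entry of (X^T X)^{-1}, which under h0: beta_j = 0 should behave like a t distribution with n - p degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, sigma = 60, 3, 1.3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([0.5, 2.0, 0.0])                    # last coordinate: h0 is true
y = X @ beta + rng.normal(scale=sigma, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
residuals = y - X @ beta_hat

# Unbiased estimator of sigma^2: normalized sum of squared residuals.
sigma2_hat = residuals @ residuals / (n - p)
print(sigma2_hat, sigma**2)                         # should be in the same ballpark

# t-statistics for h0: beta_j = 0, with gamma_j the j-th diagonal of (X^T X)^{-1}.
gamma = np.diag(XtX_inv)
t_stats = beta_hat / np.sqrt(sigma2_hat * gamma)
print(t_stats)  # large |t| for the first two coordinates; moderate |t| for the last
```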
MIT_18650_Statistics_for_Applications_Fall_2016
5_Maximum_Likelihood_Estimation_cont.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So I'm using a few things here, right? I'm using the fact that KL is non-negative. But KL is equal to 0 when I take twice the same argument. So I know that this function is always non-negative. So that's theta, and that's KL of P theta star, P theta. And I know that at theta star, it's equal to 0. OK? I could be in the case where I have this happening. I have two-- let's call it theta star prime. I have two minimizers. That could be the case, right? I'm not saying that-- so KL is 0 at the minimum. That doesn't mean that I have a unique minimum, right? But it does, actually. What do I need to use to make sure that I have only one minimum? So the definiteness is guaranteeing to me that there's a unique P theta star that minimizes it. But then I need to make sure that there's a unique-- from this unique P theta star, I need to make sure there's a unique theta star that defines this P theta star. Exactly. All right, so I combine definiteness and identifiability to make sure that there is a unique minimizer, so that this case cannot exist. OK, so basically, let me write what I just said. So definiteness implies that P theta star is the unique minimizer of P theta maps to KL of P theta star, P theta. So definiteness only guarantees that the probability distribution is uniquely identified. And identifiability implies that theta star is the unique minimizer of theta maps to KL of P theta star, P theta, OK? So I'm basically doing the composition of two injective functions. The first one is the one that maps, say, theta to P theta. And the second one is the one that maps P theta to the set of minimizers, OK? So at least morally, you should agree that theta star is the minimizer of this thing. Whether it's unique or not, you should agree that it's a good one. So maybe you can think a little longer on this. So thinking about this being the minimizer, then it says, well, if I actually had a good estimate for this function, I would use the strategy that I described for the total variation, which is: well, I don't know what this function looks like. It depends on theta star. But maybe I can find an estimator of this function that fluctuates around this function, and such that when I minimize this estimator of the function, I'm actually not too far, OK? And this is exactly what drives me to do this, because I can actually construct an estimator. I can actually construct an estimator of the KL that is actually close to the KL, all right? So I define KL hat. So all we did is just replace expectations with respect to theta star by averages. That's what we did. So if you're a little puzzled by this arrow, that's all it says: replace this guy by this guy. It has no mathematical meaning. It just means, replace this by that. And now that actually tells me how to get my estimator. It just says, well, my estimator, KL hat, is equal to some constant which I don't know-- I mean, it certainly depends on theta star, but I won't care about it when I'm trying to minimize-- minus 1/n sum for i from 1 to n of log f theta of x. So here I'm writing it with the density. You have it with the PMF on the slides, and so you have the two versions in front of you, OK? Oh sorry, I forgot the xi.
Now clearly, this function I know how to compute. If you give me a theta, since I know the form of the density f theta, I can compute this quantity. The constant I don't know, but I don't care: it just shifts the value of the function I'm trying to minimize, and the set of minimizers is not going to change. So this is my estimation strategy: minimize in theta KL hat of (P theta star, P theta). Now let's make sure we all agree on what we want: the arg min, right? arg min means the theta that achieves the minimum, rather than the value of the minimum itself. Well, finding the arg min of this thing is equivalent to finding the arg min of a constant minus 1/n times the sum for i from 1 to n of log f theta of xi--I just plugged in the definition of KL hat. Now, I claim that taking the arg min of a constant plus a function, or the arg min of the function alone, is the same thing. Is anybody not comfortable with this idea? OK, so these are the same. By the way, I should probably switch to the next slide, because I'm writing the same thing, but better--and with the PMF rather than the PDF. Next, the arg min of the negative of a thing is the arg max of the thing without the negative: the arg max over theta of 1/n times the sum for i equal 1 to n of log f theta of xi. And taking the arg max of the average or of the sum makes no difference--adding constants or multiplying by positive constants changes neither the arg min nor the arg max. Now I have a sum of logs, which is the log of the product: the arg max of the log of f theta of x1 times f theta of x2, all the way to f theta of xn. But the log is an increasing function, so maximizing the log of a function or maximizing the function itself is the same thing. The value changes, but the arg max does not. Everybody agrees with this? So this is equivalent to the arg max over theta of the product for i equal 1 to n of f theta of xi, and that's because x maps to log x is increasing. So now I've gone from minimizing the KL, to minimizing an estimate of the KL, to maximizing this product. Well, this chapter is called maximum likelihood estimation. The "maximum" comes from having turned our minimization into a maximization, and this function here is called the likelihood--they call it that because it's some measure of how likely it is that theta was the parameter that generated the data. OK, so we'll go to the formal definition in a second. But first, let me give you some intuition as to why this makes sense as a measure of likelihood.
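As a sanity check on this chain of equivalences, here is a minimal numerical sketch--assuming, for concreteness, a Bernoulli model, which the lecture takes up below: the arg min of the KL estimate, the arg max of the log-likelihood, and the arg max of the likelihood all land on the same theta.

```python
import numpy as np

# Minimal sketch (assumed Bernoulli(p*) model): the three optimization
# problems in the chain of equivalences share the same optimizer.
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=200)           # data from p* = 0.3
grid = np.linspace(0.01, 0.99, 999)          # candidate thetas

# (1/n) sum_i log f_theta(x_i) for Bernoulli reduces to this line:
log_f = x.mean() * np.log(grid) + (1 - x.mean()) * np.log(1 - grid)

kl_hat = -log_f            # up to the unknown additive constant
loglik = len(x) * log_f    # sum of the logs
lik = np.exp(loglik)       # product of the f_theta(x_i)

assert grid[np.argmin(kl_hat)] == grid[np.argmax(loglik)] == grid[np.argmax(lik)]
print(grid[np.argmax(loglik)], x.mean())     # both close to 0.3
```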
Let's think, for simplicity, of the following model: I'm on the real line, and the observations are distributed according to some N(theta, 1), indexed by theta in the real line. So I don't care about the variance--I know it's 1--and the only thing I need to figure out is the mean of those guys, OK? Now, I have these n observations. If you remember from your probability class--are you familiar with the concept of a joint density? For multivariate observations, the joint density of independent random variables is just the product of their individual densities. So when I look at the product for i equal 1 to n of f theta of xi, this is really the joint density of x1 through xn. And if I take a product of densities, it's still a density--but this time on R to the n. So think of it in R2: the joint density of two Gaussians is something that looks like a bell-shaped surface in two dimensions, centered at the point (theta, theta), since both coordinates have mean theta. Now, it's going to be hard for me to draw pictures in n dimensions--already in two dimensions, I can promise you it's not very easy--so I'm just going to assume n is equal to 1 for the sake of illustration. So I have one observation, and I know that f theta looks like this bell curve. What I'm looking at is the value of f theta at my observation--call it x1. My principle tells me: find the theta that makes this guy the most likely. What is the likelihood of my x1? Well, it's just the value of the function--that's this value here. And if I wanted to find the most likely theta to have generated this x1, what I would need to do is shift this bell curve and center it at x1. So my maximum likelihood estimator here would be theta hat equal to x1--just the observation. Because if I have only one observation, what else am I going to do? And if you have more observations, say n of them, you can think of it this way: I look at the value of the density at each of these guys--this value, this value, this value--I take their product, and I try to make that product large. Why do I take the product? Well, because I'm trying to maximize all the values together, and I need to turn them into one number that I can maximize; the product is the natural way of doing it, whether you motivate it by the KL principle or by maximizing the joint density. OK, so that's why, visually, this is the maximum likelihood. It says that if my observations are here, then this mean theta is more likely than that one. Because if I center the curve at the wrong theta, every observation gets a very small value--the value in the tail, very close to 0. But as soon as my bell-shaped curve starts covering all my points, all the values go up. All right, so now a short break from statistics: the maximum likelihood principle involves maximizing a function, so I just want to make sure that we're all on the same page about how to maximize functions.
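Before the optimization interlude, here is a small numerical version of the bell-curve picture above, assuming the N(theta, 1) model: the joint density of the sample, viewed as a function of theta, peaks when the bell is shifted to cover the points.

```python
import numpy as np
from scipy.stats import norm

# Sketch: the joint density of the sample, as a function of theta, is the
# product of the individual N(theta, 1) densities, and it is maximized when
# the bell curve is centered over the data.
rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=5)             # a few observations, true theta = 2
grid = np.linspace(-5, 10, 3001)

joint = np.array([norm.pdf(x, loc=t, scale=1.0).prod() for t in grid])
print(grid[np.argmax(joint)], x.mean())      # arg max sits at x bar, up to grid step
```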
In most instances, it's going to be a one-dimensional function, because theta is going to be a one-dimensional parameter. Like here it's the real line. So it's going to be easy. In some cases, it may be a multivariate function and it might be more complicated. OK, so let's just make this interlude. So the first thing I want you to notice is that if you open any book on what's called optimization, which basically is the science behind optimizing functions, you will talk mostly-- I mean, I'd say 99.9% of the cases will talk about minimizing functions. But it doesn't matter, because you can just flip the function and you just put a minus sign, and minimizing h is the same as maximizing minus h and the opposite, OK? So for this class, since we're only going to talk about maximum likelihood estimation, we will talk about maximizing functions. But don't be lost if you decide suddenly to open a book on optimization and find only something about minimizing functions. OK, so maximizing an arbitrary function can actually be fairly difficult. If I give you a function that has this weird shape, right-- let's think of this polynomial for example-- and I wanted to find the maximum, how would we do it? So what is the thing you've learned in calculus on how to maximize the function? Set the derivative equal to 0. Maybe you want to check the second derivative to make sure it's a maximum and not a minimum. But the thing is, this is only guaranteeing to you that you have a local one, right? So if I do it for this function, for example, then this guy is going to satisfy this criterion, this guy is going to satisfy this criterion, this guy is going to satisfy this criterion, this guy here, and this guy satisfies the criterion, but not the second derivative one. So I have a lot of candidates. And if my function can be really anything, it's going to be difficult, whether it's analytically by taking derivatives and setting them to 0, or trying to find some algorithms to do this. Because if my function is very jittery, then my algorithm basically has to check all candidates. And if there's a lot of them, it might take forever, OK? So this is-- I have only one, two, three, four, five candidates to check. But in practice, you might have a million of them to check. And that might take forever. OK, so what's nice about statistical models, and one of the things that makes all these models particularly robust, and that we still talk about them 100 years after they've been introduced is that the functions that-- the likelihoods that they lead for us to maximize are actually very simple. And they all share a nice property, which is that of being concave. All right, so what is a concave function? Well, by definition, it's just a function for which-- let's think of it as being twice differentiable. You can define functions that are not differentiable as being concave, but let's think about it as having a second derivative. And so if you look at the function that has a second derivative, concave are the functions that have their second derivative that's negative everywhere. Not just at the maximum, everywhere, OK? And so if it's strictly concave, this second derivative is actually strictly less than zero. And particularly if I think of a linear function, y is equal to x, then this function has its second derivative which is equal to zero, OK? So it is concave. But it's not strictly concave, OK? If I look at the function which is negative x squared, what is its second derivative? Minus 2. 
So it's strictly negative everywhere, OK? So this is the canonical example of a strictly concave function. If you want a picture of one, think of negative x squared: a parabola pointing downwards. We can also talk about strictly convex functions--convex is just what happens when the negative of the function is concave. That translates into a second derivative which is non-negative, or strictly positive, depending on whether you're talking about convexity or strict convexity. But convex functions are convenient when you're trying to minimize something, and since we're trying to maximize, we're looking for concave. So here are some examples; let's go through them quickly. Here I made my life a little uneasy by talking about functions of theta, because I'm thinking of likelihoods, where the parameter is theta. So I have h of theta. If I start with h of theta equals negative theta squared, then as we said, the second derivative h double prime of theta is minus 2, which is strictly negative, so this function is strictly concave. Another function is h of theta equals square root of theta. What is the first derivative? 1 over 2 square root of theta. What is the second derivative? That's theta to the negative 1/2, so I pick up another factor of negative 1/2, and I get negative 1/4, with theta to the 3/2 downstairs. And that's strictly negative for theta strictly larger than 0. I need theta non-negative so that the square root is well-defined, and strictly larger than 0 so that this thing does not blow up to infinity. And it's true: if you think about this function, the first derivative already goes to infinity at 0, and it's a concave function. Another one is the log, of course. What is the derivative of the log? h prime of theta is 1 over theta. And the second derivative is negative 1 over theta squared, which again is negative for theta strictly positive--and here I really do need theta strictly positive for the log to be defined. And sine--let's do one more. So h of theta is sine of theta, but I take it only on an interval, because you want to think of this function as pointing downwards everywhere. In particular, you don't want it to have an inflection point--to go down and then up and then down and then up--because that's not concave. And sine certainly goes up and down, right? So we restrict it to an interval where sine behaves. What does the sine function look like? At 0 it's 0, and it's going up. Where is its first maximum? STUDENT: [INAUDIBLE] PROFESSOR: I'm sorry. STUDENT: Pi over 2. PROFESSOR: Pi over 2, where it takes the value 1. Then it goes down again until pi, and past pi the inflection changes. So we stop at pi, and on 0 to pi this function certainly looks like a parabola pointing downwards. You can check that it works with the derivatives: the derivative of sine is cosine, and the derivative of cosine is negative sine. And sine between 0 and pi is positive, so this entire thing is negative. OK?
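A quick symbolic check of these four examples--a sketch, nothing assumed beyond the functions named above:

```python
import sympy as sp

# A twice-differentiable function is strictly concave where its second
# derivative is strictly negative; verify the lecture's examples.
t = sp.symbols('theta', positive=True)
for h in (-t**2, sp.sqrt(t), sp.log(t), sp.sin(t)):
    print(h, sp.simplify(sp.diff(h, t, 2)))
# -theta**2    -> -2                     (strictly concave everywhere)
# sqrt(theta)  -> -1/(4*theta**(3/2))    (strictly concave on theta > 0)
# log(theta)   -> -1/theta**2            (strictly concave on theta > 0)
# sin(theta)   -> -sin(theta)            (strictly concave on (0, pi))
```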
And you know, I could come up with a lot of examples, but let's just stick to those. There's the linear function, of course--an affine function is going to be concave, but it's actually going to be convex as well, which means it's certainly not strictly concave or strictly convex, OK? So here's your standard picture. And the property we're going to be using is this: if a strictly concave function has a maximum--which is not always the case--then a local maximum must be the global maximum. The fact that it goes up and then down and never up again means there's only one maximum that can exist. Now if you look, for example, at the square root function on the entire positive real line, it never attains a maximum--it just goes to infinity as x goes to infinity. So if I wanted to find a maximum, I would have to stop somewhere and say that the maximum is attained at the right-hand side. OK, so that's the beauty of convex functions, or concave functions: essentially, these functions are easy to maximize. If I tell you a function is strictly concave, you take the first derivative, set it equal to 0, and if you find a point that satisfies this, it must be the global maximum, OK? STUDENT: What if your set theta was [INAUDIBLE] then couldn't you have a function that, by the definition, is concave, with two upside-down parabolas on two disjoint intervals, but yet it has two global maximums? PROFESSOR: So you want the function to be concave on what? On the convex hull of the intervals? Or you want it to be-- STUDENT: [INAUDIBLE] just said that any subset. PROFESSOR: OK, OK. You're right--you're pointing to a weakness in the definition. Let's just say that theta is a convex set, and then you're good. Since I said this is true only on theta, I could take pieces of concave functions--do this on one piece, this on the next--and I would have a bunch of maxima. What I want is to think of it as one global function on some convex set. So think of theta as being convex for this guy--an interval, if we're on the real line. OK, so, more generally, we can define concave functions in higher dimensions, and that will be useful if theta is not just one parameter but several. For that, you need to remind yourself of Calculus II: there's a generalization of the derivative called the gradient, which is a vector where each coordinate is the partial derivative with respect to the corresponding coordinate of theta. And the Hessian is a matrix, essentially the generalization of the second derivative. I denote it by nabla squared, but you can write it the way you want. This matrix has as its ij-th entry the second partial derivative of h with respect to theta i and theta j. Who has never seen that? OK. So now, the first-order condition for a maximum is going to be the same: saying that the gradient is equal to zero is the same as saying that each of its coordinates is equal to zero.
And that's going to be the condition for a global maximum here. So to check concavity, we need to check that a matrix is negative, in the right sense. And there is a notion for comparing a symmetric matrix to zero--which is the case here--and it's exactly this one: you pre- and post-multiply by the same x. So you have your Hessian, a d by d matrix if you have a d-dimensional parameter; you pre-multiply by x transpose and post-multiply by x, and this has to be non-positive if you want concave, and strictly negative for all non-zero x if you want strictly concave. That's just the real generalization--you can check for yourself that in dimension 1 this is the same thing. Why? Because in dimension 1, pre- and post-multiplying by x is the same as multiplying by x squared; I can just move my x's around. So in dimension 1 the condition says that the second derivative times x squared has to be less than or equal to zero. And I need this for all x's that are not zero, because taking x equal to zero would make it equal to zero for free. OK, so some examples. Now I have functions that depend on two parameters, theta1 and theta2. So I take theta in R2, and I look at the function h of theta. Can somebody tell me what h of theta is? STUDENT: [INAUDIBLE] PROFESSOR: Minus theta1 squared minus 2 theta2 squared, OK. So let's compute the gradient of h of theta. It's going to have two coordinates. To get the first coordinate, I take the derivative with respect to theta1, thinking of theta2 as a constant--so the second term goes away, and I get negative 2 theta1. And when I take the derivative with respect to theta2, thinking of the first part as constant, I get minus 4 theta2. Is that clear for everyone? That's just the definition of partial derivatives. And then for the Hessian, I get a 2 by 2 matrix. The first entry I get by taking the derivative of negative 2 theta1 with respect to theta1: that's minus 2. The off-diagonal entries: the derivative of negative 2 theta1 with respect to theta2 is zero--I treat that guy as a constant--and likewise the derivative of minus 4 theta2 with respect to theta1 is zero. And the derivative of minus 4 theta2 with respect to theta2 is minus 4. So now I want to check that x transpose nabla squared h of theta x is negative. What I get is minus 2 x1 squared minus 4 x2 squared, because the matrix is diagonal, so all it does is weight the squares of the x's. This is certainly non-positive, and if one of the two coordinates is non-zero--which means x is non-zero--then it's strictly negative. So this function is strictly concave, and it looks like a parabola pointing downwards, slightly distorted in one direction.
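Here is a small numerical sketch of the example just worked out: the Hessian is diag(-2, -4), and pre- and post-multiplying by any non-zero x gives a strictly negative number.

```python
import numpy as np

# h(theta) = -theta1^2 - 2*theta2^2 has gradient (-2*theta1, -4*theta2)
# and constant Hessian diag(-2, -4), so x' H x = -2*x1^2 - 4*x2^2 < 0
# for every x != 0: strictly concave.
H = np.diag([-2.0, -4.0])
rng = np.random.default_rng(2)
for _ in range(5):
    x = rng.normal(size=2)
    assert x @ H @ x < 0     # pre- and post-multiply by the same x
print("x' H x < 0 on random directions: strictly concave")
```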
For some of you it might have been since high school. So just remind yourself how to do second derivatives, Hessians, and things like this. Here's another one as an exercise: h is minus (theta1 minus theta2) squared. For this one the Hessian is actually not going to be diagonal. Who would like to do it now in class? OK, thank you--this is not a calculus class, so just do it as a calculus exercise. And you can do it for the log one as well. Now, there is a nice recipe for concavity that works for the second and third examples. If you look at those particular functions, what I'm doing is taking, first, a linear combination of my arguments, and then a concave function of that. And this is always going to work--it always gives me a concave function. So the computations I just made, I actually never made when I prepared those slides, because I don't have to: a concave function of a linear combination of the arguments is always concave. That's an easy way to check, or at least a sanity check. All right, and as I said, finding maximizers of a concave or strictly concave function works as in the one-dimensional case. In one dimension, we agreed: take the derivative and set it to zero. In the higher-dimensional case, take the gradient and set it equal to zero. Again, that's calculus. So this gives me equations--the first is one equation in theta; the second is a system in theta1 through theta d. But it doesn't mean that because I can write this equation, I can actually solve it. The equation might be super nasty: some polynomial plus exponentials plus logs equals zero, some crazy thing. And so, for concave functions, since we know there's a unique maximizer, there's this theory of convex optimization--really, since those books talk about minimizing, you have to flip things around, but you can think of it as the theory of concave maximization. It gives you algorithms to solve this numerically and fairly efficiently--that means fast. Even if d is of size 10,000, you wait one second and it tells you what the maximum is. And that's what machine learning is about: if you've taken any class on machine learning, there's a lot of optimization, because they have really, really big problems to solve. In this class, since this is more introductory statistics, we will usually have a closed form: the maximum likelihood estimator will be something like theta hat equals x bar, and that will be it. So--has anybody seen convex optimization before? Let me just give you an intuition for why those functions are easy to maximize or minimize. In one dimension, it's very easy to see, and the reason is this: to maximize a concave function, all I need is to be able to query a point and get as an answer the derivative of the function there. So say this is the function I want to optimize; I've been running my algorithm for half a second, and it's at this point here--that's the candidate. Now I ask: what is the derivative of my function here? It's going to give me a value.
And this value is going to be either negative, positive, or zero. If it's zero, that's great: I'm at the maximum and I can go home--I know there's a unique maximum, and that's what I wanted to find. If it's positive, it tells me that I'm to the left of the optimizer, of the optimal value. And if it's negative, it means I'm to the right of the value I'm looking for. So most convex optimization methods basically say: query the derivative; if it's positive, move to the right, and if it's negative, move to the left. Now, by how much you move is basically why people write books. In higher dimensions it's a little more complicated. Think about two dimensions: I can only get back a vector, and the vector only tells me, here is the half of the space in which you should move. In one dimension, if you tell me move to the right, I know exactly which direction to go. But in two dimensions, you're basically telling me: move in this general direction. There's a line on the floor I cannot move behind--but even with that line drawn, telling me to move only to that side of it, there are many directions on the allowed side I could take. That's also why there's a lot you can do in optimization. Still, putting this line on the floor tells me: do not go backwards. And that's very important--it always tells me which direction I should be going. All right, so that's what's behind the notion of the gradient descent algorithm, steepest descent--or steepest ascent, actually, since we're trying to maximize. OK, so let's move on; this course is not about optimization. So as I said, the likelihood was this guy: the product of the f theta of the xi's. And one way you can think of it is as the joint distribution of my data at the parameter theta. So now the likelihood, formally. Here I give myself the model (E, P theta), and I'm going to assume that E is discrete, so that I can talk about PMFs. But everything we do, redo for yourself replacing PMFs by PDFs, and everything's going to be fine--we'll do it in a second. All right, so the likelihood of the model. Here I'm not looking at the likelihood of a parameter; I'm looking at the likelihood of the model, so it's actually a function of the parameter--and I'm even going to make it a function of the points x1 to xn. So I have a function whose inputs are the points x1 to xn and a candidate parameter theta--not the true one, a candidate. And what I'm going to do is look at the probability that my random variables, under this distribution P theta, take these exact values x1, x2, up to xn. Now remember, if my data is independent, then the probability of this intersection is just the product of the probabilities, and it would look something like this. But I can define a likelihood even if I don't have independent random variables--I would just have to understand how to compute these joint probabilities. Think of them as independent, though, because that's all we're going to encounter in this class.
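Before the Bernoulli example, here is a minimal sketch of the one-dimensional move-right/move-left idea from the optimization aside above. The step size and the example function are ad hoc choices for illustration, not anything from the slides.

```python
# Query the derivative; if it's positive move right, if negative move left.
def grad_ascent(dh, theta0, step=0.1, iters=200):
    theta = theta0
    for _ in range(iters):
        theta += step * dh(theta)   # the sign of dh decides the direction
    return theta

# Example: maximize h(theta) = -(theta - 3)^2, so dh(theta) = -2*(theta - 3).
print(grad_ascent(lambda t: -2 * (t - 3), theta0=0.0))   # converges to ~3.0
```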
OK, so think of Bernoullis, for example. Here is my example of a Bernoulli: my model is ({0,1}, Bernoulli(p)), with p in the interval (0,1). As a side remark, I can write the PMF of a Bernoulli in a very concise form. If I ask you what the PMF of a Bernoulli is, you could tell me: the probability under p that X equals 0 is 1 minus p, and the probability under p that X equals 1 is p. But I can be a bit smart and say that for any little x that's either 0 or 1, the probability under p that X equals little x can be written compactly as p to the x, times (1 minus p) to the (1 minus x). You can check that this is the right form because you only have to check two values of x: if you plug in 1, you keep only the p; if you plug in 0, you keep only the 1 minus p. That's just a trick--I could have done it many other ways. Agreed? Another one, which we're not going to use, would be x times p, plus (1 minus x) times (1 minus p). But the power form is going to be convenient, so forget about that other guy for a second. So now, the likelihood is just the function computing the probability that X1 equals little x1, and so on. So the likelihood is L of x1 through xn--let me write those as lowercase so you know I'm talking about values, small x's--and then of course p. Sometimes you even put a semicolon before the p, so you know those two kinds of arguments are treated differently. And this thing is equal to what? Well, it's just the probability under p that X1 is little x1, all the way to Xn is little xn. That's just the definition. All right, so now let's start working. We wrote the definition, and now we want to make it look like something we could actually maximize--an algebraic function of p. So what is the first thing I want to use? I have the probability of an intersection of independent events, so it's the product of the probabilities: the product for i equal 1 to n of the probability under p that Xi equals little xi. That's independence. And now I'm starting to mean business, because each factor has the closed form I still have to justify calling convenient: p to the xi, times (1 minus p) to the (1 minus xi). And since they all have the same parameter p, taking the product of p to the xi over all i gives p to the sum of the xi's. Everybody agrees with this? So this equals p to the sum of the xi's, times (1 minus p) to the (n minus the sum of the xi's). If you don't feel comfortable writing it directly, you can observe that each factor is equal to (p over (1 minus p)) to the xi, times (1 minus p). So when I take the product, I get the first guy to the power of the sum, and the second guy to the power n, and then I can rewrite it as before if I want. And so now--well, that's what we have here.
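A quick sketch checking the compact Bernoulli PMF and the likelihood just derived:

```python
import numpy as np

# p^x * (1-p)^(1-x) reproduces P(X=0) = 1-p and P(X=1) = p, and the
# likelihood of a sample is p^(sum x) * (1-p)^(n - sum x).
def pmf(x, p):
    return p**x * (1 - p)**(1 - x)

p = 0.3
assert pmf(0, p) == 1 - p and pmf(1, p) == p

x = np.array([1, 0, 1, 1, 0])
lik = np.prod(pmf(x, p))
assert np.isclose(lik, p**x.sum() * (1 - p)**(len(x) - x.sum()))
```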
And now I am in business, because I can hope to maximize this function. And how do we maximize it? All I have to do is take the derivative. Do you want to do it? Let's take the derivative--sorry, I haven't said it yet, but the maximum likelihood principle is just to maximize this thing; we'll get there in a moment. OK, so let's do the Poisson model first. If you want to do it for the Poisson model, let's write the likelihood. Right now I'm not maximizing anything; I'm just computing the likelihood function. So--what is my sample space for Poisson? STUDENT: Positives. PROFESSOR: The nonnegative integers. So let me write it like this: Poisson(lambda), and I'm going to take lambda to be positive. That means that the probability under lambda that X equals little x, for x in the sample space, is lambda to the x, over x factorial, times e to the minus lambda. That's the same kind of compact form as the one over there--just a different one. And so when I write my likelihood--again, little x's--it's equal to what? Well, it's the probability under lambda that X1 is little x1, all the way to Xn is little xn, which is equal to the product, just by independence. And each factor is this formula with xi plugged in: lambda to the xi, divided by xi factorial, times e to the minus lambda. Now, part of this is going to be nice and part not so nice, but let's write it. When I take the product, I pick up lambda to the sum of the xi's; I pick up e to the minus n lambda; and downstairs I pick up the product of the factorials, x1 factorial all the way to xn factorial. So I get lambda to the sum of the little xi's, times e to the minus n lambda, over the product of the factorials. That might look freaky at this point, but remember, this is a function we will be maximizing, and the denominator does not depend on lambda. We know that maximizing this function with this denominator, or any other denominator--including 1--gives me the same arg max. So it won't be a problem; as long as it does not depend on lambda, this thing goes away. OK, so in the continuous case, I cannot write the likelihood this way, right? If I did, the probability would be equal to what? Zero. So it's not very helpful. And so what we do is define the likelihood as the product of the f theta of xi. Now, that would be a jump if I just told you, define it like that, go home, and don't discuss it. But we know this is exactly what came out of the estimated KL divergence--well, actually, I think I erased it; it was just behind. The strategy that consists of estimating the KL divergence and minimizing it is exactly doing this.
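Before moving to the Gaussian case, here is a sketch of the Poisson likelihood just computed. Note that the factorial term doesn't depend on lambda, so dropping it leaves the arg max unchanged.

```python
import numpy as np
from scipy.stats import poisson

# Poisson likelihood: lambda^(sum x) * e^(-n*lambda) / prod(x_i!).
x = np.array([2, 0, 3, 1, 2])
lam_grid = np.linspace(0.1, 6.0, 600)

loglik = x.sum() * np.log(lam_grid) - len(x) * lam_grid   # factorials dropped
full_loglik = np.array([poisson.logpmf(x, lam).sum() for lam in lam_grid])

# The two differ by a constant in lambda, so the arg max is identical.
assert np.argmax(loglik) == np.argmax(full_loglik)
print(lam_grid[np.argmax(loglik)], x.mean())   # both near x bar = 1.6
```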
So in the Gaussian case--let's write it and see what the likelihood looks like. I have a Gaussian experiment here: I take mu and sigma squared as my two parameters. My sample space is still R--those are just my observations--and the model is N(mu, sigma squared), with mu in R and sigma squared in, say, (0, infinity). OK, so that's my Gaussian model. Yes? STUDENT: [INAUDIBLE] PROFESSOR: No, there's no--I mean, there's no difference. On all the slides I put the curly brackets; here I'm just being lazy--I like those curvy parentheses. All right, so let's write it. By definition, L of x1 through xn, mu, sigma squared is the product for i equal 1 to n of f theta of little xi. Now, think about it: in the discrete case we always had an extra line, the one saying that this is the joint probability that the variables all take those values. Here too I could have a line saying it's the joint probability distribution of the xi's--and if they're not independent, it's not going to be the product. But since we're only dealing with independent observations in the scope of this class, this is the only definition we're going to use. And from here on, I will skip that step when I talk about the discrete case as well, because there too the observations are independent. Agreed? So we start with this, which we agreed is the definition for this particular case. And now you need to know what this density is--sorry, that's not theta, I should write mu, sigma squared. It's the product of 1 over (sigma square root 2 pi), times the exponential of minus (xi minus mu) squared over (2 sigma squared). That's the Gaussian density with parameters mu and sigma squared. I just plugged in this formula, which I'm not deriving--you just have to trust me; it's in any book, and I will give it to you. You're not expected to know it by heart--though if you do your homework every week, then without wanting to, you will definitely use some of your brain to remember that thing. OK, and so now: I have this constant in front, 1 over (sigma square root 2 pi), that I can pull out, so I get 1 over (sigma square root 2 pi) to the power n. Then I have the product of exponentials, which we know is the exponential of the sum: the exponential of minus 1 over (2 sigma squared) times the sum of (xi minus mu) squared--I put the 1 over (2 sigma squared) outside the sum, and that's how this guy shows up. It's just the product of the densities evaluated, respectively, at x1 through xn. Any questions about computing those likelihoods? Yes? STUDENT: Why [INAUDIBLE] PROFESSOR: Oh, that's a typo, thank you--I probably copied it from the previous slide. This line should say: for any x1 to xn in R to the n. Good catch. And really it's E to the n--my sample space, always.
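A minimal sketch of the Gaussian likelihood just written out, taking the log already, since that's where we're headed:

```python
import numpy as np

# Log of (1/(sigma*sqrt(2*pi)))^n * exp(-(1/(2*sigma^2)) * sum (x_i - mu)^2).
def gaussian_log_lik(x, mu, sigma2):
    n = len(x)
    return (-n / 2 * np.log(2 * np.pi * sigma2)
            - np.sum((x - mu)**2) / (2 * sigma2))

x = np.array([1.2, 0.7, 2.1, 1.5])
print(gaussian_log_lik(x, mu=x.mean(), sigma2=1.0))
```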
OK, so what is maximum likelihood estimation? Go back to the estimation strategy, which consisted of replacing the expectation with respect to theta star by an average over the data in the KL divergence: the things we actually plugged in there were not arbitrary small xi's--they were the random variables, capital Xi. So the maximum likelihood estimator takes the likelihood, which is a function of little x's, and evaluates it at the data. It looks at the function, at the data, and at the parameter theta--that's the first thing--and then the maximum likelihood estimator maximizes this, OK? So in a way, it's a function that couples together the data, capital X1 to capital Xn, with the parameter theta, and then tries to maximize. If this is just a little hard for you to digest: the likelihood is formally defined as a function of x, the same way I'd write f of x to define any function. But really, the only x arguments we're going to evaluate this function at are the random variables--the data. So if you want, you can think of those arguments not as parameters of this function but as random variables themselves. Is there any question? STUDENT: [INAUDIBLE] those random variables [INAUDIBLE]? PROFESSOR: So it's always the same thing in stats: you first design your estimator as a function of random variables, and then, once you get data, you plug it in. But we want to keep them as random variables because we want to understand what the fluctuations are. So we keep them as random variables for as long as we can; we spit out the estimator as a function of the random variables, and only when we want to compute it from data do we plug the numbers in. So keep the random variables for as long as you can--unless I give you actual numbers, those are random variables. OK, one possible source of confusion if you've seen a stats class before: sometimes there's a notation where the lowercase x is the realization of the uppercase X. That is not the case here. When I write this, it's the same as writing f of x equals x squared--x is just the argument of a function I want to define; those are just generic x's. So, with the typo corrected, this should say: for any x1 to xn, I'm just describing a function. And the only place I'm interested in evaluating that function, at least in its first n arguments, is at the n random variables that are my observations. Now, there are actually texts, people doing research, on when the maximum likelihood estimator exists. Trouble can come with infinite parameter sets Theta: this thing can diverge, there may be no global maximum, crazy things can happen. We're always going to be in a case where the maximum likelihood estimator exists--and if it doesn't, it means you need to restrict your parameter space, capital Theta, to something smaller; otherwise it won't exist. OK, so another thing: the log likelihood estimator. It is still the maximum likelihood estimator. We saw before that maximizing a function or maximizing the log of this function is the same thing, because the log function is increasing. It's the same with maximizing the exponential of the function, or 10 times the function--any increasing transformation; the function x maps to 10x is increasing. So why do we talk about the log likelihood rather than the likelihood?
The log likelihood is just the log of the likelihood, and the reason we use it is exactly this kind of situation. Remember, that was my likelihood, and I want to maximize it. It turns out that in stats, a lot of distributions look like the exponential of something, so I might as well remove the exponential by taking the log. And once I have something raised to a power, taking the log looks better for me too. I had another one somewhere, I think--where was the Poisson? The Poisson's gone. Same thing there: it had powers, so taking the log makes my life easier. So the log doesn't have any particular intrinsic meaning here, except that it's just more convenient. Now, that being said, if you think about the original formulation, it already had the log. If we come back to the KL thing--where is my KL? Sorry. That was maximizing the sum of the logs of the p theta of xi's, and we worked at it by saying that maximizing the sum of the logs was the same as maximizing the product. So the log likelihood is just going backwards in this chain of equivalences, back to our original estimation strategy--which was already convenient. Look at the Poisson: I want to take the log to bring the sum of the xi's down from the exponent. OK, so one thing you want to notice: the log of L of x1 through xn, theta, as we said, is equal to the sum for i equal 1 to n of the log of p theta of xi--that's the discrete case--and in the continuous case, the sum of the logs of f theta of xi. The beauty of this is that you don't really have to worry about the difference between probability mass function and probability density function to implement it: whatever you're given, that's what you plug in. Any questions so far?
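One practical reason the log is convenient, as a small sketch: for large n the raw likelihood is a product of many numbers below 1 and underflows to zero in floating point, while the log likelihood stays perfectly finite. (The underflow point is a standard numerical observation, not something on the slides.)

```python
import numpy as np

# Bernoulli likelihood p^(sum x) * (1-p)^(n - sum x) versus its log.
rng = np.random.default_rng(3)
x = rng.binomial(1, 0.6, size=5000)
p = 0.6

lik = p**x.sum() * (1 - p)**(len(x) - x.sum())
loglik = x.sum() * np.log(p) + (len(x) - x.sum()) * np.log(1 - p)
print(lik, loglik)   # lik prints 0.0 (underflow); loglik is about -3.4e3
```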
All right, so shall we do some computations and check that all the stuff we introduced--complicated functions, maximization, KL divergence, lots of things--lets us spit out, again, averages? That's great: we'll be able to sleep at night knowing there's a really powerful mechanism called maximum likelihood estimation that was driving our intuition without us knowing. OK, so let's do it: Bernoulli trials. I still have it over there; let me write it as (p over (1 minus p)) to the sum of the xi's, times (1 minus p) to the n. Now I want to maximize this as a function of p. The first thing we would want to do is check that this function is concave, and I'm just going to ask you to trust me on that; I only want to take the derivative and go home. Actually, no--the other form was slightly more convenient, I'm sorry: p to the sum of the xi's, times (1 minus p) to the (n minus the sum of the xi's). So let me take the log. What I get is: the sum of the xi's times log p, plus (n minus the sum of the xi's) times log(1 minus p). Now I take the derivative with respect to p and set it equal to zero. So what does that give me? It tells me that the sum of the xi's divided by p, minus (n minus the sum of the xi's) divided by (1 minus p), is equal to 0. So now I need to solve for p; let's just do it. What we get is: (1 minus p) times the sum of the xi's equals p times (n minus the sum of the xi's). Expanding, the p times the sum of the xi's cancels on both sides, and I'm left with p times n equals the sum of the xi's--and actually, I should start writing p hat from here on, because I'm already solving the equation. So p hat equals the sum of the xi's divided by n, which is my Xn bar. Poisson model--as I said, the Poisson is gone, so let me rewrite it quickly. The likelihood in x1 through xn and lambda was lambda to the sum of the xi's, times e to the minus n lambda, divided by x1 factorial all the way to xn factorial. So let me take the log likelihood. Let me get rid of the denominator first: minus the log of (x1 factorial times ... times xn factorial). That's a constant with respect to lambda, so when I take the derivative, it's going to go. Then I have plus the sum of the xi's times log lambda, and then minus n lambda. Now take the derivative and set it equal to zero: the partial with respect to lambda of log L equals zero. The constant goes; this guy gives me the sum of the xi's divided by lambda hat, equals n. And so lambda hat equals the sum of the xi's divided by n, which is Xn bar. Take the derivative, set it equal to zero, and just solve--a very satisfying exercise, especially when you get the average in the end; you don't have to think about it forever. OK, the Gaussian model I'm going to leave to you as an exercise. Take the log to get rid of the pesky exponential, then take the derivative--well actually, you need to take the gradient in this case--and you should be fine; it might be one more line than those two. Don't check the second derivative right now; you don't have to really think about it.
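As a sanity check on both derivations, here is a sketch that maximizes the two log likelihoods numerically--using a generic scalar optimizer, since in this course we have the closed form anyway--and compares with the sample mean:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

xb = rng.binomial(1, 0.35, size=1000)       # Bernoulli(p*), p* = 0.35
nll_b = lambda p: -(xb.sum() * np.log(p) + (len(xb) - xb.sum()) * np.log(1 - p))
p_hat = minimize_scalar(nll_b, bounds=(1e-6, 1 - 1e-6), method='bounded').x

xp = rng.poisson(2.5, size=1000)            # Poisson(lambda*), lambda* = 2.5
# Factorial term dropped: it does not affect the arg max.
nll_p = lambda lam: -(xp.sum() * np.log(lam) - len(xp) * lam)
lam_hat = minimize_scalar(nll_p, bounds=(1e-6, 20), method='bounded').x

print(p_hat, xb.mean())     # agree
print(lam_hat, xp.mean())   # agree
```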
What did I want to add? I think there was something I wanted to say. Yes. When I have a concave function on, say, the whole line, it's true that taking the derivative and setting it equal to zero will give me the maximum. But I might have a function that looks like this: on some finite interval--say between 0 and 1--my log likelihood, as a function of theta, just keeps increasing. Then there's no place in this interval where the derivative is equal to 0, and if you actually try to solve the equation, any solution you find will not be in the interval 0, 1. That's how you know you probably should not be setting the derivative equal to zero. So don't panic if you get something whose solution sits at infinity: if the function keeps going up, you'll end up with something like 1 over theta hat equals 0, which has no solution--when you find that kind of equation, it's a sign something is off at that point. And the reason is that for functions like this, you don't find the maximum by setting the derivative equal to zero. You find it by saying: well, it's an increasing function on the interval 0, 1, so the maximum must be attained at 1. So here, in this case, my estimator would be 1--the right endpoint. Typically, though, the boundary is a function of the xi's. One example that you will see many times is when this boundary is the maximum of the xi's, in which case the likelihood is maximized there, at the largest observation. OK, so just keep this in mind. What I would recommend: every time you're trying to maximize a function, try to plot the function in your head. It's not too complicated--those things are usually squares, or square roots, or logs; you know what those functions look like. Plot them in your mind and make sure that the maximum you find is where the function really goes up and then comes down again. If you don't see that, it means your maximum is achieved at the boundary, and you have to think differently to get it. So the machinery that consists of setting the derivative equal to zero works 80% of the time--but you have to be careful. And from the context, it will be clear when you have to be careful, because you will find some crazy stuff, such as solving 1 over theta hat equals zero.
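A sketch of the boundary case just described, using the classic Uniform(0, theta) example--my own choice for illustration, not a model worked out on the slides: the likelihood is theta^(-n) for theta at least max(x_i) and 0 below it, so it is decreasing wherever it is positive, and the arg max sits at the boundary, theta hat = max(x_i). No derivative is ever zero.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 3.0, size=50)
grid = np.linspace(0.1, 6.0, 2000)

# Likelihood of Uniform(0, theta): theta^(-n) if theta >= max(x), else 0.
lik = np.where(grid >= x.max(), grid**(-len(x)), 0.0)
print(grid[np.argmax(lik)], x.max())   # arg max essentially at the largest observation
```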
All right, so before we conclude, I want to give you some intuition about how the maximum likelihood estimator performs. There's something called the Fisher information that essentially controls this. The Fisher information is, essentially, a second derivative or a Hessian: in a one-dimensional parameter case it's a number, a second derivative; in the multidimensional case it's a Hessian, a matrix. As notation, I'm going to take little curly l of theta to be the log likelihood for one observation--call it X generically, but think of it as X1. I don't care about summing, because I'm going to take an expectation of this thing; it's not a data-driven quantity I'm playing with. Now, I'm going to assume it's twice differentiable almost surely--it's a random function--and I'm going to sweep under the rug some technical conditions about when these things hold: when can I permute integrals and derivatives, the kind of stuff you don't want to think about. The rule of thumb is, it always works until it doesn't, in which case you're probably solving some sort of calculus problem on purpose; in practice it just doesn't happen. So the Fisher information is the expectation of what's called the outer product--the product of this gradient with the gradient transposed, which forms a matrix--minus the outer product of the expectations. That's exactly the covariance matrix of the random vector nabla l of theta. And the technical conditions tell me that this quantity, which you compute using only first derivatives, is actually equal to the negative expectation of the Hessian--so I can get a quantity that depends on the second derivatives using only the gradient. The expectation plays a role here, and the fact that it's a log; lots of things actually show up. And in the one-dimensional case, the covariance matrix of a one-dimensional thing is just its variance: the variance of the first derivative is equal to the negative expectation of the second derivative. OK, we'll see that next time. But what I wanted to emphasize is why we care about this quantity--why it carries the name of Fisher, the founding father of modern statistics. It's because this quantity is critical. What does the second derivative of a function tell me at the maximum? It tells me how curved the function is. If the second derivative is zero, I'm basically flat; if it's very large in magnitude, I'm very curvy. And when I'm very curvy, I'm very robust to estimation error. Remember our estimation strategy, which consisted of replacing expectations by averages? If the function is extremely curvy, it can move a little bit and the maximum is not going to move much. And this formula--forget about the matrix version for a second--tells me that the curvature is basically the variance of the first derivative. The more the first derivative fluctuates, the more your arg max is going to move all over the place. So this quantity is controlling how flat your log likelihood is at its maximum: the flatter it is, the more sensitive the arg max is to fluctuations; the curvier it is, the less sensitive. And so a good model is one with a--large or small value of the Fisher information? I want it to be large. This is the curvature, right? The second derivative is negative, since we're concave, so with the negative sign the information is positive, and the larger it is, the curvier we are. Maybe I shouldn't go into the details, because I'm actually out of time, but spoiler alert: the asymptotic variance of the maximum likelihood estimator--the variance, basically, as n goes to infinity--is going to be 1 over this guy. So we want it to be large, because then the asymptotic variance is going to be very small. All right, so we're out of time; we'll see that next week. And I have your homework with me--I'll hand it back to you outside, so we can let the other class come in.
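As a coda to the Fisher information discussion, here is a sketch for the Bernoulli model, where everything can be checked by hand: the score l'(p) = X/p - (1-X)/(1-p) has variance 1/(p(1-p)), which a simulation reproduces.

```python
import numpy as np

# Fisher information of Bernoulli(p): I(p) = Var(l'(p)) = 1 / (p (1 - p)).
p = 0.3
rng = np.random.default_rng(6)
X = rng.binomial(1, p, size=2_000_000)

score = X / p - (1 - X) / (1 - p)        # derivative of the log likelihood
print(score.var(), 1 / (p * (1 - p)))    # both about 4.76
```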
MIT_18650_Statistics_for_Applications_Fall_2016
2_Introduction_to_Statistics_cont.txt
PHILIPPE RIGOLLET: --of our limiting distribution, which happened to be Gaussian. But if the central limit theorem told us that the limiting distribution of some average looked like a Poisson or an exponential, then in the same way we would have taken the quantiles of the exponential distribution. So let's go back to what we had. Generically, you have a set of observations X1 to Xn--remember, for the kiss example they were denoted R1 to Rn, because people were turning their heads to the right, but let's just say X1 to Xn. In this case I'm going to assume they're IID Bernoulli with [INAUDIBLE] p, and p is unknown, right? So what did we do from here? Well, we said p is the expectation of Xi, and actually we didn't even think about it too much. We said: if I need to estimate the proportion of people who turn their head to the right when they kiss, I'm just going to compute the average. So our p hat was just Xn bar, which is 1 over n times the sum for i equal 1 to n of the Xi. The average of the observations was our estimate. And then we wanted to build a confidence interval around this. So what we wanted to understand is how much this p hat fluctuates. It's a random variable--an average of random variables--so we want to know what its distribution is. And if we know the distribution, then we know where it fluctuates, what the expectation is, around which value it tends to concentrate, et cetera. And what the central limit theorem told us was: if I take square root of n times (Xn bar minus p)--centering by its mean--and then divide by the standard deviation, then this thing converges, as n goes to infinity--and we'll say a little bit more about what convergence in distribution means--to some standard normal random variable. So that was the central limit theorem. What it means is that when I think of this as a random variable, for n large enough it's going to look like a standard Gaussian, and so I understand its fluctuations perfectly: I know the probability of being in any given zone, I know that this center value is 0, I know a bunch of things. And then, in particular, what I was interested in was the probability that the absolute value of a Gaussian random variable exceeds q alpha over 2. We said this was equal to what? Anybody? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Alpha, right? By definition, q alpha over 2 is the number such that the area to the right of it is alpha over 2--and this here is negative q alpha over 2, by symmetry. So the probability of exceeding this value in absolute value is just the sum of the two gray areas.
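A quick sketch of the table lookup just described--reading q_{alpha/2} off the standard Gaussian:

```python
from scipy.stats import norm

# q_{alpha/2} is the number with area alpha/2 to its right, so that |Z|
# exceeds it with probability alpha.
alpha = 0.05
q = norm.ppf(1 - alpha / 2)
print(q)                          # about 1.96
print(2 * (1 - norm.cdf(q)))      # P(|Z| > q) = alpha = 0.05
```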
And then we just said, well, I'll solve for p. Has anyone attempted to solve the degree two equation for p in the homework? Everybody has tried it? So essentially, this is going to be an equation in p. Sometimes we don't want to solve it, and some of the p's we will replace by their worst possible value. For example, we said one of the tricks we had was that this value here, square root of p 1 minus p, was always less than one half, so that we could get a confidence interval that contains the one you would build for any possible value of p. But we could also just solve for p. Do we all agree on the principle of what we did? So that's how you build confidence intervals. Now let's step back for a second, and see what was important in the building of this confidence interval. The really key thing is that I didn't tell you why I formed this thing, right? We started from X bar, and then I took some weird function of X bar that depended on p and n. And the reason is, because when I take this function, the central limit theorem tells me that it converges to something that I know. But the very important thing about the something that I know is that it does not depend on anything that I don't know. For example, if I forgot to divide by square root of p 1 minus p, then this thing would have had a variance of p 1 minus p. If I didn't remove this p here, the mean would have been affected by p. And there's no table for a normal with mean p and variance 1. Yes? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Oh, where does the square root of n come from? So really you should view it this way. There's a sort of quiet rule in math that you don't write a divided by b over c, right? You write c times a divided by b, because it looks nicer. But the way you want to think about this is that this is X bar minus p, divided by the square root of p 1 minus p divided by n. And the reason is that this denominator is actually the standard deviation of this guy, X bar n; the square root of n comes from taking the standard deviation of an average of n observations. So the key thing was that this limiting distribution did not depend on anything I don't know. And this is actually called a pivotal distribution. It's pivotal. I don't need to know anything, and I can read it in a table. Sometimes there are going to be complicated things, but now we have computers. The beauty about Gaussians is that people have studied them to death, and you can open any stats textbook and you will see a table that tells you, for each value of alpha you're interested in, what q alpha over 2 is. But there might be some crazy distributions, and as long as they don't depend on anything unknown, we might be able to simulate from them, and in particular compute what q alpha over 2 is for any possible value of alpha. And so that's what we're going to be trying to do: finding pivotal distributions. How do we take this Xn bar, which is a good estimate, and turn it into something whose distribution, either exactly or asymptotically, does not depend on any unknown parameter? So that's what we did for the kiss example, right? And here I mentioned, for example, the extreme case: when n was equal to 3 we would get a different thing, because there the CLT would not be valid.
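If you do solve the degree two equation mentioned above rather than bound square root of p 1 minus p by one half, you get what is usually called the Wilson interval; here is a hedged sketch, with made-up numbers, comparing the two.

```python
import numpy as np
from scipy.stats import norm

# Sketch of solving the degree two equation in p versus the conservative
# 1/2 bound; the data values are illustrative.
n, x_bar, alpha = 124, 0.645, 0.05
q = norm.ppf(1 - alpha / 2)

# |sqrt(n)(x_bar - p)| <= q sqrt(p(1-p)) squares to a quadratic in p:
# (n + q^2) p^2 - (2 n x_bar + q^2) p + n x_bar^2 <= 0
a, b, c = n + q**2, -(2 * n * x_bar + q**2), n * x_bar**2
print(sorted(np.roots([a, b, c])))   # exact (Wilson) interval endpoints

half = q / (2 * np.sqrt(n))          # the sqrt(p(1-p)) <= 1/2 trick
print(x_bar - half, x_bar + half)    # conservative interval, a bit wider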
And what the failure of the CLT at n equals 3 means is that my pivotal distribution is actually not the normal distribution, but it might be something else. And I said we can make exact computations. Well, let's see what it is, right? If I have three observations, X1, X2, X3, I take their average. OK, so that's my estimate. How many values can this guy take? It's a little bit of counting. Four values. How did you get to that number? OK, so each of these guys can take the value 0 or 1, right? So to list the values the average can take is a little annoying, because I have to sum them. So basically, I have to count the number of 1's. So let's look at that. We get 0, 0, 0. Then 0, 0, 1, and basically three outcomes that have just one 1 in there, right? Then how many of them have exactly two 1's? Three again-- it's just the same outcomes where I swap the 0's and the 1's. And then one outcome where everything is a 1. So I get one outcome with sum 0, three with sum 1, three with sum 2, and one with sum 3. So the number of values this average can take is 1, 2, 3, 4. And those counts, you've probably seen them before, right? 1, 3, 3, 1, remember? And so essentially this average takes four possible values: 0, 1/3, 2/3, and 1-- which is probably much easier to count like that. And so now, all I have to tell you if I want to describe the distribution of this random variable is the probability that it takes each of these values: the probability that X bar 3 takes the value 0, the probability that X bar 3 takes the value 1/3, et cetera. If I give you each of these probabilities, then you will know exactly what the distribution is, and hopefully be able to turn it into something you can compute. Now the thing is that those probabilities will actually depend on the unknown p. What is the probability that X bar 3 is equal to 0, for example? I'm sorry? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, OK. So let's write it without making the computation. 1/8 is probably not the right answer, right? For example, if p is equal to 0, what is this probability? 1. If p is 1, what is this probability? 0. So it will depend on p. The probability that this thing is equal to 0 is just the probability that all three of those guys are equal to 0: the probability that X1 is equal to 0, and X2 is equal to 0, and X3 is equal to 0. Now my things are independent, so I can do what I actually want to do, which is to say that the probability of the intersection is the product of the probabilities, right? So it's just the probability that each of them is equal to 0, raised to the power 3. And the probability that one of them is equal to 0 is just 1 minus p. And then for the middle values it's a bit more complicated, because I have to decide which ones are the 1's. But those things are just the probabilities of a binomial random variable, right?
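Since 3 times X bar 3 is binomial, the whole distribution is a few lines of code; here is a small sketch (p = 0.5 is just an example value).

```python
from scipy.stats import binom

# Exact distribution of X_bar_3: 3 * X_bar_3 is Binomial(3, p), so
# X_bar_3 puts mass on 0, 1/3, 2/3, 1, with the 1, 3, 3, 1 counts above.
p = 0.5
for k in range(4):
    print(k / 3, binom.pmf(k, 3, p))   # e.g. P(X_bar_3 = 0) = (1 - p)^3
```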
So if I look at X bar 3 and multiply it by 3, it's just a sum of independent Bernoulli's with parameter p. So this is actually a binomial with parameters 3 and p. And there are tables for binomials that tell you all this. Now the thing is I want to invert this guy somehow. This thing depends on p. I don't like it, so I'm going to have to find ways to deal with this dependence on p, and I could make all these nasty computations and spend hours doing this. But there are tricks to go around this. There are upper bounds. Just like we said, maybe I don't want to solve the second degree equation in p, because it's just going to capture smaller order terms-- things that maybe won't make a huge difference numerically. You can check that in your problem set one: does it make a huge difference numerically to solve the second degree equation, or to just use the bound on p 1 minus p, or even to plug in p hat instead of p? The point of problem set one is to make sure that you see what magnitude of changes you get by switching from one method to the other. So what I wanted to get to is something where we can use a tool that is just a little more brute force. So here is Hoeffding's inequality. We saw that; that's what we finished on last time. Hoeffding's inequality is actually one of the most useful inequalities. If any one of you is doing anything related to algorithms, you've seen this inequality before. It's extremely convenient: it tells you something about bounded random variables, and in algorithms things are typically bounded. And that's the case of Bernoulli random variables, right? They're bounded between 0 and 1. What Hoeffding's inequality is telling me is, for any given epsilon, what is the probability that Xn bar goes away from its expectation by more than epsilon? And we saw that this probability decreases somewhat like the tail of a Gaussian. So essentially Hoeffding's inequality is telling me that I have this picture: when I have a Gaussian with mean mu, I know its density looks like a bell curve. What Hoeffding's inequality is telling me is that if I take the average of some bounded random variables, then its probability density function or probability mass function-- this thing might not even have a density, but let's think of it as a density just for simplicity-- is going to be something that stays below that Gaussian picture in the tails. Well, sometimes it's going to have to escape a little, just for the sake of having integral 1. But it's essentially telling me that the tails stay below the Gaussian's: the probability that Xn bar exceeds mu by some amount is bounded by something that decays like the tail of a Gaussian. So really that's the picture you should have in mind. When I average bounded random variables, I get something that might be really rugged. It might not be smooth like a Gaussian, but I know that its tails are always bounded by a Gaussian's. And what's nice about it is that when I start computing the probability of exceeding some number, say with tail mass alpha over 2, I can read off a quantity that plays the role of the Gaussian quantile: that one would be the q alpha over 2 for the Gaussian, and this one would be the-- I don't know, r alpha over 2, say, for this
Bernoulli random variable-- like a q prime, or a different q. And I can do this without taking any limits, right? This is valid for any n. I don't need to go to infinity. Now this seems a bit magical. We discussed last time that we wanted n to be larger than 30 for the central limit theorem to kick in, and this one seems to tell me I can do it for any n. But there will be a price to pay: I pick up this 2 over b minus a squared. That's sort of telling me what the variance of the bounding Gaussian is, and this is actually not as nice-- I pick up a factor 4 compared to the Gaussian that I would get from the CLT. So let's try to solve it for our case. So I just told you, try it. Did anybody try to do it? So we started from this last time, right? And the reason was that we could say that the probability that this thing exceeds q alpha over 2 is alpha. That was using the CLT, so let's just keep it here and see what we would do differently. What Hoeffding tells me is that the probability that Xn bar minus mu-- well, what is mu in this case? It's p, right? It's just notation; mu was the expectation, but we call it p in the case of Bernoulli's-- exceeds, let's just call it epsilon for a second, is bounded by what? Hoeffding tells me that this is bounded by 2 times exponential of minus 2-- and the nice thing is that I pick up a factor n here-- times epsilon squared. And what is b minus a squared for Bernoulli's? 1. So I don't have a denominator here. And I'm going to do exactly what I did before: I'm going to set this guy to be equal to alpha, so that, just solving for epsilon, I get some number which will play the role of q alpha over 2, and then I'll be able to say that p is between Xn bar minus epsilon and Xn bar plus epsilon. OK, so let's do it. We have to solve the equation 2 exponential of minus 2 n epsilon squared equals alpha. There's a 2 right here, which means I get alpha over 2 on the other side. Then I take logs on both sides and solve for epsilon. So epsilon is equal to the square root of log 2 over alpha, divided by 2n. Yes? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Why is b minus a equal to 1? Well, let's just look, right? X lives in the interval from a to b. I could take b to be 25 and a to be negative 42, but I'm going to try to be as sharp as I can. So what is the smallest interval in which a Bernoulli random variable lives? What values does a Bernoulli random variable take? 0 and 1. So it takes values between 0 and 1, and it actually attains both-- this is in fact the worst possible case for Hoeffding's inequality. So b minus a is 1, and now I have this thing. So solving this guy over there-- combining this thing and this thing-- implies that the probability that p lives between Xn bar minus square root of log 2 over alpha divided by 2n, and Xn bar plus square root of log 2 over alpha divided by 2n, is equal to-- I mean, is at least-- what is it at least equal to? This bound controls the probability of being outside of the interval, right? It tells me the probability that Xn bar is far from p by more than epsilon-- that's the probability of being outside of the interval that I just wrote-- so the probability of being inside is at least 1 minus alpha.
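Here is the resulting Hoeffding interval as code, again on made-up numbers; note that nothing here appeals to a limit theorem, so it is valid at every sample size.

```python
import numpy as np

# Sketch of the Hoeffding interval just derived: solve
# 2 exp(-2 n eps^2) = alpha for eps (Bernoulli case, so b - a = 1).
n, x_bar, alpha = 124, 0.645, 0.05     # made-up data
eps = np.sqrt(np.log(2 / alpha) / (2 * n))
print(x_bar - eps, x_bar + eps)        # valid for every n, no CLT needed
```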
So we just used the fact that the probability of the complement is 1 minus the probability of the set, and since I have an upper bound on the probability of the set, I have a lower bound on the probability of the complement. So now it's a bit different from before. Let me get back what we had. If we go back to the example where we took the worst case over p, we got Xn bar plus or minus q alpha over 2, divided by 2 square root of n. And now we have something that replaces this q alpha over 2, and it's essentially the square root of 2 log 2 over alpha. Because if I replace q alpha over 2 by the square root of 2 log 2 over alpha, I get exactly this thing here. And so the question is, what would you guess? Is this margin, square root of log 2 over alpha divided by 2n, smaller or larger than the other one, q alpha over 2 divided by 2 root n? Yes? Larger. Everybody agrees with this? Just qualitatively? Right, because we just made a very conservative statement. We did not use anything; this is true always, so the CLT interval can only be better. The reason in statistics why you use those assumptions-- that n is large enough, that you have this independence that you like so much, so that the central limit theorem can kick in-- is that all these things give you enough assumptions to make sharper and sharper decisions, more and more confident statements. And that's why there's all this junk science out there: because people make too many assumptions for their own good. They're saying, well, let's assume that everything is the way I love it, so that for sure, at any minor change, I will be able to say that's because I made an important scientific discovery, rather than, well, that was just noise. OK? So now here's the fun moment. And let me tell you why we look at this. Who has seen different types of convergence in a probability or statistics class? [INAUDIBLE] students. So there are different types of convergence. For real numbers it's very simple: there's one convergence, Xn tends to X. Once you start thinking about functions, maybe you have uniform convergence, you have pointwise convergence. So if you've done some real analysis, you know there are different types of convergence you can think of. And in the convergence of random variables, there are also different types, but for different reasons. The question is, what do you do with the randomness? When you say that something converges to something, it probably means that you're willing to tolerate low probability events on which it doesn't happen, and how you handle those creates the different types of convergence. So to be fair, in statistics the only convergence we really care about is convergence in distribution. That's this one-- the one that comes from the central limit theorem. And it's actually the weakest possible notion you could come up with. Which is good, because that means it's going to happen more often. And why is it enough for us? Because the only thing we really need is to say that when I compute probabilities on this random variable, they're going to look like probabilities on that random variable.
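To put numbers on the comparison just made between the Hoeffding margin and the Gaussian quantile, here is a quick check: compare sqrt(2 log(2/alpha)), the quantity playing the role of the quantile in the Hoeffding bound, with q alpha over 2 itself.

```python
import numpy as np
from scipy.stats import norm

# Numeric check that the Hoeffding margin is wider.
for alpha in (0.10, 0.05, 0.01):
    print(alpha, np.sqrt(2 * np.log(2 / alpha)), norm.ppf(1 - alpha / 2))
# At alpha = 0.05: about 2.72 versus 1.96 -- conservative, hence wider.
```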
For example, think of the following two random variables, X and minus X. So the first is some random variable, and the second one is its negative. When I look at those two random variables, think of them as constant sequences. These two constant sequences do not go to the same limit, right? One is X, the other one is minus X. So unless X is the random variable always equal to 0, those two things are different. However, when I compute probabilities on this guy and probabilities on that guy, they're the same, because X and minus X have the same distribution, just by symmetry of the Gaussian random variable. And so you can see this is very weak. I'm not saying anything about the two random variables being close to each other every time I flip my coin, right? Maybe I'm going to press a key on my computer and ask, what is X? Well, it's 1.2. Then negative X is going to be negative 1.2. Those things are far apart, and it doesn't matter, because on average those things have the same probabilities of happening. And that's all we care about in statistics. You need to realize that this is what's important-- and you have it really good if all you need is convergence in distribution, rather than, say, convergence almost surely, which is probably the strongest you can think of. So we're going to talk about different types of convergence, and not just to reflect on the fact that our life is good. The problem is that convergence in distribution is so weak that I cannot do everything I want with it. In particular, I cannot say that if Xn converges in distribution, and Yn converges in distribution, then Xn plus Yn converges in distribution to the sum of their limits. I cannot do that. It's just too weak, and that's preventing you from doing quite a lot of things. Think of this example: X converges in distribution to some N(0, 1). Minus X converges in distribution to some N(0, 1). But their sum is 0, and that certainly does not look like the sum of two independent Gaussian random variables, right? And so what we need are stronger conditions here and there, so that we can actually put things together. And we're going to have more complicated formulas. One of them, for example, is if I replace p by p hat in this denominator-- we mentioned doing this at some point. I would need that p hat goes to p, but I need something stronger than convergence in distribution for this to work. I actually need the convergence to happen in a stronger sense. So here are the first two, the strongest senses in which random variables can converge. The first one is almost surely. And who has already seen this notation, little omega, when talking about random variables? All right, so very few. So what is a random variable? A random variable is something that you measure on something that's random. The example I like to think of is, if you take a ball of snow and put it in the sun for some time, you come back, and it's going to have a random shape, right? It's going to be a random blob of something. But there's still a bunch of things you can measure on it. You can measure its volume. You can measure its inner temperature. You can measure its surface area. All these things are random variables, but the ball itself is omega. That's the thing on which you make your measurements. And so a random variable is just a function of those omegas.
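The X versus minus X point is easy to see in a small simulation; here is a sketch.

```python
import numpy as np

# X and -X: identical distributions, yet the two random variables are
# far apart realization by realization.
rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)

# Probabilities computed on X and on -X agree:
print((x <= 1.0).mean(), (-x <= 1.0).mean())   # both ~ Phi(1) = 0.841

# But X - (-X) = 2X does not go to zero at all:
print(np.abs(x - (-x)).mean())                 # ~ 2 E|X|, about 1.6
```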
Now why do we make all these things fancy? Because you cannot take just any function. This function has to be what's called measurable, and there are entire courses on measure theory-- not everything is measurable. And that's why you have to be a little careful: you need some nice properties, so that, for example, the measure of the union of two things is less than the sum of the measures, things like that. And so almost sure convergence is telling you something about most of the balls, most of the omegas-- that's the right-hand side. The probability of the set of omegas such that those things converge to each other is actually equal to 1. So it tells me that if I put together all the omegas for which the convergence holds, I get something that has probability 1. There might be other omegas, with total probability 0, where it fails, but it's telling me this thing happens for essentially all possible realizations of the underlying randomness. That's very strong. It essentially says randomness does not matter, because it's happening always. Now convergence in probability allows you to squeeze a little bit of probability under the rug. It tells you, I want the convergence to hold, but I'm willing to let go of some little epsilon: I'm willing to allow Tn minus T to be larger than epsilon, as long as the probability of that goes to 0 as n goes to infinity. But for each n this probability does not have to be 0, which is different from the almost sure case. So it's a little weaker, and a slightly different notion. I'm not going to ask you to show that one is weaker than the other; just know that these are two different types, and this one is actually much easier to check than that one. Then there's something called convergence in Lp. This one embodies the following fact. Suppose I give you a sequence of random variables with mean 0, and I tell you that their variance is going to 0-- think of Gaussian random variables with mean 0 and a variance that shrinks to 0. Then this sequence converges to a spike at 0, so it converges to 0, right? And what I mean by that is, to get this convergence, all I had to tell you was that the variance was going to 0. In L2 this is really what it's telling you: if you look at the expectation of the squared distance between Tn and T, and it goes to 0, you have convergence in L2. But you don't have to limit yourself to the square. You can take the power 3. You can take the power 67.6, the power 9 pi. You can take whatever power you want, it can even be fractional-- it just has to be at least 1-- and that's convergence in Lp. But we mostly care about integer p. And then here's our star, convergence in distribution, and that's just the one that tells you that when I compute probabilities on Tn, they're going to look very close to the probabilities on T. So Tn was this guy over there, for example, and T was the standard Gaussian distribution. Now here, this is not any probability. This is just the probability of being less than or equal to x. But if you remember your probability class, if you can compute those probabilities, you can compute any probabilities, just by subtracting and building things together.
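The shrinking-variance remark above is easy to check numerically; here is a small sketch with Tn distributed as N(0, 1/n).

```python
import numpy as np

# Tn ~ N(0, 1/n) converges to the spike at 0: the second moment alone
# certifies L2 convergence, and the tail probability shows convergence
# in probability as well.
rng = np.random.default_rng(2)
for n in (10, 100, 10_000):
    t_n = rng.normal(0.0, 1 / np.sqrt(n), size=100_000)
    print(n, (t_n**2).mean(),            # E[Tn^2] = 1/n -> 0  (L2)
          (np.abs(t_n) > 0.1).mean())    # P(|Tn| > 0.1) -> 0  (probability)
```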
Now, I need this CDF convergence for all x's-- you fix x, and then you let n go to infinity-- and I want it at the points x at which the cumulative distribution function of T is continuous. There might be jumps, and at those points I actually don't care. All right, so here I stated it for random variables. If you're interested, there are also random vectors. A random vector is just a table of random variables. You can talk about random matrices, and you can talk about random whatever you want: every time you have an object that's just collecting real numbers, you can plug random variables in there. And all these definitions extend-- where you see an absolute value, we'll see a norm, things like that. So I'm sure this might look a little scary, but really what we are going to use is only the last one, which, as you can see, is just telling you that the probabilities converge to the probabilities. But I'm going to need the other ones every once in a while. And here is the reason-- OK, so here are the important characterizations of convergence in distribution, our star convergence. Tn converges in distribution if and only if, for any function f that's continuous and bounded, the expectation of f of Tn converges to the expectation of f of T. So those two things are equivalent. Sometimes it's easier to check one, sometimes the other, but in this class you won't have to prove that something converges in distribution other than by combining our existing convergence results. And then the last one, which is equivalent to the above two-- does anybody know the name of this quantity, this expectation here? What is it called? The characteristic function, right? And this i is the complex i, the imaginary unit. So it's essentially telling me that, rather than looking at all bounded continuous real functions, I can look at one specific family of complex functions: the functions that map T to e to the ixT, for x in R. That's a much smaller family-- the set of all bounded continuous functions has many more elements than just this family-- and one can show that limiting yourself to it is actually sufficient. So those three characterizations are used all over the literature to show things. In particular, if you're interested in digging a little deeper mathematically: the central limit theorem is so important that maybe you want to read about how to prove it. We're not going to prove it in this class. There are probably at least five different ways of proving it, but the most canonical one, the one that you find in textbooks, is the one that uses the third characterization. You just look at the characteristic function of square root of n times Xn bar minus, say, mu, you expand the thing, and this is what you get. And you will see that in the end, you get the characteristic function of a Gaussian. Why a Gaussian? Why does it kick in? Well, because what is the characteristic function of a Gaussian? Does anybody remember the characteristic function of a standard Gaussian? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, well, I mean, there are two pi's and stuff that go away, right? A Gaussian is a random variable; a characteristic function is a function, and so it's not really itself. It looks like itself.
Does anybody know what the actual formula is? Yeah. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: E to the minus? AUDIENCE: E to the minus x squared over 2. PHILIPPE RIGOLLET: Exactly. E to the minus x squared over 2. And this x squared over 2 is essentially just the second order term in a Taylor expansion. That's why the Gaussian is so important: it's just the second order Taylor expansion. You can check it out-- I think Terry Tao has some stuff on his blog, and there are a bunch of different proofs. But if you want to prove convergence in distribution, you are very likely going to use one of these three characterizations. So let's move on. This is what I meant when I said that one convergence is weaker than another: if you have convergence in one style, it implies convergence in the other style. So the first statement is that if Tn converges almost surely-- this a.s. means almost surely-- then it also converges in probability, and the two limits, which are this random variable T, are equal almost surely. Basically what it means is that whatever you measure on one is the same as what you measure on the other. So convergence almost surely is stronger than convergence in probability. If you converge in Lp, then you also converge in Lq for q less than p. So if you converge in L2, you also converge in L1. If you converge in L67, you converge in L2. If you converge in L infinity, you converge in Lp for any p. And again, the limits are equal. And when you converge in probability, you also converge in distribution. OK, so almost surely implies probability. Lp implies probability. Probability implies distribution. And here note that I did not write "and the limits are equal almost surely." Why? Because convergence in distribution is actually not telling you that your random variable converges to another random variable. It's telling you that the distribution of your random variable converges to a distribution. And think of our example, X and minus X. The central limit theorem tells me that I'm converging to some standard Gaussian distribution, but am I converging to X or am I converging to minus X? It's not well identified. It's any random variable that has this distribution, so there's no way the limits are equal. Their distributions are the same, but they're not the same limit. Is that clear for everyone? So in a way, convergence in distribution is really not a convergence of a random variable towards another random variable. It's just telling you the limiting distribution of your random variable, which is enough for us. And one thing that's actually really nice is the continuous mapping theorem. This is one of the theorems that we like, because it tells us you can do what you feel like doing: if Tn goes to T, then f of Tn goes to f of T, and this is true for any of those convergences except Lp. But f has to be continuous, otherwise weird stuff can happen. This is going to be convenient, because here I don't have just Xn bar minus p; I have a continuous function of it-- here basically a linear function of Xn bar minus p-- but I could think of even crazier stuff to do, and it would still be true. If I took the square, it would converge to something distributed as a squared Gaussian.
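Here is a sketch of the continuous mapping idea on exactly that squared example: if Zn converges in distribution to a standard Gaussian, then Zn squared should look like a squared Gaussian, that is, chi-squared with 1 degree of freedom.

```python
import numpy as np
from scipy.stats import chi2

# Continuous mapping: Zn -> N(0,1) in distribution implies
# Zn^2 -> chi-squared(1), the distribution of a squared Gaussian.
rng = np.random.default_rng(3)
n, p, reps = 200, 0.3, 100_000
x_bar = rng.binomial(n, p, size=reps) / n             # Bernoulli averages
z = np.sqrt(n) * (x_bar - p) / np.sqrt(p * (1 - p))   # ~ N(0,1) by the CLT

print((z**2 <= 1.0).mean())   # empirical P(Zn^2 <= 1)
print(chi2.cdf(1.0, df=1))    # ~ 0.6827, the chi-squared(1) value
```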
So these two slides are a mouthful-- actually this particular slide is a mouthful. What I have had in my head since I was pretty much where you're sitting is this diagram. It's deliberately cropped, so you can start from any Lq you want, as large as you want. And then as you decrease the index, you keep implying, implying, implying, until you imply convergence in probability. Convergence almost surely implies convergence in probability, and everything flows to the sink, which is convergence in distribution. So everything implies convergence in distribution. Rather than remembering those formulas, this diagram is really what you want to remember. All right, so why do we bother learning about these things? Because of limits and operations-- operations on limits. If I have sequences of real numbers, and I know that Xn converges to X and Yn converges to Y, then I can start doing all my manipulations and things are happy. I can add stuff. I can multiply stuff. But it's not always true for convergence in distribution. What's nice is that it is true for convergence almost surely: for convergence almost surely, everything is true; it's just impossible to make it fail. And convergence in probability does not give you everything, but at least you can add stuff and multiply stuff, and it will still give you the sum of the limits and the product of the limits. You can even take the ratio if V is not 0, of course-- if the limit is not 0, and Vn is not 0 as well. You can actually prove this last statement, right? Because it's a combination of the first statement, the second one, and the continuous mapping theorem: the function that maps x to 1 over x is continuous everywhere but at 0, so 1 over Vn converges to 1 over V, and then I can multiply the two things. So you actually knew that one. But really this is not what matters, because this is something that you will do whatever happens: if I don't tell you you cannot do it, well, you will do it. The point is that in general those things do not apply to convergence in distribution, unless the pair itself is known to converge in distribution. Remember when I said that these things apply to vectors: then you need the vector to converge in distribution to the limiting vector. And since the cumulative distribution function is not defined for vectors, I would have to use one of the other criteria-- convergence of characteristic functions, or convergence of expectations of bounded continuous functions of the random vector. Criterion two or criterion three, but criterion one is not going to get you anywhere. But this is something that's going to be too hard for us to deal with, so we're going to rely on something that's even better. There's something waiting for us at the end of this lecture, called Slutsky's theorem, that says that if V converges in probability and U converges in distribution, I can still combine them. I don't need both of them to converge in probability; I only need one of them to converge in probability to make this statement-- but to a constant. So let's go to another example. I just want to make sure that we keep on doing statistics. Every time we do just a little too much probability, I'm going to release the pressure and start doing statistics again.
All right, so assume you observe the inter-arrival times of the T at Kendall. So these are not the arrival times-- not 7:56, 8:15. No, it's really the inter-arrival times: say the next T arrives in six minutes. So you have this sequence of inter-arrival times, say 3, 4, 5, 4, 3, et cetera. I'm going to observe this, and I'm going to try to infer the rate of T's going out of the station from it. So I'm going to assume that these things are mutually independent. That's probably not completely true. What it would mean is that two consecutive inter-arrival times are independent. Again, this independence assumption is for us to be happy and safe. Unless someone comes with overwhelming proof that it's not just dependent but far from independent, you're fine; in that case, yes, you have a problem. And it might be the case that, if an inter-arrival time is one hour, then for the next T, either they fixed the problem, and it's going to be just 30 seconds behind, or they haven't fixed it, and it's going to be another hour behind. So they're not exactly independent, but they approximately are when things work well. And so now I need to model a random variable that's positive, and maybe not upper bounded-- I mean, people complain enough that this thing can be really large. And one thing that people like for inter-arrival times is the exponential distribution. That's a positive random variable, whose density looks like a decaying exponential on the positive line. It decays very fast towards 0: the probability of very large values is exponentially small, and there's a parameter lambda that controls the rate-- the density decays like exponential of minus lambda times something. And so we're going to assume the Ti's all have the same distribution. They're IID: independent, and identically distributed, all exponential with parameter lambda, and I'm going to try to learn something about lambda. What is an estimate of lambda, and can I build a confidence interval for lambda? So we observe n inter-arrival times. As I said, the mutual independence is plausible, but not completely justified. The fact that they're exponential is something that people like in what's called queuing theory. Exponentials arise a lot when you talk about inter-arrival times. Maybe not so much for the subway, but where it's very important is call centers and servers, where tasks come in and people want to know how long it's going to take to serve a task. When I call a center, nobody knows how long I'm going to stay on the phone with this person. But it turns out that, empirically, exponential distributions have been very good at modeling this. And one reason is this memoryless property. It's kind of crazy if you think about it. What does that statement say? Let's parse it. It's a probability conditioned on the fact that T1 is larger than t. So T1 is, say, the first inter-arrival time. Conditionally on the fact that I've been waiting for the first subway for more than t minutes-- so I've been there t minutes already-- what is the probability that I wait for s more minutes?
So that's the probability that T1 is larger than the time we've already waited, t, plus s. Given that I've been waiting for t minutes, the probability that I wait s more minutes is actually the probability that I wait for s minutes total. It's completely memoryless. It doesn't remember how long you have been waiting; the probability does not change. You can have waited for two hours-- the probability that it takes another 10 minutes is the same as if you had been waiting for zero minutes. And that's something that's actually part of your problem set. Very easy to compute: it's just an analytical property. You manipulate functions, and you see that this thing just happens to be true, and that's something people like, because it's convenient. Another thing we like is that this random variable is positive almost surely, which is good when you model arrival times. To be fair, we're not always going to be that careful, because sometimes we are just going to assume that something follows a normal distribution. For instance-- I don't know if we're going to go into those details-- a good thing that you can model with a Gaussian distribution is heights of students. But technically, with positive probability, you can have a negative Gaussian random variable, right? That probability is maybe 10 to the minus 25, but it's positive. Still, it's good enough for our modeling. So positivity is nice, but it's not going to be required: when you're modeling positive random variables, you don't always have to use distributions that are supported on the positive numbers. You can use distributions like the Gaussian. So T1 to Tn have exponential distributions with the same parameter, and that means that on average they have the same inter-arrival time. Saying they're identically distributed means I'm in some sort of stationary regime, and that's not always true: at rush hour and at 11:00 PM, clearly those average inter-arrival times are going to be different. So it means that I am really focusing on a shorter period of time, maybe rush hour. Sorry, I said the expectation is lambda-- it's actually 1 over lambda. I always mix up the two. All right, so you have the density of T1: f of t is lambda e to the minus lambda t, on the positive real line-- whether you take strictly positive or larger than or equal to 0 doesn't make any difference. The lambda in front just ensures that when I integrate this function between 0 and infinity, I get 1. And you can see it decays like exponential of minus lambda t. So if I were to draw it: at 0, what value does it take? Lambda. And then it decays like exponential of minus lambda t. So very small probability of being very large. Of course, it depends on lambda. Now you can compute the expectation of this thing, right? You integrate t times f of t. This is part of the little sheet that I gave you last time-- one of the things you should be able to do blindfolded. And what comes out is that the expectation of T1 is 1 over lambda. So, as I actually tell many of my students, 99% of statistics is replacing expectations by averages.
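The memoryless property described above is also easy to check by simulation; here is a sketch, where the values of lambda, t, and s are arbitrary.

```python
import numpy as np

# Memoryless property: P(T > t + s | T > t) should equal
# P(T > s) = e^{-lambda s}, whatever t is.
rng = np.random.default_rng(4)
lam, t, s = 0.5, 2.0, 3.0
T = rng.exponential(1 / lam, size=1_000_000)   # numpy parametrizes by the mean 1/lambda

waited = T[T > t]                    # condition on having already waited t minutes
print((waited > t + s).mean())       # P(T > t + s | T > t)
print((T > s).mean())                # P(T > s)
print(np.exp(-lam * s))              # exact value, about 0.223
```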
So what you're tempted to do is say: if on average I'm supposed to see 1 over lambda, and I have, say, 15 observations, I'm just going to average those observations, and I'm going to see something that should be close to 1 over lambda. So statistics is about replacing expectations with averages, and that's what we do. So Tn bar here, which is the average of the Ti's, is a pretty good estimator for 1 over lambda. And if I want an estimate for lambda, I need to take 1 over Tn bar. So here is one estimator. I did it without much principle, except that I wanted to replace expectations by averages, and then I fixed the problem that I was actually estimating 1 over lambda rather than lambda. You could come up with other estimators, but let's say this is my way of getting to this one-- just like I didn't give you any principled way of getting p hat, which was Xn bar in the kiss example. It's the natural way to do it. Is anybody completely shocked by this approach? All right, so let's do this. What can I say about the properties of this estimator lambda hat? Well, I know that Tn bar goes to 1 over lambda by the law of large numbers. It's an average; it converges to the expectation both almost surely and in probability. The first one is the strong law of large numbers, the second one is the weak law of large numbers. I can apply the strong one-- I have enough conditions. And hence, what do I apply so that 1 over Tn bar actually goes to lambda? So I said hence. What is hence? What is it based on? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, the continuous mapping theorem, right? So I have this function 1 over x, and I just apply it. If it were 1 over lambda squared, the same thing would happen, just because the function 1 over x is continuous away from 0. And now the central limit theorem is also telling me something about Tn bar. It's telling me that if I look at my average minus its expectation, so Tn bar minus 1 over lambda, rescaled by square root of n, then this thing converges to some Gaussian random variable. But here I have this lambda to the negative 2, and that's because I did not tell you what you get if you compute the variance. So from this you can probably extract it. If I have a random variable T that follows an exponential distribution with parameter lambda, we know that the expectation of T is 1 over lambda. What is the variance of T? You should be able to read it off: 1 over lambda squared. That's what you read in the variance slot of the limit, because in the central limit theorem the only things you plug in are the expectation and the variance. If you look at the expectation of the quantity over there, it's 1 over lambda minus 1 over lambda-- that's why you read the 0. And if you look at the variance, you get n times the variance of the average; the variance of the average picks up a factor 1 over n, so the n cancels, and I'm left with just the variance of one observation, which is 1 over lambda squared. OK, we're not going to do that in detail, because, again, this is a pure calculus exercise: if you compute the integral from 0 to infinity of lambda e to the minus lambda t, times t minus 1 over lambda, squared, dt, you will see that this thing is 1 over lambda squared. How would I do this? Integration by parts, or you already know it. All right. So this is what the central limit theorem tells me.
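Here is a numeric check of that calculus exercise, plus the plug-in estimator itself; lambda = 0.5 is an example value.

```python
import numpy as np

# For T ~ Exp(lambda): E[T] = 1/lambda and Var(T) = 1/lambda^2.
rng = np.random.default_rng(5)
lam = 0.5
T = rng.exponential(1 / lam, size=1_000_000)   # numpy uses the mean 1/lambda

print(T.mean(), 1 / lam)      # both ~ 2.0
print(T.var(), 1 / lam**2)    # both ~ 4.0
print(1 / T.mean())           # the plug-in estimator of lambda, ~ 0.5
```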
So if I solve this and plug in-- multiply by lambda and solve-- it would give me a confidence interval for 1 over lambda. If we just think of 1 over lambda as playing the role of the p that I had before, this would give me a confidence interval for 1 over lambda. I'm hiding a little bit under the rug here, and I see some of you are uncomfortable with this, so let's just go through it. What we've just proved by the central limit theorem is that the probability that the absolute value of square root of n times Tn bar minus 1 over lambda, divided by the standard deviation, exceeds q alpha over 2 is approximately equal to alpha-- and by approximately equal I mean as n goes to infinity. Sorry, I did not write it correctly at first: I still have to divide by the square root of 1 over lambda squared, which is the standard deviation, right? And we said that this is a bit ugly, so let's just do it the way it should be done: multiply everything by lambda. So that means that, with probability 1 minus alpha asymptotically, the absolute value of square root of n times lambda Tn bar minus 1 is less than or equal to q alpha over 2. So what it means is that I have negative q alpha over 2, less than square root of n times lambda Tn bar minus 1, less than q alpha over 2. Let me divide by square root of n. And so what I get is that lambda is between 1 minus q alpha over 2 divided by root n, the whole thing divided by Tn bar, and 1 plus q alpha over 2 divided by root n, the whole thing divided by Tn bar. So it's kind of a weird shape, but it's still of the form 1 over Tn bar plus or minus something. But this something depends on Tn bar itself. And that's actually natural, because Tn bar is not only giving me information about the mean; it's also giving me information about the variance. So it should definitely enter the size of my error bars, and it comes in in this fairly natural way. Everybody agrees? So now I have actually built a confidence interval. But what I want to show you with this example is: can I translate this into a central limit theorem for something that converges to lambda? I know that Tn bar converges to 1 over lambda, but I also know that 1 over Tn bar converges to lambda. So do I have a central limit theorem for 1 over Tn bar? Technically no, right? Central limit theorems are about averages, and 1 over an average is not an average. But there's something that statisticians like a lot, called the Delta method. The Delta method tells you that you can take a function of an average and let it go to the function of the limit, and you still have a central limit theorem. And the price to pay is a factor that depends on the derivative of the function. So let's just go through this. Like the proof of the central limit theorem-- and actually like many of these asymptotic statistics results-- it's just a Taylor expansion, and here it's not even second order, it's first order, all right? So I'm just going to do a linear approximation of this function. So let's do it. So I have g of Tn bar-- actually, let's use the notation of the slide, which is Zn and theta. What I know is that square root of n times Zn minus theta goes to some Gaussian, this standard Gaussian.
No, not standard. OK, so that's the assumption. And what I want to show is some convergence of g of Zn to g of theta. So I'm not going to multiply by root n just yet. I'm going to do a first order Taylor expansion. What it tells me is that g of Zn minus g of theta is equal to Zn minus theta, times g prime of-- let's call it theta bar-- for some theta bar that is somewhere between, say, Zn and theta. If theta is less than Zn, you just permute the two. So that's what the first order Taylor expansion tells me: there exists a theta bar between the two values at which I'm expanding such that those two things are equal. Is everybody shocked? No? So that's a standard Taylor expansion. Now I'm going to multiply by root n. And so that's going to be what? Root n times Zn minus theta-- ah-ha, that's something I like-- times g prime of theta bar. Now the central limit theorem tells me that the first factor goes to what? Well, this goes to some N(0, sigma squared), right? That was the first line over there. This guy here, g prime of theta bar-- well, it's not clear, right? Actually it is. Let's start with theta bar. What does theta bar go to? Well, I know that Zn is going to theta-- that's my law of large numbers-- which means that theta bar is sandwiched between two values that converge to theta. So theta bar converges to theta itself as n goes to infinity. Everybody agrees? Just because it's sandwiched, right? So I have Zn here, theta there, and theta bar somewhere in between. The picture might be reversed-- it might be that Zn is larger than theta. But the law of large numbers tells me that this guy is not moving, and this guy is moving towards it. So when n is large, there's very little wiggle room for theta bar, and it can only go to theta. And I call it the sandwich theorem-- or just find your favorite food to put in there. So theta bar goes to theta, and now I need to make an extra assumption, which is that g prime is continuous. If g prime is continuous, then g prime of theta bar goes to g prime of theta. But I have an issue here: now I have something that converges in distribution, times something that converges almost surely-- or say in probability, just to be safe. And I want to combine them. But I don't have a slide that tells me I'm allowed to take the product of something that converges in distribution and something that converges in probability. This does not exist. Actually, if anything, the slide told me: do not do anything with things that converge in distribution. And that gets us to-- OK, I'll come back to this in a second-- that gets us to something called Slutsky's theorem. And Slutsky's theorem tells us that in one very specific case, you can do just that. So you have two sequences of random variables: Xn that converges to X, and Yn that converges to Y, but Y is not just anything. Sorry, I forgot to mention, this is very important: Xn converges in distribution, Yn converges in probability. We know that in full generality we cannot combine these two things, but Slutsky tells us that if the limit of Yn is a constant-- meaning it's not a random variable but a deterministic number c, just a fixed number-- then you can combine them. Then you can sum them, and you can multiply them.
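Here is a simulation sketch of Slutsky's theorem in exactly the form we need: a product of something converging in distribution with something converging in probability to a constant (the constant 2 is arbitrary).

```python
import numpy as np

# Slutsky: Xn -> N(0,1) in distribution (by the CLT), Yn -> 2 in
# probability, so the product Xn * Yn -> N(0, 2^2) in distribution.
rng = np.random.default_rng(7)
n, reps = 1_000, 100_000
x_bar = rng.binomial(n, 0.5, size=reps) / n          # Bernoulli(0.5) averages
Xn = np.sqrt(n) * (x_bar - 0.5) / 0.5                # standardized: ~ N(0,1)
Yn = 2 + rng.standard_normal(reps) / np.sqrt(n)      # a constant plus vanishing noise

print((Xn * Yn).std())   # ~ 2, the standard deviation of N(0, c^2) with c = 2
```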
Actually you can do whatever combination you want, because Slutsky implies that the vector Xn, Yn converges to the vector X, c. OK, so here I just took the two combinations that are most convenient for us, the sum and the product; I could do other stuff, like the ratio if c is not 0, things like that. So that's what Slutsky does for us. So what you're going to have to write a lot in your homework and in your midterms is "by Slutsky." I know some people are very generous with their "by Slutsky." They just do numerical applications: mu is equal to 6, and therefore by Slutsky mu squared is equal to 36. All right, so don't do that. Write Slutsky when you're actually using Slutsky. But this is something that's very important for us, and you're going to feel like you can write "by Slutsky" all the time, because it's going to work for us all the time. Everything we're going to see involves combining stuff, and since we only rely on convergence in distribution arising from the central limit theorem, we need something that allows us to combine, and the only thing we have is Slutsky. So we'd better hope that this thing works. So why does Slutsky work for us? Can somebody tell me why Slutsky works to combine those two guys? This one is converging in distribution. This one is converging in probability, but to a deterministic number: g prime of theta is a deterministic number. I don't know what theta is, but it's certainly deterministic. All right, so I can combine them, multiply them-- that's just the second line of the theorem, in particular. All right, everybody is with me? So now I'm allowed to do this. You will see some counterexample-type questions in your problem set, just so that you can convince yourself. It's always a good thing. I don't like to just give them, because I think it's much better for you to come to the counterexample yourself-- like, what can go wrong if c is not a constant but a random variable? You can figure that out. All right, so let's go back. We now have this Delta method that gives us a central limit theorem for functions of averages, and not just for averages. The only price to pay is the derivative that shows up there. For example, if g is a linear function, then I'm going to pick up a constant multiplier. If g is a quadratic function, then I'm going to have a theta squared that shows up there. Things like that. So just think of what kind of applications you could have for this. Here the function we're interested in is x maps to 1 over x. What is the derivative of this guy? What is the derivative of 1 over x? Negative 1 over x squared, right? That's the thing we're going to have to put in there. And so this is what we get. So now I want square root of n times lambda hat minus lambda. That's my application, right? Lambda hat is 1 over Tn bar, and lambda is 1 over 1 over lambda. So the function g of x is 1 over x in this case. And remember, we knew that square root of n times Tn bar minus 1 over lambda was going to some normal with mean 0 and variance 1 over lambda squared, right? So the sigma squared over there is 1 over lambda squared. So now this thing goes to what? Some normal. What is going to be the mean? 0. And what is the variance?
So for the variance I'm going to pick up this guy, 1 over lambda squared, and then I'm going to have to take g prime of what? Of 1 over lambda, right? That's my theta. So I'm going to have g prime of 1 over lambda, squared. And what is g prime of 1 over lambda? We said that g prime of x is negative 1 over x squared, so it's negative 1 over, 1 over lambda, squared-- that is, negative lambda squared. Which is fine even though g is decreasing, because g prime gets squared in the Delta method; it would be annoying to have a negative variance. So what does this whole thing become? Can somebody tell me the final result? Lambda squared, right? It's lambda to the 4, from g prime squared, divided by lambda squared. So that's what's written there. And now I can just do my good old computation for a confidence interval. All right, so let's go from the second line. We know that, with probability about 1 minus alpha, the absolute value of lambda hat minus lambda is less than-- we've done that several times already-- q alpha over 2, the quantile, times lambda divided by square root of n. And so that means that lambda belongs to lambda hat plus or minus q alpha over 2 times lambda divided by root n. So that's my confidence interval. But again, it's not usable, because I don't know lambda, so I can't compute it. So now I'm going to request from the audience some remedies for this. What do you suggest we do? What is the laziest thing I can do? Anybody? Yeah. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Replace lambda by lambda hat. What justifies my doing this? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, and Slutsky tells me I can actually do it. Because where does this lambda come from, right? This lambda comes from the standard deviation in the denominator. So I could rewrite this entire thing as: square root of n times lambda hat minus lambda, divided by lambda, converges to some N(0, 1). Now if I replace this lambda by lambda hat, what I have is really the original quantity times lambda divided by lambda hat, and I claim this still converges to N(0, 1). And what you're telling me is: this first factor I know converges to N(0, 1), and the second factor converges to 1 by the law of large numbers. But it converges to 1, which happens to be a constant, and it converges in probability, so by Slutsky I can take the product and still maintain my convergence in distribution to a standard Gaussian. So you can always do this. Every time you replace some p by p hat, as long as their ratio goes to 1-- which is going to be guaranteed by the law of large numbers-- you're going to be fine. And that's where we're going to use Slutsky a lot: when we do plug-in, Slutsky is going to be our friend. OK, so we can do this, and that's one way. The other way is to just solve for lambda like we did before. So the first interval we got is actually-- I don't know if I still have it somewhere. Yeah, that was the one, right? We had 1 plus or minus q alpha over 2 over root n, all divided by Tn bar, and that's exactly what we get here when we solve for lambda. So this is what we get by plug-in: we replace lambda by lambda hat, and we have our asymptotic confidence interval. And that's exactly what Slutsky's theorem justifies.
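Putting the plug-in step into code, on simulated inter-arrival times (lambda, n, and alpha here are made-up values):

```python
import numpy as np
from scipy.stats import norm

# Plug-in interval justified by Slutsky:
# lambda_hat +/- q_{alpha/2} * lambda_hat / sqrt(n).
rng = np.random.default_rng(8)
lam_true, n, alpha = 0.5, 300, 0.05
T = rng.exponential(1 / lam_true, size=n)   # fake inter-arrival times

lam_hat = 1 / T.mean()
q = norm.ppf(1 - alpha / 2)
margin = q * lam_hat / np.sqrt(n)
print(lam_hat - margin, lam_hat + margin)   # should contain 0.5 about 95% of the time
```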
So Slutsky, at this point, is just telling us that we can actually do this. Are there any questions about what we did here? So this derivation right here is exactly what I did on the board. So let me just show you with a little more space, just so that we all understand, right? So we know that square root of n times lambda hat minus lambda, divided by lambda, the true lambda, converges to some N(0, 1). So that was CLT plus Delta method. Applying those two, we got to here. And we know that lambda hat converges to lambda in probability and almost surely. And that was what? That was the law of large numbers plus the continuous mapping theorem, right? Because we only knew that 1 over lambda hat converges to 1 over lambda. So we had to flip those things around. And now what I said is that I apply Slutsky. So I write square root of n times lambda hat minus lambda, divided by lambda hat, which is the suggestion that was made to me. They said, I want this, but I would want to show that it converges to some N(0, 1), so I can legitimately use q alpha over 2 in this one too. And the way we said it is, well, this thing is actually really square root of n times lambda hat minus lambda divided by lambda, times lambda divided by lambda hat. So this thing that was proposed to me, I can decompose into the product of those two random variables. The first one here converges to the Gaussian, from the central limit theorem. And the second one converges to 1, from this guy, but in probability this time. That's the ratio of two things that converge in probability, so we can actually get it. And so now I apply Slutsky. And Slutsky tells me that I can actually do that. When I take the product of this thing that converges to some standard Gaussian, and this thing that converges in probability to 1, then their product actually converges to still this standard Gaussian [INAUDIBLE] Well, that's exactly what's done here, and I think I'm getting there. So in our case, OK, so just a remark on Slutsky's theorem. So that's the last line. So in the first example we used a problem-dependent trick, which was to say, well, it turns out that we knew that p is between 0 and 1. So we had this p times 1 minus p that was annoying to us. We just said, let's just bound it by 1/4, because that's going to be true for any value of p. But here, lambda takes any value between 0 and infinity, so we didn't have such a trick. Unless we could see that lambda was less than something. Maybe we know it, in which case we could use that. But in this case, we could actually also have used Slutsky's theorem by doing plug-in, right? So here, this is my p times 1 minus p that's replaced by p hat times 1 minus p hat. And Slutsky justifies it. So we did that without really thinking last time. But Slutsky actually justifies the fact that this is valid, and still allows me to use this q alpha over 2 here. All right, so that's the end of this lecture. Tonight I will post the next set of slides, chapter two. And, well, hopefully the video. I'm not sure when it's going to come out.
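Here is one possible way to code the plug-in confidence interval that Slutsky justifies--a sketch under the assumption that scipy is available; the function name exp_rate_ci is mine, not from the course:

import numpy as np
from scipy.stats import norm

def exp_rate_ci(x, alpha=0.05):
    # x: iid Exp(lambda) observations; the estimator is lambda_hat = 1 / sample mean
    n = len(x)
    lam_hat = 1.0 / np.mean(x)
    q = norm.ppf(1 - alpha / 2)  # Gaussian quantile of order alpha/2
    # by Slutsky, replacing the unknown lambda by lambda_hat is legitimate
    half_width = q * lam_hat / np.sqrt(n)
    return lam_hat - half_width, lam_hat + half_width

rng = np.random.default_rng(1)
print(exp_rate_ci(rng.exponential(scale=0.5, size=1000)))  # true lambda = 2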
MIT_18650_Statistics_for_Applications_Fall_2016
17_Bayesian_Statistics.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: So today we'll actually just do a brief chapter on Bayesian statistics. And there's entire courses on Bayesian statistics, there's entire books on Bayesian statistics, there's entire careers in Bayesian statistics. So admittedly, I'm not going to be able to do it justice and tell you all the interesting things that are happening in Bayesian statistics. But I think it's important as a statistician to know what it is and how it works, because it's actually a weapon of choice for many practitioners, because it allows them to incorporate their knowledge about a problem in a fairly systematic manner. So if you look at, say, the Bayesian statistics literature, it's huge. And so here I give you sort of a range of what you can expect to see in Bayesian statistics, from your second edition of a traditional book, to something that involves computation, to some things that involve Bayesian thinking. And there's a lot of Bayesian thinking, a lot of things talking about sort of the philosophy of thinking Bayesian. This book, for example, seems to be one of them. This book is definitely one of them. This one represents a broad literature on Bayesian statistics for applications, for example, in social sciences. But even in large scale machine learning, there's a lot of Bayesian statistics happening, in particular using something called Bayesian nonparametrics, or hierarchical Bayesian modeling. So we do have some experts at MIT in CSAIL. Tamara Broderick, for example, is a person who does quite a bit of interesting work on Bayesian nonparametrics. And if that's something you want to know more about, I urge you to go and talk to her. So before we go into more advanced things, we need to start with: what is the Bayesian approach? What do Bayesians do, and how is it different from what we've been doing so far? To understand the difference between Bayesians and what we've been doing so far, we need to first put a name on what we've been doing so far. It's called frequentist statistics. So it's usually Bayesian versus frequentist statistics; by versus, I don't mean that they are naturally in opposition. Actually, often you will see the same method come out of both approaches. So let's see how we did it, right. The first thing: we had data. We observed some data. And we assumed that this data was generated randomly. The reason we did that is because this would allow us to leverage tools from probability. So, say, by nature, measurements, you do a survey, you get some data. Then we made some assumptions on the data generating process. For example, we assumed they were iid. That was one of the recurring things. Sometimes we assumed it was Gaussian, if you wanted to use, say, a t-test. Maybe we did some nonparametric statistics; we assumed it was a smooth function, or maybe a linear regression function. So those were our models. And this was basically a way to say, well, we're not going to allow for arbitrary distributions for the data that we have, but maybe a small set of distributions that is indexed by some parameters, for example. Or at least remove some of the possibilities. Otherwise, there's nothing we can learn.
And so, for example, this was associated to some parameter of interest, say theta, or beta in the regression model. Then we had this unknown thing, an unknown parameter. And we wanted to find it. We wanted to either estimate it or test it, or maybe find a confidence interval for it. So, so far I should not have said anything that's new. But this last sentence is actually what's going to be different in the Bayesian part. In particular, this unknown but fixed thing is what's going to be changing. In the Bayesian approach, we still assume that we observe some random data. But the generating process is slightly different. It's sort of a two-layer process. And there's one process that generates the parameter, and then one process that, given this parameter, generates the data. So for the first layer, nobody really believes that there's some random process happening that generates what is going to be the true expected number of people who turn their head to the right when they kiss. But this is actually going to be something that makes it easy for us to incorporate what we call prior belief. We'll see an example in a second. But often, you actually have a prior belief of what this parameter should be. When we did, say, least squares, we looked over all of the vectors in all of R to the p, including the ones that have coefficients equal to 50 million. Those are things that we might be able to rule out. We might be able to rule things out on a much smaller scale. For example, well, I'm not an expert on turning your head to the right or to the left. But maybe you can rule out the fact that almost everybody is turning their head in the same direction, or almost everybody is turning their head in the other direction. So we have this prior belief. And this belief is going to play, hopefully, a less and less important role as we collect more and more data. But if we have a smaller amount of data, we might want to be able to use this information, rather than just shooting in the dark. And so the idea is to have this prior belief, and then we want to update this prior belief into what's called the posterior belief after we've seen some data. Maybe I believe that something should be in some range. But maybe after I see data, it's comforting me in my beliefs. So I'm actually having maybe a belief that's stronger. So belief encompasses basically what you think and how strongly you think about it. That's what I call belief. So for example, if I have a belief about some parameter theta, maybe my belief is telling me where theta should be and how strongly I believe in it, in the sense that I have a very narrow region where theta could be. The posterior belief is: well, you see some data, and maybe you're more confident or less confident about what you've seen. Maybe you've shifted your belief a little bit. And so that's what we're going to try to see: how to do this in a principled manner. To understand this better, there's nothing better than an example. So let's talk about another stupid statistical question. Which is, let's try to understand p. Of course, I'm not going to talk about politics from now on. So let's talk about p, the proportion of women in the population. And so what I could do is to collect some data, X1, Xn, and assume that they're Bernoulli with some parameter p unknown. So p is in 0, 1. OK, let's assume that those guys are iid.
So this is just an indicator, for each of my collected data, of whether the person I randomly sample is a woman: I get a one. If it's a man, I get a zero. Now, I sample these people randomly, and I get to know their gender. And the frequentist approach was just saying, OK, let's just estimate p with p hat being Xn bar. And then we could do some tests. So here, there's a test. I want to test maybe if p is equal to 0.5 or not. That sounds like a pretty reasonable thing to test. But we want to also maybe estimate p. But here, this is a case where we definitely have a prior belief of what p should be. We are pretty confident that p is not going to be 0.7. We actually believe that we should be extremely close to one half, but maybe not exactly. Maybe this population is not the population of the world. Maybe this is the population of, say, some college, and we want to understand if this college has half women or not. Maybe we know it's going to be close to one half, but maybe we're not quite sure. We're going to want to integrate that knowledge. So I could integrate it in a blunt manner by saying, discard the data and say that p is equal to one half. But maybe that's just a little too much. So how do I do this trade-off between having the data and combining it with this prior knowledge? In many instances, essentially what's going to happen is this one half is going to act like one new observation. So if you have five observations, this is just the sixth observation, which will play a role. If you have a million observations, you're going to have a million and one. It's not going to play so much of a role. That's basically how it goes. But definitely not always, because we'll see that if I take my prior to be a point mass at one half here, it's basically as if I was discarding my data. So essentially, there's also your ability to encompass how strongly you believe in this prior. And if you believe infinitely more in the prior than you believe in the data you collected, then it's not going to act like one more observation. The Bayesian approach is a tool to, one, mathematically include our prior belief into statistical procedures. Maybe I have this prior knowledge, but if I'm a medical doctor, it's not clear to me how I'm going to turn this into some principled way of building estimators. And the second goal is going to be to update this prior belief into a posterior belief by using the data. How do I do this? At some point, I sort of suggested that there are two layers. One is where you draw the parameter at random. And two, once you have the parameter, conditionally on this parameter, you draw your data. Nobody believes this is actually happening, that nature is just rolling dice for us and choosing parameters at random. But what's happening is that this idea, that the parameter comes from some random distribution, actually captures very well how you would encompass your prior. How would you say, my belief is as follows? Well, here's an example about p. I'm 90% sure that p is between 0.4 and 0.6. And I'm 95% sure that p is between 0.3 and 0.8. So essentially, I have these possible values of p. And what I know is that there's 90% here, between 0.4 and 0.6. And then I have 0.3 and 0.8, and I know that I'm 95% sure that I'm in here. If you remember, this sort of looks like the kind of pictures that I made when I had some Gaussian, for example. And I said, oh, here we have 90% of the observations. And here, we have 95% of the observations.
So in a way, if I were able to tell you all those ranges for all possible values, then I would essentially describe a probability distribution for p. And what I'm saying is that p is going to have this kind of shape. So of course, if I tell you this information only twice--that there's 90% I'm here, between here and here, and 95% I'm between here and here--then there's many ways I can accomplish that, right. I could have something that looks like this, maybe. It could be like this. There's many ways I can have this. Some of them are definitely going to be mathematically more convenient than others. And hopefully, we're going to have things that I can parameterize very well. Because if I tell you this is this guy, then there's basically one, two, three, four, five, six, seven parameters. So I probably don't want something that has seven parameters. But maybe I can say, oh, it's a Gaussian, and all I have to do is to tell you where it's centered and what the standard deviation is. So the idea of using this two-layer thing, where we think of the parameter p as being drawn from some distribution, is really just a way for us to capture this information. Our prior belief being, well, there's this percentage of chances that it's there. But percentage of chance--I'm deliberately not using the word probability here. So it's really a way to get close to this. That's why I say the true parameter is not random. But the Bayesian approach does as if it was random, and then just spits out a procedure out of this thought process, this thought experiment. So when you practice Bayesian statistics a lot, you start getting automatisms. You start getting some things that you do without really thinking about it. Just like when you're a statistician, the first thing you do is ask, can I think of this data as being Gaussian, for example? When you're a Bayesian, you're thinking, OK, I have a set of parameters. So here, I can describe my parameter as being theta in general, in some big parameter space, capital Theta. But what spaces did we encounter? Well, we encountered the real line. We encountered the interval 0, 1 for Bernoullis. And we encountered the positive real line for exponential distributions, etc. And so what I'm going to need to do, if I want to put some prior on those spaces, is to have a usual set of tools for this guy, a usual set of tools for this guy, a usual set of tools for this guy. And by usual set of tools, I mean I'm going to have to have a family of distributions that's supported on this. So in particular, this is the space in which my parameter, that I usually denote by p for Bernoulli, lives. And so what I need is to find a distribution on the interval 0, 1, just like this guy. The problem with the Gaussian is that it's not on the interval 0, 1. It's going to spill out at the ends, and it's not going to be something that works for me. And so I need to think about distributions that are preferably continuous--why would I restrict myself to discrete distributions?--and that are actually convenient. And for Bernoulli, the one that's basically the main tool that everybody is using is the so-called beta distribution. So the beta distribution has two parameters. So x follows a beta with parameters a and b if it has a density f of x equal to x to the a minus 1, times 1 minus x to the b minus 1, if x is in the interval 0, 1, and 0 for all other x's. OK? Why is that a good thing? Well, it's a density that's on the interval 0, 1, for sure.
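To get a feel for the range of shapes, here is a small sketch of my own (not from the lecture; it assumes scipy is available) that evaluates a few beta densities on a grid; a = b = 20 is sharply peaked at one half, a = b = 1 is uniform, and a = 2, b = 5 is skewed:

import numpy as np
from scipy.stats import beta

x = np.linspace(0.05, 0.95, 7)
for a, b in [(0.5, 0.5), (1, 1), (2, 2), (20, 20), (2, 5)]:
    # beta.pdf evaluates the normalized Beta(a, b) density at each grid point
    print((a, b), np.round(beta.pdf(x, a, b), 2))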
But now I have these two parameters, and the set of shapes that I can get by tweaking those two parameters is incredible. It's going to be a unimodal distribution. It's still fairly nice. It's not going to be something that goes like this and this. Because if you think about this, what would it mean if your prior distribution on the interval 0, 1 had this shape? It would mean that maybe you think that p is here, or maybe you think that p is here, or maybe you think that p is here. Which essentially means that you think that p can come from three different phenomena. And there's other models that are called mixtures for that, that directly account for the fact that maybe there are several phenomena that are aggregated in your data set. But if you think that your data set is sort of pure, and that everything comes from the same phenomenon, you want something that looks like this, or maybe looks like this, or maybe is sort of symmetric. You want to get all this stuff. Maybe you want something that says, well, if I'm talking about p being the proportion of women in the whole world, you want something that's probably really spiked around one half. Almost a point mass, because, you know, let's agree that 0.5 is the actual number. So you want something that says, OK, maybe I'm wrong. But I'm sure I'm not going to be really that way off. So you want something that's really pointy. But if it's something you've never checked, and again I cannot make references at this point, but something where you might have some uncertainty around one half, maybe you want something that allows you to say, well, I think there's more around one half, but there's still some fluctuations that are possible. And in particular here, I talk about p where the two parameters a and b are actually the same. I call them both a. They're the two shape parameters of the beta. Oh sorry, this is not a density. It actually has to be normalized. When you integrate this guy, it's going to be some function that depends on a and b, through the beta function, which is this combination of gamma functions--so that's why it's called the beta distribution. The integral of this thing is the definition of the beta function, anyway. You just have to normalize by it. That's just a number that depends on a and b. So here, if you take a equal to b, you have something that essentially is symmetric around one half. Because what does it look like? Well, my density f of x is going to be what? It's going to be my constant times x times 1 minus x, to the a minus 1. And this function, x times 1 minus x, looks like this. We've drawn it before. That was something that showed up as being the variance of my Bernoulli. So we know it's something that takes its maximum at one half. And now I'm just taking a power of this guy. So I'm really just distorting this thing in some fairly symmetric manner. This is the distribution that we actually take for p. I assume that p, the parameter--notice that this is kind of weird. First of all, this is probably the first time in this entire course that something has a distribution when it's actually a lowercase letter. That's something you have to deal with, because we've been using lowercase letters for parameters, and now we want them to have a distribution. So that's what's going to happen. This is called the prior distribution. So really, I should write something like: f of p is equal to a constant times p times 1 minus p, to the a minus 1.
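One can check numerically that the normalizing constant is indeed the beta function B(a, b); a minimal sketch, assuming scipy (the values a = 3, b = 5 are arbitrary):

import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn

a, b = 3.0, 5.0
# the unnormalized density x^(a-1) (1-x)^(b-1) on the interval (0, 1)
unnormalized = lambda x: x**(a - 1) * (1 - x)**(b - 1)
integral, _ = quad(unnormalized, 0.0, 1.0)
print(integral, beta_fn(a, b))  # the two numbers agree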
Well no, actually I should not, because then it's confusing. One thing in terms of notation that I'm going to write: when I have a constant here and I don't want to make it explicit--and we'll see in a second why I don't need to make it explicit--I'm going to write this as f of x is proportional to x times 1 minus x, to the a minus 1. That's just to say: equal to some constant that does not depend on x, times this thing. So if we continue with our experiment where I'm drawing this data, X1 to Xn, which is Bernoulli p, if p has some distribution, it's not clear what it means to have a Bernoulli with some random parameter. So what I'm going to do is, I'm going to first draw my p. Let's say I get a number, 0.52. And then, I'm going to draw my data conditionally on p. So here comes the first and last flowchart of this class. So nature first draws p: p follows some beta a, a. Then I condition on p. And then I draw X1, Xn that are iid Bernoulli p. Everybody understand the process of generating this data? So you first draw a parameter, and then you just flip those independent biased coins with this particular p. There's this layered thing. Now, conditionally on p--right, so here I have this prior about p, which was this thing. So this is just the thought process again; it's not anything that actually happens in practice. This is my way of thinking about how the data was generated. And from this, I'm going to try to come up with some procedure. Just like, if your estimator is the average of the data, you don't have to understand probability to say that my estimator is the average of the data. Anyone outside this room understands that the average is a good estimator for some average behavior. And they don't need to think of the data as being a random variable, et cetera. So same thing, basically. In this case, you can see that the posterior distribution is still a beta. What it means is that I had this thing, then I observed my data, and then I continue, and here I'm going to update my prior into some posterior distribution, pi. And here, this guy is actually also a beta. My posterior distribution on p is also a beta distribution, with the parameters that are on this slide. And I'll have the space to reproduce them. So I start at the beginning of this flowchart as having p, which has a prior. I'm going to get some observations, and then I'm going to update what my posterior is. This posterior is basically--what's beautiful in Bayesian statistics is that, as soon as you have this distribution, it's essentially capturing all the information about the data that you want for p. And it's not just a point. It's not just an average. It's actually an entire distribution for the possible values of theta. And it's not the same thing as saying, well, if theta hat is equal to Xn bar, in the Gaussian case I know that this has some mean mu, and then maybe it has variance sigma squared over n. That's not what I mean by, this is my posterior distribution. This is not what I mean. That would come from this guy, the Gaussian thing and the central limit theorem. But what I mean is this guy. And this came exclusively from the prior distribution. If I had another prior, I would not necessarily have a beta distribution as the output. So when I have the same family of distributions at the beginning and at the end of this flowchart, I say that beta is a conjugate prior. Meaning, I put in beta as a prior and I get beta as a posterior. And that's why betas are so popular.
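The two-layer flowchart is easy to mimic in code. This is an illustrative sketch (not from the lecture; it assumes numpy), drawing p from a Beta(a, a) prior, then Bernoulli data given p, and then applying the conjugate update described above:

import numpy as np

rng = np.random.default_rng(2)
a = 2.0                              # prior: p ~ Beta(a, a)
p = rng.beta(a, a)                   # layer 1: nature draws the parameter
x = rng.binomial(1, p, size=50)      # layer 2: iid Bernoulli(p) given p
# conjugacy: the posterior is Beta(a + number of ones, a + number of zeros)
post_a = a + x.sum()
post_b = a + (len(x) - x.sum())
print(p, post_a, post_b)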
Conjugate priors are really nice, because you know that whatever you put in, what you're going to get in the end is a beta. So all you have to think about is the parameters. You don't have to check again what the posterior is going to look like, what the PDF of this guy is going to be. You don't have to think about it. You just have to check what the parameters are. And there are families of conjugate priors. Gaussian gives Gaussian, for example. There's a bunch of them. And this is what drives people into using specific priors as opposed to others. It has nice mathematical properties. Nobody believes that p is really distributed according to a beta. But it's flexible enough and super convenient mathematically. Now let's see for one second, before we actually go any further. I didn't mention: a and b are both positive numbers here. They can be anything positive. So here what I did is that I updated a into a plus the sum of my data, and b into b plus n minus the sum of my data. So that's essentially: a becomes a plus the number of ones. Well, that's when I have a and a. So the first parameter becomes itself plus the number of ones, and the second one becomes itself plus the number of zeros. And so just as a sanity check, what does this mean? If a goes to zero, what is the beta when a goes to 0? We can actually read this from here. Actually, let's take a goes to-- no. Sorry, let's just do this. I'll do it when we talk about non-informative priors, because it's a little too messy. How do we do this? How did I get this posterior distribution, given the prior? How do I update this? Well, this is called Bayesian statistics. And you've heard this word, Bayes, before. And the way you've heard it is in the Bayes formula. What was the Bayes formula? The Bayes formula was telling you that the probability of A given B was equal to something that depended on the probability of B given A. That's what it was. You can actually either remember the formula or you can remember the definition. And this is what: p of A and B, divided by p of B. So this is p of B given A, times p of A, divided by p of B. That's what the Bayes formula is telling you. Agreed? So now what I want is to have something that's telling me how this is going to work. What is going to play the role of those events, A and B? Well, one is going to be--this is going to be the distribution of my parameter theta, given that I see the data. And this is going to tell me, what is the distribution of the data, given that I know what my parameter theta is. But that part--the distribution of the data, given the parameter--is what we've been doing all along. Here it was n iid Bernoulli p. I knew exactly what their joint probability mass function is. Then, that was what? So we said that this is going to be my data and this is going to be my parameter. So that means that this is the probability of my data, given the parameter. This is the probability of the parameter. What is this? What did we call this? This is the prior. It's just the distribution of my parameter. Now what is this? Well, this is just the distribution of the data itself. This is essentially the distribution of this, if this was indeed not conditioned on p. So if I don't condition on p, this data is going to be a bunch of iid Bernoullis with some parameter. But the parameter is random, right? So for different realizations of this data set, I'm going to get different parameters for the Bernoulli.
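The denominator--the marginal distribution of the data--actually has a closed form in this conjugate case, and we can check it against numerical integration; a sketch assuming scipy, with made-up numbers n = 10 and S = 7:

import numpy as np
from scipy.integrate import quad
from scipy.special import beta as B

a, b, n, S = 2.0, 2.0, 10, 7   # S = number of ones among n Bernoulli draws
# marginal p(X) = integral over p of p^S (1-p)^(n-S) times the Beta(a, b) density
integrand = lambda p: p**S * (1 - p)**(n - S) * p**(a - 1) * (1 - p)**(b - 1) / B(a, b)
numeric, _ = quad(integrand, 0.0, 1.0)
closed_form = B(a + S, b + n - S) / B(a, b)
print(numeric, closed_form)  # the two numbers agree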
And so that leads to some sort of convolution. It's not really a convolution in this case, but it's like some sort of composition of distributions. I have the randomness that comes from here, and then the randomness that comes from realizing the Bernoulli. That's just the marginal distribution. It actually might be painful to understand what this is, right. In a way, it's sort of a mixture, and it's not super nice. But we'll see that this actually won't matter for us. This is going to be some number. It's going to be there. But it won't matter for us what it is, because it actually does not depend on the parameter. And that's all that matters to us. Let's put some names on those things. This was very informal. So let's put some actual names on what we call prior. So what is the formal definition of a prior, what is the formal definition of a posterior, and what are the rules to update it? So I'm going to have my data, which is going to be X1, Xn. Let's say they are iid, but they don't actually have to be. And so I'm going to have: given theta. And when I say given, it's either given like I did in the first part of this course, in all previous chapters, or conditionally on. If you're thinking like a Bayesian, what I really mean is conditionally on this random parameter. It's as if it was a fixed number. They're going to have a distribution: X1, Xn is going to have some distribution. Let's assume for now it's a PDF, pn of X1, Xn given theta. I'm going to write it like this. So for example, what is this? Let's say this is a PDF. It could be a PMF. Everything I say, I'm going to think of them as being PDFs. I'm going to combine PDFs with PDFs, but I could combine PDFs with PMFs, PMFs with PDFs, or PMFs with PMFs. So everywhere you see a D, it could be an M. Now I have those things. So what does it mean? So here is an example: X1, Xn are iid N(theta, 1). Now I know exactly what the joint PDF of this thing is. It means that pn of X1, Xn given theta is equal to what? Well, it's 1 over square root of 2 pi, to the power n, e to the minus sum from i equal 1 to n of Xi minus theta squared, divided by 2. So that's just the joint distribution of n iid N(theta, 1) random variables. That's my pn given theta. Now, this is what we denoted by f sub theta before. We had the subscript before, but now we just put a bar and then theta, because we want to remember that this is actually conditioned on theta. But this is just notation. You should just think of this as being just the usual thing that you get from some statistical model. Now, that's going to be pn. Theta has prior distribution pi--so think of it as either a PDF or a PMF again. For example, pi of theta was what? Well, it was some constant times theta to the a minus 1, times 1 minus theta to the a minus 1. So it has some prior distribution, and that's another PDF. So now I'm given the distribution of my X's given theta, and given the distribution of my theta. I'm given this guy--that's this guy. I'm given that guy, which is my pi. So that's my pn of X1, Xn given theta. That's my pi of theta. And what is the marginal distribution of the data? Well, this is just the integral of pn of X1, Xn given theta, times pi of theta, d theta, over all possible values of theta. That's just: when I integrate out my theta, I compute the marginal distribution. I did this by integrating. That's just basic probability, conditional probabilities. And if I had a PMF, I would just sum over the values of theta. Now, what I want is to find what's called--so that's the prior distribution--and I want to find the posterior distribution. It's pi of theta given X1, Xn.
If I use Bayes' rule, I know that this is pn of X1, Xn given theta, times pi of theta. And then it's divided by the distribution of those guys, which I will write as the integral over theta of pn of X1, Xn given theta, times pi of theta, d theta. Everybody's with me, still? If you're not comfortable with this, it means that you probably need to go read your couple of pages on conditional densities and conditional PMFs from your probability class. There's really not much there. It's just a matter of being able to define those quantities: the density of x given y. This is just what's called a conditional density. You need to understand what this object is and how it relates to the joint distribution of x and y, or maybe the distribution of x or the distribution of y. But it's the same rules. One way to actually remember this is: these are exactly the same rules as this. When you see a bar, it's the same thing as the probability of this guy and this guy. So for densities, the bar is just the joint--the comma--divided by the density of the second guy. That's it. So if you remember this, you can just do some pattern matching and see what I just wrote here. Now, I can compute every single one of these guys. This is something I get from my modeling. So I did not write this. It's not written in the slides. But I gave a name to this guy: that was my prior distribution. And that was my posterior distribution. In, maybe, chapter three--what did we call this guy? The one that does not have a name and that's in the box. What did we call it? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: It is the joint distribution of the Xi's. And we gave it a name. AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: It's the likelihood, right? This is exactly the likelihood. This was the likelihood of theta. And this is something that's very important to remember, and that really reminds you that these things are really not that different--maximum likelihood estimation and Bayesian estimation--because your posterior is really just your likelihood times something that's putting some weights on the thetas, depending on where you think theta should be. If I had, say, a maximum likelihood estimate, and my likelihood in theta looked like this, but my prior in theta looked like this--I said, oh, I really want thetas that are like this--then what's going to happen is that I'm going to turn this into some posterior that looks like this. So I'm really just weighting. This posterior--this is a constant that does not depend on theta, right? Agreed? I integrated over theta, so theta is gone. So forget about this guy. I have, basically, that the posterior distribution, up to scaling--because it has to be a probability density and not just any function that's positive--is the product of this guy. It's a weighted version of my likelihood. That's all it is. I'm just weighting the likelihood using my prior belief on theta. And so, given this guy, a natural estimator, if you follow the maximum likelihood principle, would be the maximum of this posterior. Agreed? That would basically be doing exactly what maximum likelihood estimation is telling you. So it turns out that you can. It's called Maximum A Posteriori, or MAP, and I won't talk much about this. That's Maximum A Posteriori. So it's just: theta hat is the arg max of pi of theta given X1, Xn. And it sounds like it's OK. I give you a density and you say, OK, I have a density for all values of my parameter. You're asking me to summarize it into one number. I'm just going to take the most likely number of those guys.
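When the prior is not conjugate, the same Bayes rule can be carried out numerically on a grid. Here is a minimal sketch (assuming numpy; the helper name posterior_on_grid is hypothetical, mine rather than the course's) that normalizes likelihood times prior and reads off the MAP:

import numpy as np

def posterior_on_grid(x, prior_pdf, grid):
    # Bernoulli likelihood at each grid value of p, weighted by the prior
    S, n = x.sum(), len(x)
    unnorm = grid**S * (1.0 - grid)**(n - S) * prior_pdf(grid)
    # normalize with a Riemann sum so the posterior integrates to 1
    return unnorm / np.sum(unnorm * (grid[1] - grid[0]))

grid = np.linspace(1e-4, 1 - 1e-4, 2001)
x = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
prior = lambda p: p * (1.0 - p)              # Beta(2, 2) prior, up to constant
post = posterior_on_grid(x, prior, grid)
print("MAP:", grid[np.argmax(post)])         # maximum a posteriori estimate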
But you could summarize it otherwise. You could take the average. You could take the median. You could take a bunch of numbers. And the beauty of Bayesian statistics is that you don't have to take any number in particular. You have an entire posterior distribution. This is not only telling you where theta is; it's actually telling you more, if you actually report the posterior itself. Now, let's say the theta is p, between 0 and 1. If my posterior distribution looks like this, or my posterior distribution looks like this, then those two guys have, one, the same mode--this is the same value--and they're symmetric, so they'll also have the same mean. So these two posterior distributions give me the same summary into one number. However, clearly, one is much more confident than the other one. So I might as well just spit the whole thing out as a solution. You can do even better. People actually do things such as drawing a random number from this distribution--say, this is my number. That's kind of dangerous, but you can imagine you could do this. So this is how it works. That's what we went through. So here, as you notice, I don't care so much about this part here, because it does not depend on theta. So I know that, given the product of those two things, this thing is only the constant that I need to divide by, so that when I integrate this thing over theta, it integrates to one. Because this has to be a probability density on theta. I can write this and just forget about that part. And that's what's written on the top of this slide. This notation--this sort of weird alpha, or, I don't know, infinity sign propped to the right, whatever you want to call this thing--is actually just really emphasizing the fact that I don't care. I write it because I can, but you know what it is. In some instances, you have to compute the integral. In some instances, you don't have to compute the integral. And a lot of Bayesian computation is about saying, OK, it's actually really hard to compute this integral, so I'd rather not do it. So let me try to find some methods that will allow me to sample from the posterior distribution without having to compute this. And that's what's called Markov chain Monte Carlo, or MCMC, and that's exactly what it's doing. It's just using only ratios of things like that for different thetas. Which means that if you take ratios, the normalizing constant is gone and you don't need to find this integral. So we won't go into those details at all. That would be the purpose of an entire course on Bayesian inference. Actually, even Bayesian computation would be an entire course on its own. And there are some very interesting things that are going on there, at the interface of stats and computation. So let's go back to our example and see if we can actually compute any of those things. Because it's very nice to give you some notation, some formulas--let's see if we can actually do it. In particular, can I actually recover this claim that the posterior associated to a beta prior with a Bernoulli likelihood is actually giving me a beta again? What was my prior? So p was following a beta a, a, which means that the density of p--that was pi of theta; well, I'm going to write this as pi of p--was proportional to p to the a minus 1, times 1 minus p to the a minus 1. So that's the first ingredient I need to compute my posterior. I really need only two, if I want it up to constant. The second one was pn. We've computed that many times.
And we even had a nice compact way of writing it, which was that pn of X1, Xn, given the parameter p--so the joint density of my data given p, that's my likelihood. The likelihood of p was what? Well, it was p to the sum of the Xi's, times 1 minus p to the n minus sum of the Xi's. Anybody wants me to parse this more? Or do you remember seeing that from maximum likelihood estimation? Yeah? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: That's what conditioning does. AUDIENCE: [INAUDIBLE] previous slide. [INAUDIBLE] bottom there, it says d pi of t. Shouldn't it be dt pi of t? PHILIPPE RIGOLLET: So d pi of t is a measure-theoretic notation, which I used without thinking. And I should not, because I can see it upsets you. d pi of t is just a natural way to say that I integrate against whatever I'm given for the prior of theta. In particular, maybe theta is just the mix of a PDF and a point mass: maybe I say that my p takes value 0.5 with probability 0.5, and then is uniform on the interval with probability 0.5. For this, I neither have a PDF nor a PMF. But I can still talk about integrating with respect to this, right? It's going to look like: if I take a function f of t, the integral against d pi of t is going to be one half of f of one half--that's the point mass with probability one half, at one half--plus one half of the integral between 0 and 1 of f of t dt. This is just notation, which is actually, funnily enough, interchangeable with pi of dt. But if you have a density, it's really just the density: pi of t dt, if pi is really a density. This is for when pi is a measure and not a density. Everybody else, forget about this. This is not something you should really worry about at this point. This is more graduate-level probability classes. But yeah, it's called measure theory. And that's when you think of pi as being a measure in an abstract fashion. You don't have to worry whether it's a density or not, or whether it has a density. So is everybody OK with this? Now I need to compute my posterior. And as I said, my posterior is really just the likelihood weighted by the prior. Hopefully, at this stage of your education, you can multiply two functions. So what's happening is, if I multiply this guy with this guy, p gets the power this guy plus this guy. And then 1 minus p gets the power n minus sum of the Xi's--this is always from i equal 1 to n--and then plus a minus 1 as well. This is up to constant, because I still need to solve for it. And I could try to do it. But I really don't have to, because I know that if my density has this form, then it's a beta distribution. And then I can just go on Wikipedia and see what the normalization factor should be. But I know it's going to be a beta distribution. And I can read off its parameters. So this is really my beta with first parameter sum of the Xi, i equal 1 to n, plus a, and second parameter n minus sum of the Xi's, plus a. I just wrote what was here. What happened to my minus 1? Oh no, sorry--the beta has the power parameter minus 1. So this exponent, sum of the Xi plus a minus 1, corresponds to the parameter of the beta being sum of the Xi plus a, and this exponent corresponds to the other parameter. The beta is over there, right? So I just replace: a is just becoming this guy plus this guy, and this guy plus this guy. Everybody is comfortable with this computation? We just agreed that beta priors for Bernoulli observations are certainly convenient, because they are just conjugate, and we know what is going to come out in the end. That's going to be a beta as well. I just claimed it was convenient.
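As a sanity check on this computation, the grid posterior from the earlier sketch matches the Beta(a + sum Xi, a + n - sum Xi) density exactly; again an illustrative Python snippet of mine, assuming numpy and scipy:

import numpy as np
from scipy.stats import beta
from scipy.integrate import trapezoid

rng = np.random.default_rng(3)
a, n = 2.0, 20
x = rng.binomial(1, 0.6, size=n)
S = x.sum()
grid = np.linspace(1e-4, 1 - 1e-4, 2001)
# unnormalized posterior: prior times likelihood
unnorm = grid**(S + a - 1) * (1 - grid)**(n - S + a - 1)
post = unnorm / trapezoid(unnorm, grid)
# compare to the claimed conjugate posterior Beta(a + S, a + n - S)
print(np.max(np.abs(post - beta.pdf(grid, a + S, a + n - S))))  # close to 0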
It was certainly convenient to compute this, right? There was certainly some compatibility when I had to multiply this function by that function. And you can imagine that things could go much more wrong than just having p to some power times 1 minus p to some power, picking up some other powers. Things were nice. Now, this is nice, but I can also question the following things. Why beta, for one? The beta tells me something. That's convenient, but then how do I pick a? I know that a should definitely capture where I want to have my p most likely located. But it actually also captures the variance of my beta. And so choosing different a's is going to have different effects. If I have a and b--if I started with a beta with parameters a and b, if I started with a b here, I would just pick up the b here, agreed? And that would just be asymmetric. But they're going to capture the mean and variance of this thing. And so how do I pick those guys? If I'm a doctor and you're asking me, what do you think the chances of this drug working on this kind of patients is, and I have to spit out the parameters of a beta for you, it might be a bit of a complicated thing to do. So how do you do this, especially for new problems? By now, people have actually mastered the art of coming up with how to formulate those numbers. But in new problems that come up, how do you do this? What happens if you want to use Bayesian methods, but you actually do not know what you expect to see? To be fair, before we started this class, I hope all of you had no idea whether people tend to bend their head to the right or to the left before kissing. Because if you did, well, you have too much time on your hands and I should double your homework. So in this case, maybe you still want to use the Bayesian machinery. Maybe you just want to do something nice. It's nice, right? I mean, it worked out pretty well. So what do you do? Well, you actually want to use some priors that carry no information, that basically do not prefer any theta over another theta. Now, you could read this slide, or you could look at this formula. We just said that this pi here was just here to weigh some thetas more than others, depending on our prior belief. If our prior belief does not want to put any preference on some thetas over others, what do I do? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Yeah, I remove it. And the way to remove something we multiply by is just to replace it by one. That's really what we're doing. If this was a constant not depending on theta, then that would mean that we're not preferring any theta. And we're looking at the likelihood, but not as a function that we're trying to maximize--as a function that we normalize in such a way that it's actually a distribution. So if I have pi, which is not here, this is really just taking the likelihood, which is a positive function. It may not integrate to 1, so I normalize it so that it integrates to 1. And then I just say, well, this is my posterior distribution. Now I could just maximize this thing and spit out my maximum likelihood estimator. But I can also integrate and find what the expectation of this guy is. I can find what the median of this guy is. I can sample data from this guy. I can understand what the variance of this guy is. Which is something we did not do when we just did maximum likelihood estimation, because, given a function, all we cared about was the arg max of this function. These priors are called uninformative.
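To see how the choice of a trades off against the data, here is a short sketch of mine (assuming numpy) comparing the posterior mean (S + a) / (n + 2a) of the Beta(S + a, n - S + a) posterior with the MLE S / n as n grows; the prior pulls the estimate toward one half for small n and barely matters for large n:

import numpy as np

rng = np.random.default_rng(4)
p_true = 0.55
for n in [5, 50, 5000]:
    S = rng.binomial(1, p_true, size=n).sum()
    for a in [1.0, 10.0]:
        post_mean = (S + a) / (n + 2 * a)   # mean of Beta(S + a, n - S + a)
        print(n, a, round(S / n, 3), round(post_mean, 3))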
This is just replacing this number by one, or by a constant, because it still has to be a density. If I have a bounded set, I'm just looking for the uniform distribution on this bounded set, the one that puts constant one over the size of this thing. But if I have an infinite set, what is the density that takes a constant value on the entire real line, for example? What is this density? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: Doesn't exist, right? It just doesn't exist. The way you can think of it is a Gaussian with the variance going to infinity, maybe, or something like this. But you can think of it in many ways. You can think of the limit of the uniform between minus T and T, with T going to infinity. But this thing is actually zero. There's nothing there. You can actually still talk about this. You could always talk about this thing, where you think of this guy as being a constant, remove this thing from this equation, and just say, well, my posterior is just the likelihood divided by the integral of the likelihood over theta. And if theta is the entire real line, so be it. As long as this integral converges, you can still talk about this stuff. This is what's called an improper prior. An improper prior is just a non-negative function defined on capital Theta, but it does not have to integrate to one, nor to anything. If I integrate the function equal to 1 on the entire real line, what do I get? Infinity. It's not a proper prior, and it's called an improper prior. And those improper priors are usually what you see when you start to want non-informative priors on infinite sets of thetas. That's just the nature of it. You should think of them as being the uniform distribution on some infinite set, if that thing were to exist. Let's see some examples of non-informative priors. If I'm on the interval 0, 1, this is a finite set. So I can talk about the uniform prior on the interval 0, 1 for the parameter p of a Bernoulli. If I want to talk about this, then it means that my prior is: p follows some uniform on the interval 0, 1. So that means that f of x is 1 if x is in 0, 1, and 0 otherwise. There is actually not even a normalization; this thing integrates to 1. And so now, if I look at my likelihood, it's still the same thing. So my posterior becomes pi of p given X1, Xn. That's my posterior. I don't write the likelihood again, because we still have it--well, we don't have it here anymore. The likelihood is given here. Copy, paste over there. The posterior is just this thing times 1, so you will see it in a second. So it's p to the power sum of the Xi's, times 1 minus p to the power n minus sum of the Xi's. And then it's multiplied by 1, and then divided by this integral between 0 and 1 of p to the sum of the Xi's, times 1 minus p to the n minus sum of the Xi's, dp, which does not depend on p. And I really don't care what this thing actually is. That's my posterior for p. And now I can see, well, what is this? It's actually just the beta with parameters this guy plus 1 and this guy plus 1. I didn't tell you what the expectation of a beta was. We don't know what the expectation of a beta is, agreed? If I wanted to find, say, the expectation of this thing, that would be some good estimator. We know the maximum of this guy--what is the maximum of this thing? Well, it's just the average of the Xi's. That's just the maximum likelihood estimator for a Bernoulli. We know it's the average. Do you think, if I take the expectation of this thing, I'm going to get the average? So actually, I'm not going to get the average.
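A tiny illustration of this uniform-prior posterior, assuming scipy is available (the all-zeros data set is deliberately extreme): even when every observation is 0, the posterior Beta(S + 1, n - S + 1) still spreads its mass over the whole interval rather than collapsing onto p = 0:

import numpy as np
from scipy.stats import beta

x = np.zeros(3, dtype=int)          # n = 3, observations 0, 0, 0
S, n = x.sum(), len(x)
post = beta(S + 1, n - S + 1)       # posterior under the uniform prior
print(post.cdf(0.2))                # posterior probability that p < 0.2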
I'm going to get this guy plus 1, divided by n plus 2. Let's look at what this thing is doing. It's looking at the number of ones and it's adding one. And this guy is looking at the number of zeros and it's adding one. Why is it adding this one? What's going on here? This is going to matter mostly when the number of ones is actually zero, or the number of zeros is zero. Because what it does is just push the estimate away from zero. And why is that something that this Bayesian method actually does for you automatically? It's because, when we put this non-informative prior on p, which was uniform on the interval 0, 1, in particular, we know that the probability that p is equal to 0 is zero, and the probability that p is equal to 1 is zero. And so the problem is that, if I did not add this 1, then with some positive probability I would spit out a p hat which was equal to 0. If by chance, let's say, I have n equal to 3, and I get only 0, 0, 0--that could happen with probability 1 minus p, cubed. That's not something that I want. And I'm using my prior. My prior is not informative, but somehow it captures the fact that I don't want to believe p is going to be either equal to 0 or 1. So that's sort of taken care of here. So let's move away a little bit from the Bernoulli example, shall we? I think we've seen enough of it. And so let's talk about the Gaussian model. Let's say I want to do inference in a Gaussian model, using Bayesian methods. What I want is that X1, Xn are, say, N(0, 1) iid--sorry, N(theta, 1), iid conditionally on theta. That means that pn of X1, Xn given theta is equal to exactly what I wrote before: 1 over square root of 2 pi, to the n, exponential minus one half sum of Xi minus theta squared. So that's just the joint distribution of my Gaussians with mean theta. And now the question is, what is the posterior distribution? Well, here I said, let's use the uninformative prior, which is an improper prior. It puts the same weight on everyone. That's the so-called uniform on the entire real line. So that's certainly not a density, but I can still just use this. So all I need to do is take this and normalize it. But if you look at this, essentially I want to understand: so this is proportional to exponential minus one half sum from i equal 1 to n of Xi minus theta squared. And now I want to see this thing as a density, not on the Xi's but on theta. What I want is a density on theta. So it looks like I have chances of getting something that looks like a Gaussian. To have a Gaussian, I would need to see minus one half, and then I would need to see theta minus something here, not just the sum of somethings minus theta. So I need to work a little bit more, to expand the square here. So this thing here is going to be equal to exponential minus one half sum from i equal 1 to n of Xi squared, minus 2 Xi theta, plus theta squared. Now what I'm going to do is--everything, remember, is up to this little proportionality sign--every time I see a term that does not depend on theta, I can just push it in there and just make it disappear. Agreed? This term here, exponential minus one half sum of Xi squared: does it depend on theta? No. So I'm just pushing it in there. This guy, yes. And the other one, yes. So this is proportional to exponential of sum of the Xi times theta--I'm going to pull out my theta; the minus one half cancelled with the minus 2--and then I have minus one half sum from i equal 1 to n of theta squared. Agreed?
So now, what does this thing look like? This looks very much like some theta minus something, squared. This thing here is really just n over 2 times theta squared. So now what I need to do is to write this in the form theta minus something--let's call it mu--squared, divided by 2 sigma squared. I want to turn this into that, maybe up to terms that do not depend on theta. That's what I'm going to try to do. So that's called completing the square. That's some exercise you do--you've probably done it already in the homework. And that's something you do a lot when you do Bayesian statistics, in particular. So let's do this. What is the leading term going to be? Theta squared is going to be multiplied by this thing. So I'm going to pull out my n over 2, and then I'm going to write this as minus n over 2 times theta minus something, squared. And this something is going to be one half of what I see in the cross product. I need to actually pull this thing out. So let me write it like that first. So that's theta squared, and then I'm going to write it as minus 2 times 1 over n sum from i equal 1 to n of Xi, times theta. That's exactly just a rewriting of what we had before. And that should look much more familiar: a squared minus 2 b a, and then I'm missing something. So this thing, I'm going to be able to rewrite as theta minus Xn bar, squared. But then I need to remove the square of Xn bar, because it's not here. So I just complete the square. And then I actually really don't care what this thing actually is, because it's going to go again into the little proportionality sign over there. So this thing eventually is proportional to exponential of minus n over 2 times theta minus Xn bar, squared. And so we know that, if this is a density that's proportional to this guy, it has to be some N with mean Xn bar and variance--this guy over here, this n, is supposed to be 1 over sigma squared. So the variance is really just 1 over n. So the posterior distribution is a Gaussian centered at the average of my observations, with variance 1 over n. Everybody's with me? Why am I saying this? This was the output of some computation, but it sort of makes sense, right? It's really telling me that the more observations I have, the more concentrated this posterior is. Concentrated around what? Well, around this Xn bar. That looks like something we've sort of seen before. But it does not have the same meaning, somehow. This is really just the posterior distribution. It's sort of a sanity check that I have this 1 over n when I have Xn bar. But it's not the same thing as saying that the variance of Xn bar is 1 over n, like we had before. As an exercise, I would recommend, if you don't get it, to just try pi of theta equal to some N(mu, 1). Here, the prior that we used was completely non-informative. What happens if I take my prior to be some Gaussian, which is centered at mu and has the same variance as the other guys? So what's going to happen here is that we're going to put a weight, and everything that's away from mu is going to actually get less weight. I want to know how I'm going to be updating this prior into a posterior. Everybody sees what I'm saying here? So that means that pi of theta has a density proportional to exponential minus one half theta minus mu squared. So I need to multiply my likelihood with this, and then see. It's actually going to be a Gaussian. This is also a conjugate prior. It's going to spit out another Gaussian.
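The completing-the-square computation can be double-checked numerically; a sketch of mine, assuming numpy and scipy, comparing the grid-normalized likelihood (the flat-prior posterior) with the claimed N(Xn bar, 1/n) density:

import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

rng = np.random.default_rng(5)
theta_true, n = 1.5, 40
x = rng.normal(theta_true, 1.0, size=n)
grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 2001)
# flat improper prior: the posterior is proportional to the likelihood in theta
loglik = np.array([norm.logpdf(x, t, 1.0).sum() for t in grid])
post = np.exp(loglik - loglik.max())
post /= trapezoid(post, grid)
# compare to the claimed posterior N(mean(x), 1/n); norm.pdf takes the std 1/sqrt(n)
print(np.max(np.abs(post - norm.pdf(grid, x.mean(), 1.0 / np.sqrt(n)))))  # close to 0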
You're going to have to complete a square again, and just check what it's actually giving you. And so, spoiler alert, it's going to look like you get an extra observation, which is actually equal to mu. It's going to be the average of n plus 1 observations, the first n ones being X1 to Xn, and then the last one being mu. And it sort of makes sense. That's actually a fairly simple exercise. Rather than going into more computation, this is something you can definitely do in the comfort of your room. I want to talk about other types of priors. The first thing I said is, there's this beta prior that I just pulled out of my hat, and that was just convenient. Then there was this non-informative prior. It was convenient. It was non-informative, so if you don't know anything else, maybe that's what you want to do. The question is, are there any other priors that are sort of principled and generic, in the sense that the uninformative prior was generic, right? It was equal to 1--that's as generic as it gets. So is there anything else that's generic as well? Well, there are these priors that are called Jeffreys priors. And the Jeffreys prior is proportional to the square root of the determinant of the Fisher information at theta. This is actually a weird thing to do. It says: look at your model. Your model is going to have a Fisher information. Let's say it exists, because we know it does not always exist. For example, in the multinomial model, we didn't have a Fisher information. The determinant of a matrix is somehow measuring the size of a matrix. If you don't trust me, just think about the matrix being of size one by one; then the determinant is just the number that you have there. And so this is really something that looks like the Fisher information. It's proportional to the amount of information that you have at a certain point. And so what my prior is saying is, well, I want to put more weight on those thetas that are going to extract more information from the data. You can actually compute those things. In the first example, the Jeffreys prior is something that looks like this. In one dimension, the Fisher information is essentially one over the variance. So that's just 1 over the square root of the variance, because I have the square root. And in the Gaussian case, this is the identity matrix that I would have. The determinant of the identity is 1. So square root of 1 is 1, and so I would basically get 1. And that gives me my improper prior, my uninformative prior that I had. So the uninformative prior 1 is fine. Clearly, all the thetas carry the same information in the Gaussian model. Whether I translate it here or here, it's pretty clear none of them is actually better than the other. But clearly, for the Bernoulli case, the p's that are closer to the boundary carry more information. I sort of like those guys, because they just carry more information. So what I do is, I take this function. So p times 1 minus p--remember, it's something that looks like this on the interval 0, 1. This guy, 1 over square root of p times 1 minus p, is something that looks like this. Agreed? What it's doing is, it sort of wants to push towards the p's that actually carry more information. Whether you want to bias your data that way or not is something you need to think about. When you put a prior on your parameter, you're sort of biasing your inference towards this idea.
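For the Bernoulli case, this Jeffreys prior can be written down and normalized explicitly; a small sketch assuming scipy (the normalizing constant comes out to pi, which is to say the Jeffreys prior here is exactly the Beta(1/2, 1/2) density):

import numpy as np
from scipy.integrate import quad

# Fisher information for Bernoulli(p) is I(p) = 1 / (p (1 - p)),
# so the Jeffreys prior is proportional to 1 / sqrt(p (1 - p))
jeffreys = lambda p: 1.0 / np.sqrt(p * (1.0 - p))
Z, _ = quad(jeffreys, 0.0, 1.0)
print(Z, np.pi)  # the normalizing constant equals pi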
That's maybe not such a good idea when you have some p that's actually close to one half, for example. You're actually saying, no, I don't want to see a p that's close to one half. Just make a decision, one way or another. But just make a decision. So it's forcing you to do that. Jeffreys priors-- I'm running out of time, so I don't want to go into too much detail. We'll probably stop here, actually. So Jeffreys priors have this very nice property. It's that they actually do not care about the parameterization of your space. If you actually have p and you suddenly decide that p is not the right parameter for Bernoulli, but it's p squared, you could decide to parameterize this by p squared. Maybe your doctor is actually much more able to formulate some prior assumption on p squared, rather than p. You never know. And so what happens is that Jeffreys priors are invariant under this. And the reason is because the information carried by p is the same as the information carried by p squared, somehow. They're essentially the same thing. You need to have a one-to-one map, where basically for each parameter before, you have another parameter. Let's call eta the new parameter. The PDF of the new prior, indexed by eta this time, is actually also a Jeffreys prior. But this time, the new Fisher information is not the Fisher information with respect to theta. But it's the Fisher information associated to the statistical model indexed by eta. So essentially, when you change the parameterization of your model, you still get a Jeffreys prior for the new parameterization. Which is, in a way, a desirable property. Jeffreys priors are just uninformative priors, or priors you want to use when you want a systematic way of picking a prior without really thinking about what to pick for your model. I'll finish this next time. And we'll talk about Bayesian confidence regions. We'll talk about Bayesian estimation. Once I have a posterior, what do I get? And basically, the only message is going to be that you might want to integrate against the posterior. Find the posterior mean, the expectation of your posterior distribution. That's a good point estimator for theta. We'll just do a couple of computations.
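Here is a quick numerical check of that invariance for the Bernoulli model with the reparameterization eta equals p squared mentioned above (a sketch, not from the lecture):

import numpy as np

# Two ways to get a prior on eta = p^2 should agree up to normalization:
#   (a) compute the Jeffreys prior directly in eta, i.e., sqrt(I(eta)),
#       using the chain rule I(eta) = I(p) * (dp/deta)^2;
#   (b) push the p-space Jeffreys prior through the change of variables,
#       i.e., pi(p) * |dp/deta|.
eta = np.linspace(0.01, 0.98, 50)
p = np.sqrt(eta)
dp_deta = 1.0 / (2.0 * np.sqrt(eta))

I_p = 1.0 / (p * (1.0 - p))              # Bernoulli Fisher information in p
direct = np.sqrt(I_p * dp_deta**2)       # (a) Jeffreys prior computed in eta
pushforward = np.sqrt(I_p) * dp_deta     # (b) change of variables from p

print(np.allclose(direct, pushforward))  # True: same prior either way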
MIT_18650_Statistics_for_Applications_Fall_2016
19_Principal_Component_Analysis.txt
PHILIPPE RIGOLLET: --bunch of x's and a bunch of y's. The y's were univariate, just one real valued random variable. And the x's were vectors that described a bunch of attributes for each of our individuals or each of our observations. Let's assume now that we're given essentially only the x's. This is sometimes referred to as unsupervised learning. There is just the x's. Usually, supervision is done by the y's. And so what you're trying to do is to make sense of this data. You're going to try to understand this data, represent this data, visualize this data, try to understand something, right? So, if I give you a d-dimensional random vector, and you're going to have n independent copies of this individual-- of this random vector, OK? So you will see that I'm going to have-- I'm going to very quickly run into some limitations about what I can actually draw on the board because I'm using boldface here. I'm also going to use the blackboard boldface. So it's going to be a bit difficult. So tell me if you're actually a little confused by what is a vector, what is a number, and what is a matrix. But we'll get there. So I have X in Rd, and that's a random vector. And I have X1 to Xn that are IID. They're independent copies of X. OK, so you can think of those as being-- the realizations of these guys are going to be a cloud of n points in R to the d. And we're going to think of d as being fairly large. And for this to start to make sense, we're going to think of d as being at least 4, OK? And meaning that you're going to have a hard time visualizing those things. If it was 3 or 2, you would be able to draw these points. And that's pretty much as much sense you're going to be making about those guys, just looking at the [INAUDIBLE] All right, so I'm going to write each of those X's, right? So this vector, X, has d coordinates. And I'm going to write them as X1 to Xd. And I'm going to stack them into a matrix, OK? So once I have those guys, I'm going to have a matrix. But here, I'm going to use the double bar. And it's X1 transpose, Xn transpose. So what it means is that the coordinates of this guy, of course, are X1,1. Here, I have-- I'm of size d, so I have X1,d. And here, I have Xn,1, Xn,d. And so the entry in the i-th row and j-th column of the matrix is the j-th coordinate of Xi. OK, so each-- so the rows here are the observations. And the columns are the covariates, or attributes. OK? So this is an n by d matrix. All right, this is really just some bookkeeping. How do we store this data somehow? And the fact that we use a matrix just like for regression is going to be convenient because we're going to be able to talk about projections-- going to be able to talk about things like this. All right, so everything I'm going to say now is about variances or covariances of those things, which means that I need two moments, OK? If the variance does not exist, there's nothing I can say about this problem. So I'm going to assume that the variance exists. And one way to just put it is to say that the two norm of those guys is finite, which is another way to say that each of them is finite. I mean, you can think of it the way you want. All right, so now, the mean of X, right?
So I have a random vector. So I can talk about the expectation of X. That's a vector that's in Rd. And that's just taking the expectation entrywise. Sorry-- expectation of X1 to expectation of Xd. OK, so I should say it out loud. For the purpose of this class, I will denote by subscripts the indices that correspond to observations. And superscripts, the indices that correspond to coordinates of a variable. And I think that's the same convention that we took for the regression case. Of course, you could use whatever you want. If you want to put commas, et cetera, it becomes just a bit more complicated. All right, and so now, once I have this, so this tells me where my cloud of points is centered, right? So if I have a bunch of points-- OK, so now I have a distribution on Rd, so maybe I should talk about this-- I'll talk about this when we talk about the empirical version. But if you think that you have, say, a two-dimensional Gaussian random variable, then you have a center in two dimensions, which is where it peaks, basically. And that's what we're talking about here. But the other thing we want to know is how much does it spread in every direction, right? So in every direction of the two-dimensional thing, I can then try to understand how much spread I'm getting. And the way you measure this is by using covariance, right? So the covariance matrix, sigma-- that's a matrix which is d by d. And it records-- in the j, k-th entry, it records the covariance between the j-th coordinate of X and the k-th coordinate of X, OK? So with entries-- OK, so I have sigma, which is sigma 1,1, sigma d,d, sigma 1,d, sigma d,1. OK, and here I have sigma j,k. And sigma j,k is just the covariance between Xj and Xk, the j-th coordinate and the k-th coordinate. OK? So in particular, it's symmetric, because the covariance between Xj and Xk is the same as the covariance between Xk and Xj. I should not put those parentheses here. I do not use them in this, OK? Just the covariance matrix. So that's just something that records everything. And so what's nice about the covariance matrix is that if I actually give you X as a vector, you actually can build the matrix just by looking at vectors times vectors transpose, rather than actually thinking about building it coordinate by coordinate. So for example, if you're used to using MATLAB, that's the way you want to build a covariance matrix, because MATLAB is good at manipulating vectors and matrices rather than just entering it entry by entry. OK, so, right? So, what is the covariance between Xj and Xk? Well, by definition, it's the expectation of Xj times Xk minus the expectation of Xj times the expectation of Xk, right? That's the definition of the covariance. I hope everybody's seeing that. And so, in particular, I can actually see that this thing can be written as-- sigma can now be written as the expectation of XX transpose minus the expectation of X times the expectation of X transpose. Why? Well, let's look at the jk-th coefficient of this guy, right? So here, if I look at the jk-th coefficient, I see what? Well, I see that it's the jk-th entry of the expectation of XX transpose, which is equal to the expectation of the jk-th entry of XX transpose. And what are the entries of XX transpose? Well, they're of the form Xj times Xk exactly. So this is actually equal to the expectation of Xj times Xk. And this is actually not the way I want to write it. I want to write it-- OK? Is that clear? That when I have a rank 1 matrix of this form, XX transpose, the entries are of this form, right?
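In code, this rank 1 structure is easy to see (a minimal sketch, not from the lecture; the 3-by-3 board example that follows looks like this):

import numpy as np

# The (j, k) entry of x x^T is x_j * x_k.
x = np.array([1.0, 2.0, 3.0])
print(np.outer(x, x))   # same as x[:, None] * x[None, :]
# [[1. 2. 3.]
#  [2. 4. 6.]
#  [3. 6. 9.]]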
Because if I take-- for example, think about x, y, z, and then I multiply by x, y, z. What I'm getting here is x-- maybe I should actually use indices here. x1, x2, x3. x1, x2, x3. The entries are x1x1, x1x2, x1x3; x2x1, x2x2, x2x3; x3x1, x3x2, x3x3, OK? So indeed, this is exactly of the form if you look at jk, you get exactly Xj times Xk, OK? So that's the beauty of those matrices. So now, once I have this, I can do exactly the same thing, except that here, if I take the jk-th entry, I will get exactly the same thing, except that it's not going to be the expectation of the product, but the product of the expectation, right? So I get that the jk-th entry of E of X, E of X transpose, is just the j-th entry of E of X times the k-th entry of E of X. So if I put those two together, it's actually telling me that if I look at the j, k-th entry of sigma, which I called little sigma jk, then this is actually equal to what? It's equal to the first term minus the second term. The first term is the expectation of Xj, Xk minus the expectation of Xj, expectation of Xk, which-- oh, by the way, I forgot to say this is actually equal to the expectation of Xj times the expectation of Xk because that's just the definition of the expectation of random vectors. So my j and my k are now inside. And that's by definition the covariance between Xj and Xk, OK? So just if you've seen those manipulations between vectors, hopefully you're bored out of your mind. And if you have not, then that's something you just need to get comfortable with, right? So one thing that's going to be useful is to know very quickly what's called the outer product of a vector with itself, which is the vector of times the vector transpose, what the entries of these things are. And that's what we've been using on this second set of boards. OK, so everybody agrees now that we've sort of showed that the covariance matrix can be written in this vector form. So expectation of XX transpose minus expectation of X, expectation of X transpose. OK, just like the covariance can be written in two ways, right we know that the covariance can also be written as the expectation of Xj minus expectation of Xj times Xk minus expectation of Xk, right? That's the-- sometimes, this is the original definition of covariance. This is the second definition of covariance. Just like you have the variance which is the expectation of the square of X minus c of X, or the expectation X squared minus the expectation of X squared. It's the same thing for covariance. And you can actually see this in terms of vectors, right? So this actually implies that you can also rewrite sigma as the expectation of X minus expectation of X times the same thing transpose. Right? And the reason is because if you just distribute those guys, this is just the expectation of XX transpose minus X, expectation of X transpose minus expectation of XX transpose. And then I have plus expectation of X, expectation of X transpose. Now, things could go wrong because the main difference between matrices slash vectors and numbers is that multiplication does not commute, right? So in particular, those two things are not the same thing. And so that's the main difference that we have before, but it actually does not matter for our problem. It's because what's happening is that if when I take the expectation of this guy, then it's actually the same as the expectation of this guy, OK? 
And so just because the expectation is linear-- so what we have is that sigma now becomes equal to the expectation of XX transpose minus the expectation of X, expectation of X transpose minus expectation of X, expectation of X transpose. And then I have-- well, really, what I have is this guy. And then I have plus the expectation of X, expectation of X transpose. And now, those three things are actually equal to each other just because the expectation of X transpose is the same as the expectation of X transpose. And so what I'm left with is just the expectation of XX transpose minus the expectation of X, expectation of X transpose, OK? So same thing that's happening when you want to prove that you can write the covariance either this way or that way. The same thing happens for matrices, or for vectors, right, or a covariance matrix. They go together. Is there any questions so far? And if you have some, please tell me, because I want to-- I don't know to which extent you guys are comfortable with this at all or not. OK, so let's move on. All right, so of course, this is what I'm describing in terms of the distribution right here. I took expectations. Covariances are also expectations. So those depend on some distribution of X, right? If I wanted to compute that, I would basically need to know what the distribution of X is. Now, we're doing statistics, so I need to [INAUDIBLE] my question is going to be to say, well, how well can I estimate the covariance matrix itself, or some properties of this covariance matrix based on data? All right, so if I want to understand what my covariance matrix looks like based on data, I'm going to have to basically form its empirical counterparts, which I can do by doing the age-old statistical trick, which is replace your expectation by an average, all right? So let's just-- everything that's on the board, you see expectation, just replace it by an average. OK, so, now I'm going to be given X1, Xn. So, I'm going to define the empirical mean. OK so, really, the idea is take your expectation and replace it by 1 over n sum, right? And so the empirical mean is just 1 over n. Some of the Xi's-- I'm guessing everybody knows how to average vectors. It's just the average of the coordinates. So I will write this as X bar. And the empirical covariance matrix, often called sample covariance matrix, hence the notation, S. Well, this is my covariance matrix, right? Let's just replace the expectations by averages. 1 over n, sum from i equal 1 to n, of Xi, Xi transpose, minus-- this is the expectation of X. I will replace it by the average, which I just called X bar, X bar transpose, OK? And that's when I want to use the-- that's when I want to use the notation-- the second definition, but I could actually do exactly the same thing using this definition here. Sorry, using this definition right here. So this is actually 1 over n, sum from i equal 1 to n, of Xi minus X bar, Xi minus X bar transpose. And those are actually-- I mean, in a way, it looks like I could define two different estimators, but you can actually check. And I do encourage you to do this. If you're not comfortable making those manipulations, you can actually check that those two things are actually exactly the same, OK? So now, I'm going to want to talk about matrices, OK? And remember, we defined this big matrix, X, with the double bar. And the question is, can I express both X bar and the sample covariance matrix in terms of this big matrix, X? 
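The recommended check that the two expressions for S agree is easy to do numerically as well (a sketch with simulated data; not a proof, just a sanity check):

import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 4
X = rng.normal(size=(n, d))
x_bar = X.mean(axis=0)

# Form 1: (1/n) sum_i Xi Xi^T  -  x_bar x_bar^T
S1 = sum(np.outer(xi, xi) for xi in X) / n - np.outer(x_bar, x_bar)

# Form 2: (1/n) sum_i (Xi - x_bar)(Xi - x_bar)^T
S2 = sum(np.outer(xi - x_bar, xi - x_bar) for xi in X) / n

print(np.allclose(S1, S2))  # True: the two estimators are identical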
Because right now, it's still expressed in terms of the vectors. I'm summing those vectors, vectors transpose. The question is, can I just do that in a very compact way, in a way that I can actually remove this sum term, all right? That's going to be the goal. I mean, that's not a notational goal. That's really something that we want-- that's going to be convenient for us just like it was convenient to talk about matrices when we did linear regression. OK, X bar. We just said it's 1 over n, sum from I equal 1 to n of Xi, right? Now remember, what does this matrix look like? We said that X bar-- X is this guy. So if I look at X transpose, the columns of this guy becomes X1, my first observation, X2, my second observation, all the way to Xn, my last observation, right? Agreed? That's what X transpose is. So if I want to sum those guys, I can multiply by the all-ones vector. All right, so that's what the definition of the all-ones 1 vector is. Well, it's just a bunch of 1's in Rn, in this case. And so when I do X transpose 1, what I get is just the sum from i equal 1 to n of the Xi's. So if I divide by n, I get my average, OK? So here, I definitely removed the sum term. Let's see if with the covariance matrix, we can do the same. Well, and that's actually a little more difficult to see, I guess. But let's use this definition for S, OK? And one thing that's actually going to be-- so, let's see for one second, what-- so it's going to be something that involves X, multiplying X with itself, OK? And the question is, is it going to be multiplying X with X transpose, or X tranpose with X? To answer this question, you can go the easy route, which says, well, my covariance matrix is of size, what? What is the size of S? AUDIENCE: d by d. PHILIPPE RIGOLLET: d by d, OK? X is of size n by d. So if I do X times X transpose, I'm going to have something which is of size n by n. If I do X transpose X, I'm going to have something which is d by d. That's the easy route. And there's basically one of the two guys. You can actually open the box a little bit and see what's going on in there. If you do X transpose X, which we know gives you a d by d, you'll see that X is going to have vectors that are of the form, Xi, and X transpose is going to have vectors that are of the form, Xi transpose, right? And so, this is actually probably the right way to go. So let's look at what's X transpose X is giving us. So I claim that it's actually going to give us what we want, but rather than actually going there, let's-- to actually-- I mean, we could check it entry by entry, but there's actually a nice thing we can do. Before we go there, let's write X transpose as the following sum of variables, X1 and then just a bunch of 0's everywhere else. So it's still d by n. So n minus 1 of the columns are equal to 0 here. Then I'm going to put a 0 and then put X2. And then just a bunch of 0's, right? So that's just 0, 0 plus 0, 0, all the way to Xn, OK? Everybody agrees with it? See what I'm doing here? I'm just splitting it into a sum of matrices that only have one nonzero columns. But clearly, that's true. Now let's look at the product of this guy with itself. So, let's call these matrices M1, M2, Mn. So when I do X transpose X, what I do is the sum of the Mi's for i equal 1 to n, times the sum of the Mi transpose, right? Now, the sum of the Mi's transpose is just the sum of each of the Mi's transpose, OK? So now I just have this product of two sums, so I'm just going to re-index the second one by j. 
So this is sum for i equal 1 to n, j equal 1 to n of Mi Mj transpose. OK? And now what we want to notice is that if i is different from j, what's happening? Well if i is different from j, let's look at say, M1 times XM2 transpose. So what is the product between those two matrices? AUDIENCE: It's a new entry and [INAUDIBLE] PHILIPPE RIGOLLET: There's an entry? AUDIENCE: Well, it's an entry. It's like a dot product in that form next to [? transpose. ?] PHILIPPE RIGOLLET: You mean a dot product is just getting [INAUDIBLE] number, right? So I want-- this is going to be a matrix. It's the product of two matrices, right? This is a matrix times a matrix. So this should be a matrix, right, of size d by d. Yeah, I should see a lot of hands that look like this, right? Because look at this. So let's multiply the first-- let's look at what's going on in the first column here. I'm multiplying this column with each of those rows. The only nonzero coefficient is here, and it only hits this column of 0's. So every time, this is going to give you 0, 0, 0, 0. And it's going to be the same for every single one of them. So this matrix is just full of 0's, right? They never hit each other when I do the matrix-matrix multiplication. There's no-- every non-zero hits a 0. So what it means is-- and this, of course, you can check for every i different from j. So this means that Mi times Mj transpose is actually equal to 0 when i is different from j, Right? Everybody is OK with this? So what that means is that when I do this double sum, really, it's a simple sum. There's only just the sum from i equal 1 to n of Mi Mi transpose. Because this is the only terms in this double sum that are not going to be 0 when [INAUDIBLE] [? M1 ?] with M1 itself. Now, let's see what's going on when I do M1 times M1 transpose. Well, now, if I do Mi times and Mi transpose, now this guy becomes [? X1 ?] [INAUDIBLE] it's here. And so now, I really have X1 times X1 transpose. So this is really just the sum from i equal 1 to n of Xi Xi transpose, just because Mi Mi transpose is Xi Xi transpose. There's nothing else there. So that's the good news, right? This term here is really just X transpose X divided by n. OK, I can use that guy again, I guess. Well, no. Let's just-- OK, so let me rewrite S. All right, that's the definition we have. And we know that this guy already is equal to 1 over n X transpose X. x bar x bar transpose-- we know that x bar-- we just proved that x bar-- sorry, little x bar was equal to 1 over n X bar transpose times the all-ones vector. So I'm just going to do that. So that's just going to be minus. I'm going to pull my two 1 over n's-- one from this guy, one from this guy. So I'm going to get 1 over n squared. And then I'm going to get X bar-- sorry, there's no X bar here. It's just X. Yeah. X transpose all ones times X transpose all ones transpose, right? And X transpose all ones transpose-- right, the rule-- if I have A times B transpose, it's B transpose times A transpose, right? That's just the rule of transposition. So this is 1 transpose X transpose. And so when I put all these guys together, this is actually equal to 1 over n X transpose X minus one over n squared X transpose 1, 1 transpose X. Because X transpose transposes X, OK? So now, I can actually-- I have something which is of the form, X transpose X-- [INAUDIBLE] to the left, X transpose; to the right, X. Here, I have X transpose to the left, X to the right. So it can factor out whatever's in there. 
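Here is that compact matrix form checked numerically before it gets factored (a sketch, not from the lecture):

import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 3
X = rng.normal(size=(n, d))
ones = np.ones(n)

x_bar = X.T @ ones / n   # the (1/n) X^T 1 formula for the empirical mean
print(np.allclose(x_bar, X.mean(axis=0)))  # True

# S = (1/n) X^T X - (1/n^2) X^T 1 1^T X
S_matrix_form = X.T @ X / n - X.T @ np.outer(ones, ones) @ X / n**2

Xc = X - x_bar
S_definition = Xc.T @ Xc / n   # (1/n) sum_i (Xi - x_bar)(Xi - x_bar)^T
print(np.allclose(S_matrix_form, S_definition))  # True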
So I can write S as 1 over n-- sorry, X transpose times 1 over n times the identity of Rd. And then I have minus 1 over n, 1, 1 transpose X. OK, because if you-- I mean, you can distribute it back, right? So here, I'm going to get what? X transpose identity times X, the whole thing divided by n. That's this term. And then the second one is going to be-- sorry, 1 over n squared. And then I'm going to get 1 over n squared times X transpose 1, 1 transpose which is this guy, times X, and that's the [? right ?] [? thing, ?] OK? So, the way it's written, I factored out one of the 1 over n's. So I'm just going to do the same thing as on this slide. So I'm just factoring out this 1 over n here. So it's 1 over n times X transpose identity of our d divided by n divided by 1 this time, minus 1 over n 1, 1 transpose times X, OK? So that's just what's on the slides. What does the matrix, 1, 1 transpose, look like? AUDIENCE: All 1's. PHILIPPE RIGOLLET: It's just all 1's, right? Because the entries are the products of the all-ones-- of the coordinates of the all-ones vectors with the coordinates of the all-ones vectors, so I only get 1's. So it's a d by d matrix with only 1's. So this matrix, I can actually write exactly, right? H, this matrix that I called H which is what's sandwiched in-between this X transpose and X. By definition, I said this is the definition of H. Then this thing, I can write its coordinates exactly. We know it's identity divided by n minus-- sorry, I don't know why I keep [INAUDIBLE].. Minus 1 over n 1, 1 transpose-- so it's this matrix with the only 1's on the diagonals and 0's and elsewhere-- minus a matrix that only has 1 over n everywhere. OK, so the whole thing is 1 minus 1 over n on the diagonals and then minus 1 over n here, OK? And now I claim that this matrix is an orthogonal projector. Now, I'm writing this, but it's completely useless. This is just a way for you to see that it's actually very convenient now to think about this problem as being a matrix problem, because things are much nicer when you think about the actual form of your matrices, right? They could tell you, here is the matrix. I mean, imagine you're sitting at a midterm, and I say, here's the matrix that has 1 minus 1 over n on the diagonals and minus 1 over n on the [INAUDIBLE] diagonal. Prove to me that it's a projector matrix. You're going to have to basically take this guy times itself. It's going to be really complicated, right? So we know it's symmetric. That's for sure. But the fact that it has this particular way of writing it is going to make my life super easy to check this. That's the definition of a projector. It has to be symmetric and it has to square to itself because we just said in the chapter on linear regression that once you project, if you apply the projection again, you're not moving because you're already there. OK, so why is H squared equal to H? Well let's just write H square. It's the identity minus 1 over n 1, 1 transpose times the identity minus 1 over n 1, 1 transpose, right? Let's just expand this now. This is equal to the identity minus-- well, the identity times 1, 1 transpose is just the identity. So it's 1, 1 transpose, sorry. So 1 over n 1, 1 transpose minus 1 over n 1, 1 transpose. And then there's going to be what makes the deal is that I get this 1 over n squared this time. And then I get the product of 1 over n trans-- oh, let's write it completely. I get 1, 1 transpose times 1, 1 transpose, OK? But this thing here-- what is this? 
n, right, is the inner product of the all-ones vector with the all-ones vector. So I'm just summing n times 1 squared, which is n. So this is equal to n. So I pull it out, cancel one of the n's, and I'm back to what I had before. So I had identity minus 2 over n 1, 1 transpose plus 1 over n 1, 1 transpose, which is equal to H. Because one of the 1 over n's cancels, OK? So it's a projection matrix. It's projecting onto some linear space, right? It's taking a matrix. Sorry, it's taking a vector and it's projecting onto a certain space of vectors. What is this space? Right, so, how do you-- so I'm only asking for the answer to this question in words, right? So how would you describe the vectors onto which this matrix is projecting? Well, if you want to answer this question, the way you would tackle it is first by saying, OK, what does a vector which is of the form H times something look like, right? What can I say about this vector that's going to be definitely giving me something about the space on which it projects? I need to know a little more to know that it projects exactly onto this. But one way we can do this is just see how it acts on a vector. What does applying H do to a vector, right? So I take v. And let's see what taking v and applying H to it looks like. Well, it's the identity minus something. So it takes v and it removes something from v. What does it remove? Well, it's 1 over n times v transpose 1 times the all-ones vector, right? Agreed? I just wrote v transpose 1 instead of 1 transpose v, which are the same thing. What is this thing? What should I call it in mathematical notation? v bar, right? I should call it v bar because this is exactly the average of the entries of v, agreed? This is summing the entries of v, and this is dividing by the number of those entries. Sorry, now v is in R-- sorry, why do I divide by-- I'm just-- OK, I need to check what my dimensions are now. No, it's in Rd, right? So why do I divide by n? So it's not really v bar. It's the sum of the v's divided by-- right, so it's v bar. AUDIENCE: [INAUDIBLE] [INTERPOSING VOICES] AUDIENCE: Yeah, v has to be [INAUDIBLE] PHILIPPE RIGOLLET: Oh, yeah. OK, thank you. So everywhere I wrote Hd, that was actually Hn. Oh, man. I wish I had a computer now. All right. So-- yeah, because the-- yeah, right? So why it's not-- well, why I thought it was this is because I was thinking about the outer dimension of X, really of X transpose, and the inner dimension didn't matter to me, right? So the thing that I can sandwich between X transpose and X has to be n by n. So this was actually n by n. And so that's actually n by n. Everything is n by n. Sorry about that. So this is n. This is n. This is-- well, I didn't really tell you what the all-ones vector was, but it's also in Rn. Yeah, OK. Thank you. And n-- actually, I used the fact that this was of size n here already. OK, and so that's indeed v bar. So what is this projection doing to a vector? It's removing its average from each coordinate, right? And the effect of this is that-- v is a vector. What is the average of Hv? AUDIENCE: 0. PHILIPPE RIGOLLET: Right, so it's 0. It's the average of v, which is v bar, minus the average of something that only has v bar as its entries, which is v bar. So this thing is actually 0. So let me repeat my question. Onto what subspace does H project? Onto the subspace of vectors that have mean 0. A vector that has mean 0 is a vector whose entries average to 0.
So if you want to talk more linear algebra, v bar-- for a vector you have mean 0, it means that v is orthogonal to the span of the all-ones vector. That's it. It projects to this space. So in words, it projects onto the space of vectors that have 0 mean. In linear algebra, it says it projects onto the hyperplane which is orthogonal to the all-ones vector, OK? So that's all. Can you guys still see the screen? Are you good over there? OK. All right, so now, what it means is that, well, I'm doing this weird thing, right? I'm taking the inner product-- so S is taking X. And then it's removing its mean of each of the columns of X, right? When I take H times X, I'm basically applying this projection which consists in removing the mean of all the X's. And then I multiply by H transpose. But what's actually nice is that, remember, H is a projector. Sorry, I don't want to keep that. Which means that when I look at X transpose HX, it's the same as looking at X transpose H squared X. But since H is equal to its transpose, this is actually the same as looking at X transpose H transpose HX, which is the same as looking at HX transpose HX, OK? So what it's doing, it's first applying this projection matrix, H, which removes the mean of each of your columns, and then looks at the inner products between those guys, right? Each entry of this guy is just the covariance between those centered things. That's all it's doing. All right, so those are actually going to be the key statements. So everything we've done so far is really mainly linear algebra, right? I mean, looking at expectations and covariances was just-- we just used the fact that the expectation was linear. We didn't do much. But now there's a nice thing that's happening. And that's why we're going to switch from the language of linear algebra to more statistical, because what's happening is that if I look at this quadratic form, right? So I take sigma. So I take a vector, u. And I'm going to look at u-- so let's say, in Rd. And I'm going to look at u transpose sigma u. OK? What is this doing? Well, we know that u transpose sigma u is equal to what? Well, sigma is the expectation of XX transpose minus the expectation of X expectation of X transpose, right? So I just substitute in there. Now, u is deterministic. So in particular, I can push it inside the expectation here, agreed? And I can do the same from the right. So here, when I push u transpose here, and u here, what I'm left with is the expectation of u transpose X times X transpose u. OK? And now, I can do the same thing for this guy. And this tells me that this is the expectation of u transpose X times the expectation of X transpose u. Of course, u transpose X is equal to X transpose u. And u-- yeah. So what it means is that this is actually equal to the expectation of u transpose X squared minus the expectation of u transpose X, the whole thing squared. But this is something that should look familiar. This is really just the variance of this particular random variable which is of the form, u transpose X, right? u transpose X is a number. It involves a random vector, so it's a random variable. And so it has a variance. And this variance is exactly given by this formula. So this is just the variance of u transpose X. So what we've proved is that if I look at this guy, this is really just the variance of u transpose X, OK? I can do the same thing for the sample variance. So let's do this. And as you can see, spoiler alert, this is going to be the sample variance. 
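Both claims about H can be checked in a few lines (a sketch, not from the lecture): it is symmetric and idempotent, and applying it removes the average, leaving a vector orthogonal to the all-ones vector.

import numpy as np

n = 5
H = np.eye(n) - np.ones((n, n)) / n   # 1 - 1/n on the diagonal, -1/n off it

print(np.allclose(H, H.T))     # symmetric
print(np.allclose(H @ H, H))   # idempotent: H squared equals H

v = np.array([3.0, -1.0, 4.0, 1.0, 5.0])
Hv = H @ v
print(np.allclose(Hv, v - v.mean()))      # H subtracts v bar from each entry
print(np.isclose(Hv @ np.ones(n), 0.0))   # mean zero: orthogonal to all-ones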
OK, so remember, S is 1 over n sum of Xi Xi transpose minus X bar X bar transpose. So when I do u transpose Su, what it gives me is 1 over n sum from i equal 1 to n of u transpose Xi times Xi transpose u, all right? So those are two numbers that multiply each other and that happen to be equal to each other, minus u transpose X bar X bar transpose u, which is also the product of two numbers that happen to be equal to each other. So I can rewrite this with squares. So we're almost there. All I need to check is that this thing is actually the average of those guys, right? So u transpose X bar. What is it? It's 1 over n sum from i equal 1 to n of u transpose Xi. So it's really something that I can write as u transpose X bar, right? That's the average of those random variables of the form u transpose Xi. So what it means is that u transpose Su, I can write as 1 over n sum from i equal 1 to n of u transpose Xi squared minus u transpose X bar squared, which is the empirical variance that we denoted by small s squared, right? So that's the empirical variance of u transpose X1 all the way to u transpose Xn. OK, and here, same thing. I use exactly the same thing. I just use the fact that here, the only thing I use is really the linearity of this guy, of 1 over n sum, or the linearity of expectation, that I can push things in there, OK? AUDIENCE: So what have you written at the end of that sum for u transpose Su? PHILIPPE RIGOLLET: This one? AUDIENCE: Yeah. PHILIPPE RIGOLLET: Yeah, I said it's equal to small s, and I want to make a difference with the big S that I'm using here. So this is equal to small-- I don't know, I'm trying to make it look like a calligraphic s squared. OK, so this is nice, right? This covariance matrix-- so let's look at capital sigma itself right now. This covariance matrix, we know that if we read its entries, what we get is the covariance between the coordinates of the X's, right, of the random vector X. And the coordinates, well, by definition, are attached to a coordinate system. So I only know what the covariances between pairs of coordinates of X are. But what if I want to find covariances between linear combinations of the X's? And that's exactly what this allows me to do. It says, well, if I pre- and post-multiply by u, this is actually telling me what the variance of X along direction u is, OK? So there's a lot of information in there, and it's just really exploiting the fact that there is some linearity going on in the covariance. So, why variance? Why is variance interesting for us, right? Why? I started by saying, here, we're going to be interested in having something to do dimension reduction. We have-- think of your points as being in a dimension larger than 4, and we're going to try to reduce the dimension. So let's just think for one second, what do we want from a dimension reduction procedure? If I have all my points that live in, say, three dimensions, and I have one point here and one point here and one point here and one point here and one point here, and I decide to project them onto some plane-- if I take a plane that's just like this, what's going to happen is that those points are all going to project to the same point, right? I'm just going to not see anything. However, if I take a plane which is like this, they're all going to project onto some nice line.
Maybe I can even project them onto a line and they will still be far apart from each other. So that's what you want. You want to be able to say, when I take my points and I say I project them onto lower dimensions, I do not want them to collapse into one single point. I want them to be spread as possible in the direction on which I project. And this is what we're going to try to do. And of course, measuring spread between points can be done in many ways, right? I mean, you could look at, I don't know, sum of pairwise distances between those guys. You could look at some sort of energy. You can look at many ways to measure of spread in a direction. But variance is a good way to measure of spread between points. If you have a lot of variance between your points, then chances are they're going to be spread. Now, this is not always the case, right? If I have a direction in which all my points are clumped onto one big point and one other big point, it's going to choose this because that's the direction that has a lot of variance. But hopefully, the variance is going to spread things out nicely. So the idea of principal component analysis is going to try to identify those variances-- those directions along which we have a lot of variance. Reciprocally, we're going to try to eliminate the directions along which we do not have a lot of variance, OK? And let's see why. Well, if-- so here's the first claim. If you transpose Su is equal to 0, what's happening? Well, I know that an empirical variance is equal to 0. What does it mean for an empirical variance to be equal to 0? So I give you a bunch of points, right? So those points are those points-- u transpose X1, u transpose-- those are a bunch of numbers. What does it mean to have the empirical variance of those points being equal to 0? AUDIENCE: They're all the same. PHILIPPE RIGOLLET: They're all the same. So what it means is that when I have my points, right? So, can you find a direction for those points in which they project to all the same point? No, right? There's no such thing. For this to happen, you have to have your points which are perfectly aligned. And then when you're going to project onto the orthogonal of this guy, they're going to all project to the same point here, which means that the empirical variance is going to be 0. Now, this is an extreme case. This will never happen in practice, because if that happens, well, I mean, you can basically figure that out very quickly. So in the same way, it's very unlikely that you're going to have u transpose sigma u, which is equal to 0, which means that, essentially, all your points are [INAUDIBLE] or let's say all of them are orthogonal to u, right? So it's exactly the same thing. It just says that in the population case, there's no probability that your points deviate from this guy here. This happens with zero probability, OK? And that's just because if you look at the variance of this guy, it's going to be 0. And then that means that there's no deviation. By the way, I'm using the name projection when I talk about u transpose X, right? So let's just be clear about this. If you-- so let's say I have a bunch of points, and u is a vector in this direction. And let's say that u has the-- so this is 0. This is u. And let's say that u has norm, 1, OK? When I look, what is the coordinate of the projection? So what is the length of this guy here? Let's call this guy X1. What is the length of this guy? In terms of inner products? This is exactly u transpose X1. 
This length here, if this is X2, this is exactly u transpose X2, OK? So those-- u transpose X measure exactly the distance to the origin of those-- I mean, it's really-- think of it as being just an x-axis thing. You just have a bunch of points. You have an origin. And it's really just telling you what the coordinate on this axis is going to be, right? So in particular, if the empirical variance is 0, it means that all these points project to the same point, which means that they have to be orthogonal to this guy. And you can think of it as being also maybe an entire plane that's orthogonal to this line, OK? So that's why I talk about projection, because the inner products, u transpose X, is really measuring the coordinates of X when u becomes the x-axis. Now, if u does not have norm 1, then you just have a change of scale here. You just have a change of unit, right? So this is really u times X1. The coordinates should really be divided by the norm of u. OK, so now, just in the same way-- so we're never going to have exactly 0. But if we [INAUDIBLE] the other end, if u transpose Su is large, what does it mean? It means that when I look at my points as projected onto the axis generated by u, they're going to have a lot of variance. They're going to be far away from each other in average, right? That's what large variance means, or at least large empirical variance means. And same thing for u. So what we're going to try to find is a u that maximizes this. If I can find a u that maximizes this so I can look in every direction, and suddenly I find a direction in which the spread is massive, then that's a point on which I'm basically the less likely to have my points project onto each other and collide, right? At least I know they're going to project at least onto two points. So the idea now is to say, OK, let's try to maximize this spread, right? So we're going to try to find the maximum over all u's of u transpose Su. And that's going to be the direction that maximizes the empirical variance. Now of course, if I read it like that for all u's in Rd, what is the value of this maximum? It's infinity, right? Because I can always multiply u by 10, and this entire thing is going to multiplied by 100. So I'm just going to take u as large as I want, and this thing is going to be as large as I want, and so I need to constrain u. And as I said, I need to have u of size 1 to talk about coordinates in the system generated by u like this. So I'm just going to constrain u to have Euclidean norm equal to 1, OK? So that's going to be my goal-- trying to find the largest possible u transpose Su, or in other words, empirical variance of the points projected onto the direction u when u is of norm 1, which justifies to use the word, "direction," and because there's no magnitude to this u. OK, so how am I going to do this? I could just fold and say, let's just optimize this thing, right? Let's just take this problem. It says maximize a function onto some constraints. Immediately, the constraint is sort of nasty. I'm on a sphere, and I'm trying to move points on the sphere. And I'm maximizing this thing which actually happens to be convex. And we know we know how to minimize convex functions, but maximize them is a different question. And so this problem might be super hard. So I can just say, OK, here's what I want to do, and let me give that to an optimizer and just hope that the optimizer can solve this problem for me. That's one thing we can do. Now as you can imagine, PCA is so well spread, right? 
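Before moving on, the quantity being maximized here is easy to compute and sanity-check (a sketch with simulated data, not from the lecture):

import numpy as np

rng = np.random.default_rng(3)
n, d = 300, 4
X = rng.normal(size=(n, d))
u = rng.normal(size=d)
u /= np.linalg.norm(u)   # constrain u to have Euclidean norm 1

Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / n        # sample covariance matrix

proj = X @ u             # the coordinates u^T X_1, ..., u^T X_n along u
emp_var = np.mean(proj**2) - np.mean(proj)**2   # small s squared of the projections

print(np.isclose(u @ S @ u, emp_var))  # True: u^T S u is the empirical variance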
Principal component analysis is something that people do constantly. And so that means that we know how to do this fast. So that's one thing. The other thing that you should probably question about why-- if this thing is actually difficult, why in the world would you even choose the variance as a measure of spread if there's so many measures of spread, right? The variance is one measure of spread. It's not guaranteed that everything is going to project nicely far apart from each other. So we could choose the variance, but we could choose something else. If the variance does not help, why choose it? Turns out the variance helps. So this is indeed a non-convex problem. I'm maximizing, so it's actually the same. I can make this constraint convex because I'm maximizing a convex function, so it's clear that the maximum is going to be attained at the boundary. So I can actually just fill this ball into some convex ball. However, I'm still maximizing, so this is a non-convex problem. And this turns out to be the fanciest non-convex problem we know how to solve. And the reason why we know how to solve it is not because of optimization or using gradient-type things or anything of the algorithms that I mentioned during the maximum likelihood. It's because of linear algebra. Linear algebra guarantees that we know how to solve this. And to understand this, we need to go a little deeper in linear algebra, and we need to understand the concept of diagonalization of a matrix. So who has ever seen the concept of an eigenvalue? Oh, that's beautiful. And if you're not raising your hand, you're just playing "Candy Crush," right? All right, so, OK. This is great. Everybody's seen it. For my live audience of millions, maybe you have not, so I will still go through it. All right, so one of the basic facts-- and I remember when I learned this in-- I mean, when I was an undergrad, I learned about the spectral decomposition and this diagonalization of matrices. And for me, it was just a structural property of matrices, but it turns out that it's extremely useful, and it's useful for algorithmic purposes. And so what this theorem tells you is that if you take a symmetric matrix-- well, with real entries, but that really does not matter so much. And here, I'm going to actually-- so I take a symmetric matrix, and actually S and sigma are two such symmetric matrices, right? Then there exists P and D, which are both-- so let's say d by d. Which are both d by d such that P is orthogonal. That means that P transpose P is equal to PP transpose is equal to the identity. And D is diagonal. And sigma, let's say, is equal to PDP transpose, OK? So it's a diagonalization because it's finding a nice transformation. P has some nice properties. It's really just the change of coordinates in which your matrix is diagonal, right? And the way you want to see this-- and I think it sort of helps to think about this problem as being-- sigma being a covariance matrix. What does a covariance matrix tell you? Think of a multivariate Gaussian. Can everybody visualize a three-dimensional Gaussian density? Right, so it's going to be some sort of a bell-shaped curve, but it might be more elongated in one direction than another. And then going to chop it like that, all right? So I'm going to chop it off. And I'm going to look at how it bleeds, all right? So I'm just going to look at where the blood is. And what it's going to look at-- it's going to look like some sort of ellipsoid, right? In high dimension, it's just going to be an olive. 
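In code, the spectral decomposition just stated looks as follows for a symmetric positive semidefinite matrix (a sketch, not from the lecture; numpy's eigh is the routine for symmetric matrices):

import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T   # symmetric positive semidefinite by construction

eigenvalues, P = np.linalg.eigh(Sigma)
D = np.diag(eigenvalues)

print(np.allclose(P @ D @ P.T, Sigma))   # Sigma = P D P^T
print(np.allclose(P.T @ P, np.eye(4)))   # P is orthogonal
print(np.all(eigenvalues >= -1e-12))     # eigenvalues are non-negative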
And that is just going to be bigger and bigger. And then I chop it off a little lower, and I get something a little bigger like this. And so it turns out that sigma is capturing exactly this, right? The matrix sigma-- so the center of your covariance matrix of your Gaussian is going to be this thing. And sigma is going to tell you which direction it's elongated. And so in particular, if you look, if you knew an ellipse, you know there's something called principal axis, right? So you could actually define something that looks like this, which is this axis, the one along which it's the most elongated. Then the axis along which is orthogonal to it, along which it's slightly less elongated, and you go again and again along the orthogonal ones. It turns out that those things here is the new coordinate system in which this transformation, P and P transpose, is putting you into. And D has entries on the diagonal which are exactly this length and this length, right? So that's just what it's doing. It's just telling you, well, if you think of having this Gaussian or this high-dimensional ellipsoid, it's elongated along certain directions. And these directions are actually maybe not well aligned with your original coordinate system, which might just be the usual one, right-- north, south, and east, west. Maybe I need to turn it. And that's exactly what this orthogonal transformation is doing for you, all right? So, in a way, this is actually telling you even more. It's telling you that any matrix that's symmetric, you can actually turn it somewhere. And that'll start to dilate things in the directions that you have, and then turn it back to what you originally had. And that's actually exactly the effect of applying a symmetric matrix through a vector, right? And it's pretty impressive. It says if I take sigma times v. Any sigma that's of this form, what I'm doing is-- that's symmetric. What I'm really doing to v is I'm changing its coordinate system, so I'm rotating it. Then I'm changing-- I'm multiplying its coordinates, and then I'm rotating it back. That's all it's doing, and that's what all symmetric matrices do, which means that this is doing a lot. All right, so OK. So, what do I know? So I'm not going to prove that this is the so-called spectral theorem. And the diagonal entries of D is of the form, lambda 1, lambda 2, lambda d, 0, 0. And the lambda j's are called eigenvalues of D. Now in general, those numbers can be positive, negative, or equal to 0. But here, I know that sigma and S are-- well, they're symmetric for sure, but they are positive semidefinite. What does it mean? It means that when I take u transpose sigma u for example, this number is always non-negative. Why is this true? What is this number? It's the variance of-- and actually, I don't even need to finish this sentence. As soon as I say that this is a variance, well, it has to be non-negative. We know that a variance is not negative. And so, that's also a nice way you can use that. So it's just to say, well, OK, this thing is positive semidefinite because it's a covariance matrix. So I know it's a variance, OK? So I get this. Now, if I had some negative numbers-- so the effect of that is that when I draw this picture, those axes are always positive, which is kind of a weird thing to say. But what it means is that when I take a vector, v, I rotate it, and then I stretch it in the directions of the coordinate, I cannot flip it. I can only stretch or shrink, but I cannot flip its sign, all right? 
But in general, for any symmetric matrices, I could do this. But when it's positive semidefinite, actually what turns out is that all the lambda j's are non-negative. I cannot flip it, OK? So all the eigenvalues are non-negative. That's a property of positive semidefinite matrices. So when it's symmetric, you have the eigenvalues. They can be any number. And when it's positive semidefinite, in particular that's the case of the covariance matrix and the empirical covariance matrix, right? Because the quadratic form with the empirical covariance matrix is an empirical variance, which itself is non-negative. And so I get that the eigenvalues are non-negative. All right, so principal component analysis is saying, OK, I want to find the direction, u, that maximizes u transpose Su, all right? I've just introduced in one slide something about eigenvalues. So hopefully, they should help. So what is it that I'm going to be getting? Well, let's just see what happens. Oh, I forgot to mention that-- and I will use this. So the lambda j's come with eigenvectors. The matrix P has columns v1 to vd, OK? The fact that it's orthogonal-- that P transpose P is equal to the identity-- means that those guys satisfy that vi transpose vj is equal to 0 if i is different from j. And vi transpose vi is actually equal to 1, right, because the entries of P transpose P are exactly going to be of the form vi transpose vj, OK? So those v's are called eigenvectors. And v1 is attached to lambda 1, and v2 is attached to lambda 2, OK? So let's see what's happening with those things. What happens if I take sigma-- so if you know eigenvalues, you know exactly what's going to happen. If I look at, say, sigma times v1, well, what is sigma? We know that sigma is PDP transpose, so this is PDP transpose v1. What is P transpose times v1? Well, P transpose has rows v1 transpose, v2 transpose, all the way to vd transpose. So when I multiply this by v1, what I'm left with is the first coordinate is going to be equal to 1 and the second coordinate is going to be equal to 0, right? Because they're orthogonal to each other-- 0 all the way to the end. So that's when I do P transpose v1. Now I multiply by D. Well, I'm just multiplying this guy by lambda 1, this guy by lambda 2, and this guy by lambda d, so this is really just lambda 1 in the first coordinate. And now I need to post-multiply by P. So what is P times this guy? Well, P is v1 all the way to vd. And now I multiply by a vector that only has 0's except lambda 1 on the first guy. So this is just lambda 1 times v1. So what we've proved is that sigma times v1 is lambda 1 v1, and that's probably the notion of eigenvalue you're most comfortable with, right? So just when I multiply by v1, I get v1 back multiplied by something, which is the eigenvalue. So in particular, if I look at v1 transpose sigma v1, what do I get? Well, I get lambda 1 v1 transpose v1. And v1 transpose v1 is 1, right? So this is actually lambda 1. And if I do the same with v2, clearly I'm going to get v2 transpose sigma v2 is equal to lambda 2. So for each of the vj's, I know that if I look at the variance along the vj, it's actually exactly given by those eigenvalues, all right? Which proves this, because the variance along the eigenvectors is actually equal to the eigenvalues. So since they're variances, they have to be non-negative. So now, I'm looking for the one direction that has the most variance, right? But that's not only among the eigenvectors. That's also among the other directions that are in-between the eigenvectors.
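A quick check that the variance along each eigenvector is exactly the corresponding eigenvalue (a sketch with simulated data, not from the lecture):

import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3)) * np.array([3.0, 1.0, 0.2])  # anisotropic cloud
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / len(X)

lambdas, P = np.linalg.eigh(S)
for lam, v in zip(lambdas, P.T):       # columns of P are the eigenvectors
    print(np.isclose(v @ S @ v, lam))  # True: v_j^T S v_j = lambda_j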
If I were to look only at the eigenvectors, it would just tell me, well, just pick the eigenvector, vj, that's associated to the largest of the lambda j's. But it turns out that that's also true for any vector-- that the maximum direction is actually one direction which is among the eigenvectors. And among the eigenvectors, we know that the one that's the largest-- that carries the largest variance is the one that's associated to the largest eigenvalue, all right? And so this is what PCA is going to try to do for me. So in practice, that's what I mentioned already, right? We're trying to project the point cloud onto a low-dimensional space, D prime, by keeping as much information as possible. And by "as much information," I mean we do not want points to collide. And so what PCA is going to do is just going to try to project [? on two ?] directions. So there's going to be a u, and then there's going to be something orthogonal to u, and then the third one, et cetera, so that once we project on those, we're keeping as much of the covariance as possible, OK? And in particular, those directions that we're going to pick are actually a subset of the vj's that are associated to the largest eigenvalues. So I'm going to stop here for today. We'll finish this on Tuesday. But basically, the idea is it's just the following. You're just going to-- well, let me skip one more. Yeah, this is the idea. You're first going to pick the eigenvector associated to the largest eigenvalue. Then you're going to pick the direction that orthogonal to the vector that you've picked, and that's carrying the most variance. And that's actually the second largest-- the eigenvector associated to the second largest eigenvalue. And you're going to go all the way to the number of them that you actually want to pick, which is in this case, d, OK? And wherever you choose to chop this process, not going all the way to d, is going to actually give you a lower-dimensional representation in the coordinate system that's given by v1, v2, v3, et cetera, OK? So we'll see that in more details on Tuesday. But I don't want to get into it now. We don't have enough time. Are there any questions?
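As a concrete preview of the procedure just outlined, here is the whole pipeline on simulated data (a sketch with hypothetical data, not part of the lecture): project the cloud onto the d prime eigenvectors of S with the largest eigenvalues.

import numpy as np

rng = np.random.default_rng(6)
n, d, d_prime = 200, 5, 2
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))  # correlated cloud in R^d

Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / n

lambdas, P = np.linalg.eigh(S)     # eigh returns eigenvalues in increasing order
top = P[:, ::-1][:, :d_prime]      # eigenvectors of the d' largest eigenvalues

Y = Xc @ top                       # the projected, d'-dimensional representation
print(Y.shape)                     # (200, 2)
print(lambdas[::-1][:d_prime])     # variances captured along the two new axes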
MIT_18650_Statistics_for_Applications_Fall_2016
20_Principal_Component_Analysis_cont.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIPPE RIGOLLET: We keep on talking about principal component analysis, which we essentially introduced as a way to work with a bunch of data. So the data that's given to us when we want to do PCA is a bunch of vectors X1 to Xn. So they are random vectors in Rd. And what we mentioned is that we're going to be using linear algebra-- in particular, the spectral theorem-- that guarantees to us that if I look at the covariance matrix of this guy, or its empirical covariance matrix, since they're symmetric real matrices and they are positive semidefinite, there exists a diagonalization into non-negative eigenvalues. And so here, those things live in Rd, so it's a really large space. And what we want to do is to map it down into a space that we can visualize, hopefully a space of size 2 or 3. Or if not, then we're just going to take more and start looking at subspaces altogether. So think of the case where d is large but not larger than n. So let's say, you have a large number of points. The question is, is it possible to project those things onto a lower dimensional space, d prime, which is much less than d-- so think of d prime equals, say, 2 or 3-- so that you keep as much information about the cloud of points that you had originally. So again, the example that we could have is that X1 to Xn are, say, Xi for patient i, recording a bunch of body measurements and maybe blood pressure, some symptoms, et cetera. And then we have a cloud of n patients. And we're trying to visualize, maybe to see if-- if I could see, for example, that there's two groups of patients, maybe I would know that I have two groups with different diseases, or maybe two groups of patients that respond differently to a particular disease or drug, et cetera. So visualizing is going to give us quite a bit of insight about what the spatial arrangement of those vectors is. And so PCA says-- well, here, of course, in this question, one thing that's not defined is what is information. And we said that one thing we might want to do when we project is that points do not collide with each other. And so that means we're trying to find directions, where after I project, the points are still pretty spread out. And so I can see what's going on. And PCA says-- OK, so there's many ways to answer this question. And PCA says, let's just find a subspace of dimension d prime that keeps as much covariance structure as possible. And the reason is that those directions are the ones that maximize the variance, which is a proxy for the spread. There's many, many ways to do this. There's actually a Google video that was released maybe last week about the data visualization team of Google that shows you something called t-SNE, which is essentially something that tries to do that. It takes points in very high dimensions and tries to map them in lower dimensions, so that you can actually visualize them. And t-SNE is some alternative to PCA that gives another definition for the word information. I'll talk about this towards the end, how you can actually somewhat automatically extend everything we've said for PCA to an infinite family of procedures. So how do we do this? Well, the way we do this is as follows.
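Here is a hedged sketch of the starting object, the empirical covariance matrix built from n points in Rd; the toy data, dimensions, and seed below are my own choices for illustration:

```python
import numpy as np

# Form the empirical covariance matrix S from n points in R^d (rows of X).
rng = np.random.default_rng(1)
n, d = 200, 4
X = rng.standard_normal((n, d)) @ rng.standard_normal((d, d))  # correlated toy data

Xc = X - X.mean(axis=0)          # center: subtract X bar from every point
S = (Xc.T @ Xc) / n              # S = (1/n) sum_i (Xi - Xbar)(Xi - Xbar)^T

assert np.allclose(S, S.T)                       # symmetric
assert np.all(np.linalg.eigvalsh(S) >= -1e-10)   # positive semidefinite
```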
So remember, given those guys, we can form something which is called S, which is the sample, or the empirical, covariance matrix. And from a couple of slides ago, we know that S has an eigenvalue decomposition, S is equal to PDP transpose, where P is orthogonal. So that's where we use our linear algebra results. So that means that P transpose P is equal to PP transpose, which is the identity. So remember, S is a d by d matrix. And so P is also d by d. And D is diagonal. And I'm actually going to take, without loss of generality, I'm going to assume that D-- so it's going to be diagonal-- and I'm going to have something that looks like lambda 1 to lambda d. Those are called the eigenvalues of S. What we know is that the lambda j's are non-negative. And actually, what I'm going to assume without loss of generality is lambda 1 is larger than lambda 2, which is larger than, all the way down to, lambda d. Because in particular, this decomposition-- the spectral decomposition-- is not entirely unique. I could permute the columns of P, and I would still have an orthogonal matrix. And to balance that, I would also have to permute the entries of D. So there's as many decompositions as there are permutations. So there's actually quite a few. But the bag of eigenvalues is unique. The set of eigenvalues is unique. The ordering is certainly not unique. So here, I'm just going to pick-- I'm going to nail down one particular permutation-- actually, maybe two in case I have equalities. But let's say, I pick one that satisfies this. And the reason why I do this is really not very important. It's just to say, I'm going to want to talk about the largest of those eigenvalues. So this is just going to be easier for me to say that this one is lambda 1, rather than say it's lambda 7. So this is just to say that the largest eigenvalue of S is lambda 1. If I didn't do that, I would just call it maybe lambda max, and you would just know which one I'm talking about. So what's happening now is that if I look at D, then it turns out that if I start-- so if I do P transpose Xi, I am actually projecting my Xi's-- I'm basically changing the basis for my Xi's. And now, D is the empirical covariance matrix of those guys. So let's check that. So what it means is that if I look at-- so what I claim is that P transpose Xi-- that's a new vector, let's call it Yi, it's also in Rd-- and what I claim is that the covariance matrix of this guy is actually now this diagonal matrix, which means in particular that if they were Gaussian, then the coordinates would be independent. But I also know now that there's no correlation across coordinates of Yi. So to prove this, let me assume that X bar is equal to 0. And the reason why I do this is because it's just annoying to carry out all this centering constantly when I talk about S. So when X bar is equal to 0, that implies that S has a very simple form. It's of the form 1/n times the sum from i equal 1 to n of Xi Xi transpose. So that's my S. But what I want is the S of Y-- So OK, that implies also that P transpose times X bar is equal to 0. So that means that Y bar-- Y has mean 0, if this is 0. So if I look at the sample covariance matrix of Y, it's just going to be something that looks like the sum of the outer products, the Yi Yi transpose. And again, the reason why I make this assumption is so that I don't have to write minus X bar X bar transpose. But you can do it. And it's going to work exactly the same. So now, I look at this S prime. And so what is this S prime?
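A small sketch of the reordering convention, on the same toy data as above (regenerated here so the block runs on its own); note that numpy's eigh returns eigenvalues in ascending order, so we flip to match lambda 1 >= ... >= lambda d:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 4))
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / len(X)

lams, P = np.linalg.eigh(S)
order = np.argsort(lams)[::-1]          # permutation: decreasing eigenvalues
lams, P = lams[order], P[:, order]

# Permuting columns of P and entries of D together preserves S = P D P^T.
assert np.allclose(S, P @ np.diag(lams) @ P.T)
```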
Well, I'm just going to replace Yi with PXi. So it's the sum from i equal 1 to n of PXi times PXi transpose, which is equal to-- sorry, there's a 1/n. So it's equal to 1/n sum from i equal 1 to n of P Xi Xi transpose P transpose. Agree? I just said that the transpose of AB is the transpose of B times the transpose of A. And so now, I can push the sum in. P does not depend on i. So this thing here is equal to PS P transpose, because the sum of the Xi Xi transpose divided by n is S. But what is PS P transpose? Well, we know that S is equal to-- sorry, that's P transpose. So this was with a P transpose. I'm sorry, I made an important mistake here. So Yi is P transpose Xi. So this is P transpose and P transpose here, which means that this is P transpose and this is a double transpose, which is just nothing, and that's transpose and nothing. So now, I write S as PD P transpose. That's the spectral decomposition that I had before. That's my eigenvalue decomposition, which means that now, if I look at S prime, it's P transpose times PD P transpose times P. But now, P transpose P is the identity on both sides. So this is actually just equal to D. And again, you can check that this also works if you have to center all those guys as you go. But if you think about it, this is the same thing as saying that I just replaced Xi by Xi minus X bar. And then it's true that Yi is also P transpose times Xi minus X bar. So now, we have that D is the empirical covariance matrix of those guys-- the Yi's, which are the P transpose Xi's. And so in particular, what it means is that if I look at the covariance of Yj, Yk-- so that's the covariance of the j-th coordinate of Y and the k-th coordinate of Y. I'm just not putting an index. But maybe, let's say the first one or something like this-- any of them, they're IID. Then what is this covariance? It's actually 0 if j is different from k. And the covariance between Yj and Yj, which is just the variance of Yj, is equal to lambda j-- the j-th largest eigenvalue. So the eigenvalues capture the variance of my observations in this new coordinate system. And they're completely orthogonal. So what does that mean? Well, again, remember, if I chop off the head of my Gaussian in multiple dimensions, we said that what we started from was something that looked like this. And we said, well, there's one direction that's important, that's this guy, and one important that's this guy. When I applied the transformation P transpose, what I'm doing is that I'm realigning this thing with the new axes. Or in a way, rather to be fair, I'm not actually realigning the ellipses with the axes. I'm really realigning the axes with the ellipses. So really, what I'm doing is I'm saying, after I apply P transpose, I'm just rotating this coordinate system. So now, it becomes this guy. And now, my ellipses actually completely align. And what happens here is that this coordinate is independent of that coordinate. And that's what we write here, if they are Gaussian. I didn't really tell this-- I'm only making statements about covariances. If they are Gaussian, those imply statements about independence. So as I said, the variance now, lambda 1, is actually the variance of the first coordinate of P transpose Xi. But if I look now at the-- so this is a vector, so I need to look at the first coordinate of this guy. So it turns out that doing this is actually the same thing as looking at the variance of what? Well, the first column of P, transposed, times Xi.
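A minimal numerical check of this claim, using the same toy data as before (again regenerated so the block is self-contained): after the change of basis Yi = P transpose Xi, the empirical covariance of the Y's is exactly the diagonal matrix D.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 4))
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / len(X)
lams, P = np.linalg.eigh(S)

Y = Xc @ P                  # row i of Y is (P^T (Xi - Xbar))^T
S_prime = Y.T @ Y / len(Y)  # empirical covariance of the Yi's

# Off-diagonal entries vanish: no correlation across coordinates of Y.
assert np.allclose(S_prime, np.diag(lams), atol=1e-10)
```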
So that's the variance of-- I'm going to call it v1 transpose Xi, where-- so the v1 to vd in Rd are eigenvectors. And each vi is associated to lambda i. So that's what we saw when we talked about this eigendecomposition a couple of slides back. That's the one here. So if I call the columns of P v1 to vd, this is what's happening. So when I look at lambda 1, it's just the variance of Xi inner product with v1. And we made this picture when we said, well, let's say v1 is here and then Xi is here. And if v1 has unit norm, then the inner product between Xi and v1 is just the length of this guy here. So that's the variance of the Xi's-- so this is 0-- that's the length of Xi when I project it onto the direction spanned by v1. If v1 has length 2, this is really just twice this length. If v1 has length 3, it's three times this. But it turns out that since P satisfies P transpose P is equal to the identity-- that's an orthogonal matrix, that's right here-- then this is actually saying the same thing as vj transpose vj, which is really the norm squared of vj, is equal to 1. And vj transpose vk is equal to 0, if j is different from k. The eigenvectors are orthogonal to each other. And they're actually all of norm 1. So now, I know that this is indeed a direction. And so when I look at v1 transpose Xi, I'm really measuring exactly this length. And what is this length? It's the length of the projection of Xi onto this line. That's the line that's spanned by v1. So if I had a very high dimensional problem and I started to look at the direction v1-- let's say v1 now is not an eigenvector, it's any direction-- then if I want to do this lower dimensional projection, then I have to understand how those Xi's project onto the line that's spanned by v1, because this is all that I'm going to be keeping at the end of the day about the Xi's. So what we want is to find the direction where those Xi's, those projections, have a lot of variance. And we know that the variance of Xi in this direction is actually exactly given by lambda 1. Sorry, that's the empirical var-- yeah, I should call it variance hat. That's the empirical variance. Everything is empirical here. We're talking about the empirical covariance matrix. And so I also have that lambda 2 is the empirical variance when I project Xi onto v2, which is the second one, for exactly the same reason. Any questions? So the lambda j's are going to be important for us. The lambda j's measure the spread of the points when I project them onto a line, which is a one dimensional space. And so I'm going to have-- let's say I want to pick only one, I'm going to have to find the one dimensional space that carries the most variance. And I claim that v1 is the one that actually maximizes the spread. So the claim-- so for any direction, u in Rd-- and by direction, I really just mean that the norm of u is equal to 1. I need to play fair-- I'm going to compare myself to other things of length one, so I need to play fair and look at directions of length 1. Now, if I'm interested in the empirical variance of u transpose X1 to u transpose Xn, then this thing is maximized for u equals v1, where v1 is the eigenvector associated to lambda 1, and lambda 1 is not just any eigenvalue, it's the largest of all of them. So it's the largest eigenvalue. So why is that true?
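A one-line numerical confirmation of this identity on the same toy data (regenerated here): the empirical variance of the projections v1 transpose Xi equals the largest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 4))
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / len(X)
lams, P = np.linalg.eigh(S)
v1, lam1 = P[:, -1], lams[-1]        # leading eigenpair (eigh is ascending)

proj = Xc @ v1                       # scalar projection of each centered point
assert np.isclose(np.mean(proj ** 2), lam1)   # empirical variance = lambda_1
```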
Well, there's also a claim that for any direction u-- so that's 1 and 2-- the variance of u transpose X-- now, this is just a random variable, and I'm talking about the true variance-- this is maximized for u equals, let's call it w1, where w1 is the eigenvector of sigma-- now, I'm talking about the true variance, whereas, here, I was talking about the empirical variance. So the true version is the eigenvector of the true sigma associated to the largest eigenvalue of sigma. So I did not give it a name. Here, that was lambda 1 for the empirical one. For the true one, you can give it another name, mu 1 if you want. But that's just the same thing. All it's saying is, wherever I see empirical, I can remove it. So why is this claim true? Well, let's look at the second one, for example. So what is the variance of u transpose X? So that's what I want to know. So that's the expectation-- so let's assume that the expectation of X is 0, again, for the same reasons as before. So what is the variance? It's just the expectation of the square. I don't need to remove the mean. And the expectation of the square is just the expectation of u transpose X, and then I'm going to write the other one, X transpose u. And we know that u is deterministic. So I can just take it out, and this is just u transpose times the expectation of X X transpose times u. And what is this guy? That's the covariance, sigma. That's just what sigma is. So the variance I can write as u transpose sigma u. We've made this computation before. And now what I want to claim is that this thing is actually less than the largest eigenvalue, which I actually called lambda 1 here. I should probably not. And the P is-- well, OK. Let's just pretend everything is not empirical. So now, I'm going to write sigma as P, times the diagonal of lambda 1 to lambda d, times P transpose. That's just the eigendecomposition, where I admittedly reuse the same notation as I did for S. So I should really put some primes everywhere, so you know those are things that are actually different in practice. So this is just the decomposition of sigma. You seem confused, Helen. You have a question? Yeah? AUDIENCE: What is-- when you talked about the empirical data and-- PHILIPPE RIGOLLET: So OK-- so I can make everything I'm saying, I can talk about either the variance or the empirical variance. And you can just add the word empirical in front of it whenever you want. The same thing works. But just for the sake of removing the confusion, let's just do it again with S. So I'm just going to do everything with S. So I'm going to assume that X bar is equal to 0. And here, I'm going to talk about the empirical variance, which is just 1/n sum from i equal 1 to n of u transpose Xi squared. So it's the same thing. Everywhere you see an expectation, you just put in an average. And then I get u transpose times 1/n sum from i equal 1 to n of Xi Xi transpose, times u. And now, I'm going to call this guy S, because that's what it is. So this is u transpose Su. But given that I can just replace the expectation by averages everywhere, you can tell that the thing is going to work for either one or the other. So now, this thing was actually-- so now, I don't have any problem with my notation. This is actually the decomposition of S. That's just the spectral decomposition into its eigenvalues. And so now, what I have is that when I look at u transpose Su, this is actually equal to P transpose u, transposed, times D, times P transpose u. OK. There's a transpose somewhere. That's this guy. And that's this guy. And D is this diagonal matrix.
Let's look at this thing. And let's call P transpose u, let's call it b. So that's also a vector in Rd. What is it? It's just, I take a unit vector, and then I apply P transpose to it. So that's basically what happens to a unit vector when I apply the same change of basis that I did. So I'm just changing my orthogonal system the same way I did for the other ones. So what's happening when I write this? Well, now I have that u transpose Su is b transpose Db. But now, doing b transpose Db when D is diagonal and b is a vector is a very simple thing. I can expand it. This is what? This is just the sum from j equal 1 to d of lambda j bj squared. So that's just like matrix-vector multiplication. And in particular, I know that the largest of those guys is lambda 1 and the bj squared are all non-negative. So this thing is actually less than lambda 1 times the sum from j equal 1 to d of bj squared. And this is just the norm of b squared. So if I want to prove what's on the slide, all I need to check is that b has norm, which is-- AUDIENCE: 1. PHILIPPE RIGOLLET: At most, 1. It's going to be at most 1. Why? Well, because b is really just a change of basis for u. And so if I take a vector, I'm just changing its basis. I'm certainly not changing its length-- think of a rotation, and I can also flip it, but think of a rotation-- well, actually, for a vector, it's just going to be a rotation. And so now, what I have, I just have to check that the norm of b squared is equal to what? Well, it's equal to the norm of P transpose u squared, which is equal to u transpose P P transpose u. But P is orthogonal. So this P P transpose is actually just the identity. So that's just u transpose u, which is equal to the norm of u squared, which is equal to 1, because I took u to have norm 1 in the first place. And so this-- you're right-- was actually of norm equal to 1. I just needed to have it at most 1, but it's equal. And so what I'm left with is that this bound is actually equal to lambda 1. So I know that for every u that I pick-- that has norm-- so I'm just reminding you that u here has norm squared equal to 1-- this u transpose Su is at most lambda 1. And we know that that's the variance, the empirical variance, when I project my points onto the direction spanned by u. So now, I have an empirical variance, which is at most lambda 1. But I also know that if I take u to be something very specific-- I mean, it was on the previous board-- if I take u to be equal to v1, then this thing is actually not an inequality, this is an equality. And the reason is, when I actually take u to be v1, all of these bj's are going to be 0, except for the one that's b1, which is itself equal to 1. So I mean, we can briefly check this. But if u is equal to v1, what I have is that u transpose Su is equal to P transpose v1, transposed, times D, times P transpose v1. But what is P transpose v1? Well, remember, P transpose is just the matrix that has vectors v1 transpose here, v2 transpose here, all the way to vd transpose here. And we know that when I take vj transpose vk, I get 0, if j is different from k. And if j is equal to k, I get 1. So P transpose v1 is equal to what? Take v1 here and multiply it. So the first coordinate is going to be v1 transpose v1, which is 1. The second coordinate is going to be v2 transpose v1, which is 0. And so I get 0's all the way, right? So that means that this thing here is really just the vector 1, 0, 0.
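A Monte Carlo sanity check of the claim just proved, on the same toy data (regenerated; the 1,000 random directions and the tolerance are my own choices): u transpose Su never exceeds lambda 1, and equals it at u = v1.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 4))
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / len(X)
lams, P = np.linalg.eigh(S)
lam1, v1 = lams[-1], P[:, -1]

for _ in range(1000):
    u = rng.standard_normal(4)
    u /= np.linalg.norm(u)            # play fair: compare directions of norm 1
    assert u @ S @ u <= lam1 + 1e-10  # never exceeds the largest eigenvalue

assert np.isclose(v1 @ S @ v1, lam1)  # the bound is attained at v1
```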
And here, this is just the vector 1, 0, 0. So when I multiply it with this guy, I am only picking up the top left element of D, which is lambda 1. So for every u, it's at most lambda 1. And for v1, it's equal to lambda 1, which means that it's maximized for u equals v1. And that's where I said that this is the fanciest non-convex problem we know how to solve. This was a problem that was definitely non-convex. We were maximizing a convex function over a sphere. But we know that v1, which is something-- I mean, of course, you still have to believe me that you can compute the spectral decomposition efficiently-- but essentially, if you've taken linear algebra, you know that you can diagonalize a matrix. And so you get that v1 is just the maximizer. So you can find your maximum just by looking at the spectral decomposition. You don't have to do any optimization or anything like this. So let's recap. Where are we? We've established that if I start with my empirical covariance matrix, I can diagonalize it into PD P transpose. And then if I take the eigenvector associated to the largest eigenvalue-- so if I permute the columns of P and the entries of D in such a way that they are ordered from the largest to the smallest when I look at the diagonal elements of D, then if I pick the first column of P, it's v1. And v1 is the direction on which, if I project my points, they are going to carry the most empirical variance. Well, that's a good thing. If I told you, pick one direction along which, if you were to project your points, they would be as spread out as possible, that's probably the one you would pick. And so that's exactly what PCA is doing for us. It says, OK, if you ask me to take d prime equal to 1, I will take v1. I will just take the direction that's spanned by v1. And when I come back to this picture that was here before, this is v1. Of course, here, I only have two of them. So v2 has to be this guy, or this guy-- I mean, I only know them up to sign. But then if I have three-- think of like an olive in three dimensions-- then maybe I have one direction that's slightly more elongated than the other one. And so I'm going to pick the second one. And so the procedure is to say, well, first, I'm going to pick v1 the same way I picked v1 in the first place. So the first direction I am taking is the leading eigenvector. And then I'm looking for a direction. Well, if I found one-- the one I'm going to want to find-- if you say you can take d prime equal to 2, you're going to need a basis for this guy. So the second one has to be orthogonal to the first one you've already picked. And so the second one you pick is the one that, among all those that are orthogonal to v1, maximizes the empirical variance when you project onto it. And it turns out that this is actually exactly v2. You don't have to redo anything again. In your eigendecomposition, this is just the second column of P. Clearly, v2 is orthogonal to v1. We just used it here. This 0 here just says this v2 is orthogonal to v1. So they're like this. And now, what I said-- what this slide tells you extra-- is that v2, among all those directions that are orthogonal-- I mean, there's still d minus 1 of them-- this is the one that maximizes the, say, residual empirical variance-- the one that was not explained by the first direction that you picked. And you can check that. I mean, it's becoming a bit more cumbersome to write down, but you can check that. If you're not convinced, please raise your concern.
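For the unconvinced, a numerical version of that second-step claim, again on the regenerated toy data: among unit directions orthogonal to v1, the empirical variance never exceeds lambda 2 and is attained at v2.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 4))
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / len(X)
lams, P = np.linalg.eigh(S)
v1, v2, lam2 = P[:, -1], P[:, -2], lams[-2]

for _ in range(1000):
    u = rng.standard_normal(4)
    u -= (u @ v1) * v1                # project out the v1 component
    u /= np.linalg.norm(u)            # unit direction orthogonal to v1
    assert u @ S @ u <= lam2 + 1e-10

assert np.isclose(v2 @ S @ v2, lam2)  # equality at the second eigenvector
```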
I mean, basically, one way to view this-- I mean, you're not really dropping a coordinate, because v1 is not a coordinate. But let's assume actually for simplicity that v1 was actually equal to e1, that the direction that carries the most variance is the one that just says, just look at the first coordinate of X. So if that was the case, then clearly the orthogonal directions are the ones that comprise only the coordinates 2 to d. So you could actually just drop the first coordinate and do the same thing on a slightly shorter vector of length d minus 1. And then you would just look at the largest eigenvector of these guys, et cetera, et cetera. So in a way, that's what's happening, except that you rotate it before you actually do this. And that's exactly what's happening. So what we put together here is essentially three things. One was statistics. Statistics says, if you want spread, if you want information, you should be looking at variance. The second one was optimization. Optimization said, well, if you want to maximize spread, well, you have to maximize variance in a certain direction. And that means maximizing over the sphere of vectors that have unit norm. And that's an optimization problem, which actually turned out to be difficult. But then the third thing that we used to solve this problem was linear algebra. Linear algebra said, well, it looks like it's a difficult optimization problem. But it turns out that the answer comes in almost-- I mean, it's not a closed form, but those routines are so well established that it's almost a closed form-- it says, just pick the eigenvectors in order of their associated eigenvalues, from largest to smallest. And that's why principal component analysis has been so popular and has gained a huge amount of traction since we've had computers that allow us to compute eigenvalues and eigenvectors for matrices of gigantic sizes. You can actually do that. If I give you-- I don't know, this Google video, for example, is talking about words. They want to do, say, principal component analysis of words. So I give you all the words in the dictionary. And-- sorry, well, you would have to have a representation for words, so it's a little more difficult. But how do I do this? Let's say, for example, pages of a book. I want to understand the pages of a book. And I need to turn each one into numbers. And a page of a book is basically a word count. So I just count the number of times "the" shows up, the number of times "and" shows up, the number of times "dog" shows up. And so that gives me a vector. It's in pretty high dimensions. It's as many dimensions as there are words in the dictionary. And now, I want to visualize how those pages get together-- are two pages very similar or not. And so what you would do is essentially just compute the largest eigenvector of this matrix-- maybe the two largest-- and then project this into a plane. Yeah. AUDIENCE: Can we assume the number of points is far larger than the dimension? PHILIPPE RIGOLLET: Yeah, but there's many pages in the world. There's probably more pages in the world than there are words in the dictionary. Yeah, so of course, if you are in high dimensions and you don't have enough points, it's going to be clearly an issue. If you have two points, then the leading eigenvector is going to be just the line that goes through those two points, regardless of what the dimension is. And clearly, you're not learning anything. So you have to pick, say, the k largest ones.
If you go all the way, you're just reordering your thing, and you're not actually gaining anything. You start from d and you go to d. So at some point, this procedure has to stop. And let's say it stops at k. Now, of course, you should ask me a question, which is, how do you choose k? So that's, of course, a natural question. Probably the basic answer is just pick k equals 3, because you can actually visualize it. But what happens if I take k equal to 4? If I take k equal to 4, I'm not going to be able to plot points in four dimensions. Well, I could-- I could add color, or I could try to be a little smart about it. But it's actually quite difficult. And so what people tend to do, if you have four dimensions, is they actually do a bunch of two dimensional plots. And that's what a computer-- I mean, by default, computers don't spit out three dimensional plots. So let's say they want to plot only two dimensional things. So they're going to take the first directions, say v1, v2. Let's say you have three, but you want to have only two dimensional plots. And then it's going to do v1, v3; and then v2, v3. So really, you take all three of them, but it's really just showing you all choices of pairs of those guys. So if you were to keep k equal to 5, you would have five choose two different plots. So this is the actual principal component algorithm, how it's implemented. And it's actually fairly simple. I mean, it looks like there's lots of steps. But really, there's only one that's important. So the first one is the input. I give you a bunch of points, X1 to Xn in d dimensions. And step two is, well, compute their empirical covariance matrix S. The points themselves, we don't really care about. We care about their empirical covariance matrix. So it's a d by d matrix. Now, I'm going to feed that-- and that's where the actual computation starts happening-- I'm going to feed that to something that knows how to diagonalize this matrix. And you have to trust me, if I want to compute the k largest eigenvalues and my matrix is d by d, it's going to take me about k times d squared operations. So if I want only three, it's 3 times d squared, and d squared is about the time it takes me to just even read the matrix sigma. So that's not too bad. So what it's going to spit out, of course, is the diagonal matrix D. And those eigenvalues are nice, because they tell me the order in which I should be taking the columns of P. But what's really important to me is v1 to vd, because those are going to be the ones I'm going to be using to draw those plots. And now, I'm going to say, OK, I need to actually choose some k. And I'm going to basically truncate and look only at the first k columns of P. Once I have those columns, what I want to do is to project onto the linear span of those columns. And there's actually a simple way to do this, which is just take this matrix Pk, which is really the matrix of projection onto the linear span of those k columns. And you just take Pk transpose. And then you apply this to every single one of your points. Now, Pk transpose-- what is the size of the matrix Pk? Yeah, [INAUDIBLE]? AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: So Pk is just this matrix. I take the v1 and I stop at vk-- well-- AUDIENCE: [INAUDIBLE] PHILIPPE RIGOLLET: d by k, right? Each of the columns is an eigenvector. It's of dimension d. I mean, that's a vector in the original space. So I have this d by k matrix.
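Here is a compact sketch of that algorithm end to end; the function name, the toy data, and the choice k = 2 are mine, and the eigendecomposition is delegated to numpy rather than spelled out:

```python
import numpy as np

def pca(X, k):
    # Steps follow the slide: input points are the rows of X (n points in R^d).
    Xc = X - X.mean(axis=0)              # center the cloud
    S = Xc.T @ Xc / len(X)               # step 2: empirical covariance (d x d)
    lams, P = np.linalg.eigh(S)          # step 3: diagonalize S = P D P^T
    order = np.argsort(lams)[::-1]       # decreasing eigenvalue order
    lams, P = lams[order], P[:, order]
    Pk = P[:, :k]                        # steps 4-5: first k columns, d x k
    Y = Xc @ Pk                          # step 6: Yi = Pk^T Xi, now in R^k
    return Y, Pk, lams

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 4))
Y, Pk, lams = pca(X, k=2)    # each point now has only k = 2 coordinates
```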
So all it is is-- well, I'm going to talk in a second about Pk transpose. Pk transpose is just this guy, where I stop at the k-th vector. So Pk transpose is k by d. So now, when I take Yi, which is Pk transpose Xi, I end up with a point which is in k dimensions. I have only k coordinates. So I took every single one of my original points Xi, which had d coordinates, and I turned it into a point that has only k coordinates. In particular, I could have k equal to 2. This matrix is exactly the one that projects. If you think about it for one second, this is just the matrix that says-- well, we actually did that several times. That was this P transpose u that showed up somewhere. And so that's just the matrix that takes your point X in, say, three dimensions, and then just projects it down to two dimensions. And it goes to the closest point in the subspace. Now, here, the floor is flat. But we can pick any subspace we want, depending on what the lambdas are. So the lambdas were important for us to be able to identify which columns to pick. The fact that we assumed that they were ordered tells us that we can pick the first ones. If they were not ordered, it would be just a subset of the columns, depending on what the size of the eigenvalue is. So each column is labeled by its eigenvalue. And so then, of course, we still have this question of, how do I pick k? So there's definitely the matter of convenience. Maybe 2 is convenient. If it works for 2, you don't have to go any farther. But you might want to say, well-- originally, I did that to actually keep as much information as possible. I know that the ultimate thing is to keep as much information as possible, which would be k equal to d-- that's as much information as you want. But it's essentially the same question as, well, if I want to compress a JPEG image, how much information should I keep so it's still visible? And so there's some rules for that. But none of them is actually really a science. So it's really a matter of what you think is actually tolerable. And we're just going to start replacing this choice by maybe another parameter. So here, we're going to basically replace k by alpha, and so we just do stuff. So the first rule that people use-- OK, the most popular one is definitely take k equal to 2 or 3, because it's just convenient to visualize. The second most popular one is the scree plot. So the scree plot-- remember, I have my values, the lambda j's. And I've chosen the lambda j's to decrease. So the indices are chosen in such a way that lambda is a decreasing function. So I have lambda 1, and let's say it's this guy here. And then I have lambda 2, and let's say it's this guy here. And then I have lambda 3, and let's say it's this guy here, lambda 4, lambda 5, lambda 6. And all I care about is that this thing decreases. The scree plot says something like this-- if there's an inflection point, meaning that you can sort of do something like this and then something like this, you should stop at 3. That's what the scree plot tells you. What it's saying in a way is that the percentage of the marginal increment of explained variance that you get starts to decrease after you pass this inflection point. So let's see why I say this. Well, here, what I have-- so this ratio that you see there is actually the percentage of explained variance. So what it means is that, if I look at lambda 1 plus all the way to lambda k, and then I divide by lambda 1 plus all the way to lambda d, well, what is this?
Well, this lambda 1 plus all the way to lambda d is the total amount of variance that I get in my points. That's the trace of sigma. So that's the variance in the first direction plus the variance in the second direction plus the variance in the third direction, and so on. That's basically all the variance that I have possible. Now, this is the variance that I kept in the first direction. This is the variance that I kept in the second direction, all the way to the variance that I kept in the k-th direction. So I know that this number is always less than or equal to 1. And it's at least 0. And this is just the proportion, say, of variance explained by v1 to vk, or simply, the proportion of explained variance by my PCA, say. So now, what this thing is telling me, it says, well, if I look at this thing and I start seeing this inflection point, it's saying, oh, here, you're gaining a lot and a lot of variance. And then at some point, you stop gaining a lot in your proportion of explained variance. So this would translate into something where, when I look at this ratio, lambda 1 plus all the way to lambda k divided by lambda 1 plus all the way to lambda d, this would translate into a function that would look like this. And what it's telling you, it says, well, maybe you should stop here, because here, every time you add one, you don't get as much as you did before. You actually get smaller marginal returns. So explained variance is the numerator of this ratio. And the total variance is the denominator. Those are pretty straightforward terms that you would want to use for this. So if your goal is to do data visualization-- so why would you take k larger than 2? Let's say, if you take k larger than 6, you can start to imagine that you're going to have six choose two plots, which starts to be annoying. And if you have k equal to 10-- because you could start in dimension 50,000-- then k equal to 10 would be the place where you have a lot of plots that you would have to show. So it's not always for data visualization. Once I've actually done this, I've actually effectively reduced the dimension of my problem. And what I could do with what I have is do a regression on those guys. So I forgot to tell you-- why is that called principal component analysis? Well, the vj's that I keep, v1 to vk, are called principal components. And they effectively act as a summary of my Xi's. When I mentioned image compression, I started with a point Xi that was d numbers-- let's say 50,000 numbers. And now, I'm saying, actually, you can throw out those 50,000 numbers. If you actually keep only the k numbers that you need-- the 6 numbers that you need-- you're going to have something that is pretty close to carrying the information you had. So in a way, there is some form of compression that's going on here. And what you can do is that those principal components, you can actually use now for regression. If I want to regress Y onto X that's very high dimensional, and I don't have enough points, maybe what I can actually do before I run the regression is to do principal component analysis on my X's, replace them by those compressed versions, and do linear regression on those guys. And that's called principal component regression, not surprisingly. And that's something that's pretty popular. And you can do it with k equal to 10, for example. So for data visualization, I did not find a Thanksgiving themed picture. But I found one that has Turkey in it. Get it?
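A short sketch of the explained-variance ratio, plus one crude, ad hoc reading of the scree plot's inflection point; the "largest drop" stopping rule below is my own illustration, not a rule from the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 4))
Xc = X - X.mean(axis=0)
lams = np.sort(np.linalg.eigvalsh(Xc.T @ Xc / len(X)))[::-1]   # decreasing

explained = np.cumsum(lams) / lams.sum()  # (lam_1+...+lam_k)/(lam_1+...+lam_d)
print(explained)                          # between 0 and 1, ends at exactly 1

# Elbow heuristic: stop where the marginal gain drops the most.
drops = lams[:-1] - lams[1:]
k = int(np.argmax(drops)) + 1
print("scree-plot suggestion: k =", k)
```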
So this is actually a gene data set-- so when you see something like this, you can imagine that someone has been preprocessing the hell out of this thing. This is not like, oh, I collect data on 23andMe and I'm just going to run PCA on this. It just doesn't happen like that. And so what happened is that-- so let's assume that this was a bunch of preprocessed data, which are gene expression levels-- so 500,000 genes among 1,400 Europeans. So here, I actually have fewer observations than I have dimensions. And that's when you use principal component regression most of the time, so it doesn't stop you. And then what you do is you say, OK, I have those 500,000 genes among-- so here, that means that there's 1,400 points here. And I actually take those 500,000 directions. So each person has a vector of, say, 500,000 genes that are attached to them. And I project them onto two dimensions, which should be extremely lossy. I lose a lot of information. And indeed, I do, because I'm one of these guys. And I'm pretty sure I'm very different from this guy, even though probably from an American perspective, we're all the same. But I think we have slightly different genomes. And so the thing is, now we have this-- so you see there's lots of Swiss that participated in this. But actually, those two principal components recover sort of the map of Europe. I mean, OK, again, this is actually maybe fine-grained for you guys. But right here, there's Portugal and Spain, which are those colors. So here it's color-coded. And here is Turkey, of course, which we know has very different genomes. So the Turks are right at the boundary. So you can see all the greens. They stay very far apart from everything else. And then the rest here is pretty mixed. But it sort of recovers-- if you look at the colors, it sort of recovers that. So in a way, those two principal components are just the geographic features. So if you insist on compressing all the genomic information of these people into two numbers, what you're actually going to get is longitude and latitude, which is somewhat surprising, but not so much if you think that it's been preprocessed. So what do you do beyond practice? Well, you could try to actually study those things. If you think about it for a second, we did not do any statistics. I talked to you about IID observations, but we never used the fact that they were independent. The way we typically use independence is to have a central limit theorem, maybe. I mentioned the fact that the covariances, if the data were Gaussian, would actually give me something which is independent. We didn't care. This was a data analysis, data mining process that we did. I give you points, and you just put them through the crank. There was an algorithm in six steps. And you just put it through, and that's what you got. Now, of course, there's some work which studies this. It says, OK, if my data is actually generated from some process-- maybe my points are multivariate Gaussian with some structure on the covariance-- how well am I recovering the covariance structure? And that's where statistics kicks in. And that's where we stop. So this is actually a bit more difficult to study. But in a way, it's not entirely satisfactory, because we could work for a couple of boards and I would just basically sort of reverse engineer this and find some models under which it's a good idea to do that. And what are those models? Well, those are the models that give you prominent directions that you want to find.
And it will say, yes, if you have enough observations, you will find those directions along which your data is elongated. So that's essentially what you want to do. So that's exactly what this thing is telling you. So where does the statistics come from? Well, everything, remember-- so actually, that's where Alana was confused-- the idea was to say, well, if I have a true covariance matrix sigma and I never really have access to it, I'm just running PCA on the empirical covariance matrix, how do those results relate? And this is something that you can study. So for example, if n goes to infinity and the dimension of your points is fixed, then S goes to sigma in any sense you want. Maybe each entry of S goes to each entry of sigma, for example. So S is a good estimator. We know that the empirical covariance is a consistent estimator. And if d is fixed, this is actually not an issue. So in particular, if you run PCA on the sample covariance matrix and you look at, say, v1, then v1 is going to converge to the leading eigenvector of sigma as n goes to infinity, but for d fixed. And that's a story that we've known since the '60s. More recently, people have started challenging this. Because when you fix the dimension and let the sample size go to infinity, you're certainly not allowing for this. It's certainly not explaining anything about the case when d is equal to 500,000 and n is equal to 1,400. Because when d is fixed and n goes to infinity, in particular, n is much larger than d, which is not the case here. And so when n is much larger than d, things go well. But if d is not much less than n, it's not clear what happens. In particular, if d is of the order of n, what's happening? So there's an entire theory in mathematics that's called random matrix theory that studies the behavior of exactly this question-- what is the behavior of the spectrum-- the eigenvalues and eigenvectors-- of a matrix in which I put random numbers, and I let-- so the matrix I'm interested in here is the matrix of X's. When I stack all my X's next to each other, that's a matrix of size, say, d by n, so each column is of size d, it's one person. And so I put them together. And when I let the matrix go to infinity, I let both d and n go to infinity. But I want the aspect ratio, d/n, to go to some constant. That's what they do. And what's nice is that in the end, you have this constant-- let's call it gamma-- that shows up in all the asymptotics. And then you can replace it by d/n. And you know that you still have a handle on both the dimension and the sample size. Whereas, usually the dimension goes away, as you let n go to infinity without letting the dimension go to infinity. And so now, when this happens, as soon as d/n goes to a constant, you can show that essentially there's an angle between the leading eigenvector of sigma and the leading eigenvector of S, as n and d go to infinity. There is always an angle-- you can actually write it explicitly. And it's an angle that depends on this ratio, gamma-- the asymptotic ratio of d/n. And so there's been a lot of work on understanding how to correct for this, how to pay attention to this. This creates some biases that were sort of overlooked before. In particular, when I do this, this is not the proportion of explained variance, when n and d are similar. This is an estimated number computed from S. This is computed from S. All these guys are computed from S. So those are actually not exactly where you want them to be.
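An illustrative simulation of that effect-- the spiked covariance, the variance level 4, and the grid of aspect ratios are my own construction, not from the lecture; the point is only that the angle between the empirical and true leading eigenvectors grows with d/n:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
for d in (20, 200, 2000):                 # gamma = d/n = 0.1, 1, 10
    X = rng.standard_normal((n, d))
    X[:, 0] *= 2.0                        # variance 4 along w1 = e1, 1 elsewhere
    S = X.T @ X / n
    v1 = np.linalg.eigh(S)[1][:, -1]      # leading eigenvector of S
    cos = min(abs(v1[0]), 1.0)            # |cos(angle)| with the true w1 = e1
    print(f"d/n = {d/n:4.1f}   angle = {np.degrees(np.arccos(cos)):5.1f} deg")
```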
And there's some nice work that allows you to recalibrate what this ratio should be, how this ratio should be computed, so it's a better representative of what the proportion of explained variance actually is. So then, of course, there's the question of-- so that's when d/n goes to some constant. So the best case-- so that was the '60s-- is d fixed and n much larger than d. And then random matrix theory tells you, well, d and n are of the same order of magnitude. When they go to infinity, the ratio goes to some constant. Think of it as being order 1. To be fair, if d is 100 times larger than n, it still works. And it depends on what you think infinity is at this point. But I think the random matrix theory results are very useful. But then even in this case, I told you that the leading eigenvector of S is actually at an angle from the leading eigenvector of sigma. So what's happening is that-- so let's say that d/n goes to some gamma. And what I claim is that, if you look at-- so that's v1, that's the v1 of S. And then there's the v1 of-- so this should be of size 1. So that's the v1 of sigma. Then those things are going to have an angle, which is some function of gamma. It's complicated, but there's a function of gamma that you can see there. And there's some models. When gamma goes to infinity, which means that d is now much larger than n, this angle is 90 degrees, which means that you're getting nothing. Yeah. AUDIENCE: If d is little o of n, so gamma is 0, is there still an angle? PHILIPPE RIGOLLET: No, but that's consistent-- the fact that it's consistent when-- so the angle is a function-- AUDIENCE: d is not a constant [INAUDIBLE]?? PHILIPPE RIGOLLET: d is not a constant? So if d is little o of n? Then gamma goes to 0 and f of gamma goes to 0. So f of gamma is a function that-- so for example, if f of gamma is the sine of the angle, for example-- then it's a function that starts at 0, and then goes like this. But as soon as gamma is positive, it goes away from 0. So now, when gamma goes to infinity, this thing goes to a right angle, which means I'm getting just junk. So this is not my leading eigenvector. So how do you deal with this? Well, just like everywhere in statistics, you have to just make more assumptions. You have to assume that you're not looking for the leading eigenvector, or the direction that carries the most variance. But you're looking, maybe, for a special direction. And that's what sparse PCA is doing. Sparse PCA is saying, I'm not looking for any direction u that carries the most variance. I'm only looking for a direction u that is sparse. Think of it, for example, as having 10 non-zero coordinates. So that's a lot of directions still to look for. But once you do this, then you actually have-- there's a few things that you get from doing this. The first one is you essentially replace d by the sparsity-- let's say s non-zero coefficients. You replace d by s, which means that n only has to be much larger than s for this thing to actually work. Now, of course, you've set yourself a weaker goal. Your goal is not to find any direction, only a sparse direction. But there's something very valuable about sparse directions, which is that they are interpretable. Let's say that the v that I found before was 0.2, and then 0.9, and then 1.1 times 10 to the minus 3, et cetera. So those are the coordinates of my leading eigenvector in the original coordinate system. What does it mean?
Well, it means that if I see a large number, that means that this v is very close-- so that's my original coordinate system. Let's call it e1 and e2. So that's just 1, 0; and then 0, 1. Then clearly, from the coordinates of v, I can tell if my v is like this, or it's like this, or it's like this. Well, I mean, they should all be of the same size. So I can tell if it's here or here or here, depending on-- like here, that means I'm going to see something where the Y-coordinate is much larger than the X-coordinate. Here, I'm going to see something where the X-coordinate is much larger than the Y-coordinate. And here, I'm going to see something where the X-coordinate is about the same size as the Y-coordinate. So when the dimension starts to be bigger, you're going to have to make choices. What does it mean to be bigger-- when d is 100,000, I mean, the sum of the squares of those guys has to be equal to 1. So they're all very small numbers. And so it's hard for you to tell which one is a big number and which one is a small number. Why would you want to know this? Because it's actually telling you that if v is very close to e1, then that means that e1-- in the case of the gene example, that would mean that e1 is the gene that's very important. Maybe there's actually just two genes that explain those two things. And those are the genes that have been picked up. There's two genes that encode geographic location, and that's it. And so it's very important for you to be able to interpret what v means. Where it has large values-- maybe it has large values for e1, e2, and e3. And it means that it's a combination of e1, e2, and e3. And now, you can interpret, because you have only three variables to look at. And so sparse PCA builds that in. Sparse PCA says, listen, I'm going to want to have at most 10 non-zero coefficients. And the rest, I want to be 0. I want v to be a combination of at most 10 of my original variables. And now, I can do interpretation. So the problem with sparse PCA is that it becomes very difficult numerically to solve this problem. I can write it. So the problem is simply, maximize the variance u transpose, say, Su, subject to-- well, I want the norm of u to be equal to 1. So that's the original PCA. But now, I also want that the sum of the indicators of the uj's that are not equal to 0 is at most, say, 10. This constraint is very non-convex. So I can relax it to a convex one like we did for linear regression. But now, I've totally messed up the fact that I could use linear algebra to solve this problem. And so now, you have to go through much more complicated optimization techniques, which are called semidefinite programs, which do not scale well in high dimensions. And so you have to do a bunch of tricks-- numerical tricks. But there are some packages that implement some heuristics or some other things-- iterative thresholding, all sorts of various numerical tricks that you can do. But the problem they are trying to solve is exactly this. Among all directions that have norm 1 and that have at most, say, 10 non-zero coordinates, I want to find the one that maximizes the empirical variance. Actually, let me just show you this. I wanted to show you an output of PCA where people are actually trying to do this directly-- maybe-- there you go. So right here, you see, this is SPSS. That's a statistical software package. And this is an output that was preprocessed by a professional-- not preprocessed, post-processed.
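To make the problem concrete, here is a hedged sketch of one simple heuristic for it, truncated power iteration; this is my own illustrative choice, not the semidefinite relaxation mentioned above, and it carries no guarantee of finding the global optimum:

```python
import numpy as np

def sparse_pca_tpower(S, s, iters=200, seed=5):
    """Heuristic for: maximize u^T S u over unit vectors u with at most
    s non-zero coordinates. Alternates a power step with hard truncation."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(S.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = S @ u                               # power step
        small = np.argsort(np.abs(u))[:-s]      # all but the s largest entries
        u[small] = 0.0                          # truncate them to zero
        u /= np.linalg.norm(u)                  # back to the unit sphere
    return u

# Toy covariance whose leading structure sits on 2 of the 6 coordinates.
S = np.eye(6)
S[0, 0] = S[1, 1] = 4.0
S[0, 1] = S[1, 0] = 3.0
print(sparse_pca_tpower(S, s=2))   # mass concentrates on coordinates 0 and 1
```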
So that's something where they ran PCA. So what is the data? This is data where you ask doctors what they think of the behavior of a particular sales representative for pharmaceutical companies. So pharmaceutical companies are trying to improve their sales force. And they're asking doctors how they would rate-- what do they value about their interaction with a sales representative. So basically, there's a bunch of questions. One: offers credible point of view on trends; provides valuable networking opportunities. This is one question. Rate this on a scale from 1 to 5. That was the question. And they had a bunch of questions like this. And then they asked 1,000 doctors to make those ratings. And what they want-- so each doctor now is a vector of ratings. And they want to know if there's different groups of doctors, what do doctors respond to. If there's different groups, then maybe they know that they can actually address them separately, et cetera. And so to do that, of course, there's lots of questions. And so what you want is to first project into lower dimensions, so you can actually visualize what's going on. And this is what was done for this. So these are the first three principal components that came out. And even though we ordered the values of the lambdas, there's no reason why the entries of v should be ordered. And if you look at the values of v here, they look like they're pretty much ordered. It starts at 0.784, and then you're at 0.3 around here. There's something that goes up again, and then you go down. Actually, it's marked in red every time it goes up again. And so now, what they did is they said, OK, I need to interpret those guys. I need to tell you what this is. If you tell me, we found the principal component that really discriminates the doctors into two groups, the drug company is going to come back to you and say, OK, what is this characteristic? And you say, oh, it's actually a linear combination of 40 characteristics. And they say, well, we don't need you to do that. I mean, it cannot be a linear combination of anything you didn't ask. And so for that, first of all, there's a post-processing of PCA, which says, OK, once I've actually found, say, three principal components, that means that I found the three dimensional space onto which I want to project my points. In this space, I can pick any direction I want. So the first thing is that you do some sort of local rearrangements, so that those things look like they are increasing and then decreasing. So you just change, you rotate your coordinate system in this three dimensional space that you've actually isolated. And the reason to do that is that it sort of makes big, sharp differences between large and small values of the coordinates of the thing you had. And why do you want this? Because now, you can say, well, I'm going to start looking at the ones that have large values. And what do they say? They say in-depth knowledge, in-depth knowledge, in-depth knowledge, knowledge about. This thing is clearly something that actually characterizes the knowledge of my sales representative. And so that's something that doctors are sensitive to. That's something that really discriminates the doctors in a way. There's lots of variance along those things, or at least a lot of variance-- I mean, doctors are separated in terms of their experience with respect to this. And so what they did is they said, OK, all these guys-- some of those have large values, but I don't know how to interpret them.
And so I'm just going to take the first block, and I'm going to call it medical knowledge, because all those things are knowledge about medical stuff. Then here, I didn't know how to interpret those guys. But those guys, there's a big clump of large coordinates, and they're about respectful of my time, listens, friendly but courteous. This is all about the quality of interaction. So this block was actually called quality of interaction. And then there was a third block, which you can tell starts to be spreading a little thin. There's just much fewer of them. But this thing was actually called fair and critical opinion. And so now, you have three discriminating directions. And you can actually give them a name. Wouldn't it be beautiful if all the numbers in the gray box came out non-zero and all the other numbers came out zero-- if there was no ad hoc choice? I mean, this is probably an afternoon of work to scratch out all these numbers and put all these color codes, et cetera. Whereas, you could just have something that tells you, OK, here are the non-zeros. If you can actually make a story around why this group of things actually makes sense, such as it is medical knowledge, then good for you. Otherwise, you could just say, I can't. And that's what sparse PCA does for you. Sparse PCA outputs something where all those numbers would be zero. And there would be exactly, say, 10 non-zero coordinates. And you can turn this knob of 10. You can make it 9. Depending on what your aim is, maybe you can actually go on with 20 of them and have the ability to tell the story about 20 different variables and how they fit in the same group. And depending on how you feel, it's easy to rerun the PCA depending on the value that you want here. And so you could actually just come up with the one you prefer. And so that's the sparse PCA thing, which I'm trying to promote. I mean, this is not super widespread. It's a fairly new idea, maybe at most 10 years old. And it's not completely widespread in statistical packages. But that's clearly what people are trying to emulate currently. Yes? AUDIENCE: So what exactly does it mean that the doctors have a lot of variance in medical knowledge, quality of interaction, and fair and critical opinion? Like, it was saying that these are the main things that doctors vary on-- some doctors care. Like, we could sort of characterize a doctor by, oh, he cares this much about medical knowledge, this much about the quality of interaction, and this much about critical opinion. And that says most of the story about what this doctor wants from a drug representative? PHILIPPE RIGOLLET: Not really. I mean, OK, let's say you pick only one. So that means that you would take all your doctors, and you would have one direction, which is quality of interaction. And there would be just spread out points here. So there are two things that can happen. The first one is that there's a clump here, and then there's a clump here. That still represents a lot of variance. And if this happens, you probably want to go back in your data and see, were these people visited by a different group than these people, or maybe these people have a different specialty. I mean, you have to look back at your data and try to understand why you would have different groups of people. And if it's completely evenly spread out, then all it's saying is that, if you want to have a uniform quality of interaction, you need to take measures on this. You need this to not be a discriminating factor.
But I think it really becomes interesting not when it's completely spread out, but when there's a big group here, almost no one here, and then a big group there. Then maybe there's something you can do. And those two clumps actually give you a lot of variance. So actually, maybe I'll talk about this. Here, this is sort of a mixture. You have a mixture of two different populations of doctors. And it turns out that principal component analysis helps here. So a mixture is when you have different populations: think of two Gaussians that are centered at two different points, maybe in high dimension. Those are clusters of people, and you want to be able to differentiate them. If you're in very high dimension, that's going to be very difficult. But one of the first preprocessing tools people use is PCA. Because if you have one big group here and one big group here, it means that there's a lot of variance along the direction that goes through the centers of those groups. And that's essentially what happened here. You could think of this as two blobs in high dimension, and you're really just projecting them onto one dimension, and this dimension, hopefully, goes through the centers. So PCA is used as preprocessing. I'm going to stop here soon, but the point is that PCA is not just made for dimension reduction. It's used for mixtures, for example. It's also used when you have graph data. What is the idea of PCA? It just says: if you have a matrix that seems to have low rank, meaning that a lot of those lambda i's are very small, and I observe that plus noise, then it's a good idea to run PCA on it. And in particular, people use that a lot for networks. You take the adjacency matrix of a graph, you preprocess it a little bit so it looks nice, and then if you have, for example, two communities in there, it should look like something that is low rank plus some noise. And low rank means this: if you do the scree plot, you will see something like this, which means that if you throw out all the smaller eigenvalues, it should not really matter to the overall structure. And these techniques are used everywhere these days. We call it PCA as statisticians, but people also call these spectral methods, or SVD. So everyone--
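Here is a toy Python sketch of the mixture point just made: two Gaussian blobs in high dimension, where the leading principal component recovers the direction through the two centers. Everything here (dimensions, separation, sample sizes) is an arbitrary assumption for illustration.

```python
import numpy as np

# Two Gaussian clusters in 100 dimensions whose centers differ along one direction.
rng = np.random.default_rng(1)
d, n = 100, 500
mu = np.zeros(d)
mu[0] = 3.0
X = np.vstack([rng.normal(mu, 1.0, size=(n, d)),
               rng.normal(-mu, 1.0, size=(n, d))])

# Leading eigenvector of the empirical covariance: it should line up with the
# direction through the two centers, where the variance is largest.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc / (2 * n))
v1 = eigvecs[:, -1]
v1 *= np.sign(v1 @ mu)        # fix the arbitrary sign of the eigenvector

# Projecting onto v1 gives two well-separated clumps; thresholding at 0
# recovers the two populations almost perfectly.
proj = Xc @ v1
print("recovered group 1:", (proj[:n] > 0).mean())
print("recovered group 2:", (proj[n:] < 0).mean())
```

And for the sparse PCA "knob" mentioned above, scikit-learn does ship a SparsePCA estimator, though note one caveat: its knob is an l1 penalty weight alpha, not a direct count of non-zero coordinates, so getting exactly 10 non-zeros takes a little tuning. A minimal sketch on the same hypothetical ratings matrix:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(1000, 40)).astype(float)   # hypothetical ratings again

# Larger alpha drives more loadings to exactly zero per component.
spca = SparsePCA(n_components=3, alpha=2.0, random_state=0)
spca.fit(X - X.mean(axis=0))

for i, comp in enumerate(spca.components_):
    print(f"component {i}: non-zero loadings at", np.flatnonzero(comp))
```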
MIT_904_Sensory_Systems_Fall_2013
3_The_lateral_geniculate_nucleus_and_the_visual_cortex.txt
PROFESSOR: Good afternoon, everyone. Let us get started. The first thing we're going to do today is finish up what I didn't get a chance to finish last time, because we ran over a little bit. And what we are going to cover to begin with is the lateral geniculate nucleus of the thalamus. This is a structure in the thalamus that receives an extensive input from the retina, from retinal ganglion cells. And it is a beautiful structure that has several layers to it. And here is a picture of one from a monkey. This is a coronal section. I think you all know a little bit about the anatomy of the brain: if you cut slices this way, those are coronal cuts; if you cut this way, sagittal cuts. OK? So this is a coronal cut, a thin slice, and it shows what the lateral geniculate nucleus looks like when it's cut at about its middle. And what you can see is a bunch of layers here. There are six layers, as labeled here. The top four layers are called the parvocellular layers, and the bottom two are called the magnocellular layers. And the reason they have these names is because of the cells. This was stained; you can't see it that clearly, but the cells are much bigger in the bottom two layers than in the top four layers. And that's why these are called parvo, and these are called magno. Now, one of the important discoveries that has been made, and I'll belabor it in a minute, is that the top four layers get input from the midget cells of the retina, and the bottom two get input from the parasol cells. And we'll be talking about that a great deal in the next few lectures. Now, each of these layers gets input from one eye, and there's an alternation. If you go down from layer six, this one gets input from the contralateral eye. Then ipsilateral. Contralateral. Ipsilateral. Then there is a reversal: this layer is again ipsilateral, and this layer is contralateral. So that is the basic layout. And I might as well anticipate and tell you that if you study the receptive field organization of these cells, they are very similar to those that you see in the retina. You have midget cells, parasol cells, you have center-surround antagonism. So there is not much of a transform at this level in the brain, as the inputs progress from the retina to the lateral geniculate nucleus. But once we get up to the cortex, we will see many major transforms, which I will come to shortly. Now, the layout of the lateral geniculate nucleus varies quite a bit from species to species. In the macaque we have this kind of arrangement, and this is very similar to the kind of arrangement we have in our own heads. But then if you look at, well, let me make one more point here about this. There's a beautiful experiment that has been done to confirm that the inputs to the parvocellular and magnocellular layers are indeed coming from different cells in the retina. An experiment was done in which a labeling material was put into either the parvocellular layers or the magnocellular layers, as you can see here on the right side. Those labeling substances were then transported back to the retina, and they labeled the cells that send their axons into these two regions.
And what you can see here is that smaller cells project to the parvocellular layers, and much bigger cells project to the magnocellular layers. And these bigger cells correspond, as other studies have shown in more detail, to the parasol cells, and the smaller ones to the midget cells. So that's an important distinction in the inputs. And the other thing that's important to realize here, since we talked about how the eye is organized and how the nasal and temporal parts of the retina project, is that the left lateral geniculate nucleus sees the right half of the visual field, and the right geniculate sees the left half of the visual field. Now, let me say just a few words about the fact that there are different kinds of geniculates, though they're not that different. Here's an example of a tree shrew lateral geniculate nucleus. It's shaped differently, but it still has these nice six layers. The bottom two layers again are magnocellular, and the top four layers are essentially parvocellular. But when recordings were made, an interesting specialization was seen in this animal, which was not that obvious in others. If you look at this here, examine the question of to what degree the cells in these top four layers are on or off cells. And what you see, when you look at layers three, four, five, and six, is a huge distinction: virtually all the cells in three and four are off cells, and all the cells in five and six are on cells. So there's a specialization in the layers as to whether the input is from the on cells or the off cells. And yet another animal has a different kind of arrangement. I'm not going to go into more detail about this, but again there are six layers here. Here, though, they don't seem to have a distinction between magno and parvocellular layers. Instead we have two layers, five and four, where the cells are very small, and in the rest of the layers they are quite large. This is the galago. All right? Now, the most thoroughly studied have been the cat and the monkey, and that's the kind of material we are going to talk about more as we progress in the course. All right. So that, in a very summary form, is the essence of what the lateral geniculate nucleus is like. Now, what I want to do is give a quick summary of what I covered in the first session. And here, first of all again, is a drawing of what the cells look like in the retina. These are the photoreceptors. Remember the light is coming in from the bottom up. OK? And if you look at that, we find that all the photoreceptors hyperpolarize to light, and they produce only graded potentials. Another important fact is that all these cells use glutamate as their neurotransmitter, released at the bottom here of these cells, which then innervates the subsequent elements in this retinal picture, namely the horizontal cells and the bipolar cells. If you look at those, the horizontal cells all hyperpolarize to light, just like the photoreceptors, and they also produce only graded potentials. But then when we proceed and look at the bipolar cells, there are two major classes of bipolar cells that I want to deal with; there are more, but these are the important ones from our point of view: the so-called on and off bipolars. And that distinction arises because the on bipolars connect to the photoreceptors with sign-inverting synapses, whereas the off bipolars have sign-conserving synapses.
So the way to think of this: if the photoreceptor hyperpolarizes and depolarizes, going up and down as the light goes on and off, an off cell will mimic this and go like this, but the on cell will do the opposite. OK? So we have created in the retina, I shouldn't say we, but evolution has, from a single-ended system, since everything hyperpolarized, a double-ended system at the level of the bipolar cells. And that's what then creates these famous on and off cells that we are going to talk about in a lot more detail next time, to try to figure out why we have these cells. All right. So that is a summary of the upper layers. Then when we come down to the amacrine cells, some of these amacrine cells produce action potentials and some don't. There are many different kinds, as I mentioned last time, more than 20 different kinds. Some are on. Some are off. Some are on-off. And some don't even give you action potentials. Then finally, when you come down to the ganglion cells, they all give you action potentials. And the two major classes that we are going to deal with a lot, even though there are many more, and we'll talk about some of them later, are the midget cells and the parasol cells. OK. So then, to have an overall summary of what we covered last time: first, the right brain receives input from the left visual hemifield, and the left from the right hemifield. That you understood when I explained to you what the wiring is like in the retina. And then I pointed out that we have these five major classes of retinal cells. We have the photoreceptors themselves, which are the rods and the cones, and then we have the horizontal cells, the bipolar cells, the amacrine cells, and the retinal ganglion cells. Hopefully, by repeating this stuff, eventually it will stick in your brain. All right. Then, the receptive fields of the retinal ganglion cells, sometimes referred to as RGCs, have an antagonistic center-surround organization. And when we look at the question of adaptation, you can explain why on earth this complex arrangement evolved. Then, there are several classes of retinal ganglion cells. We have talked about the on and off, the midget and parasol, and we will later on talk about several other classes. Now, all photoreceptors and horizontal cells hyperpolarize to light. At that level you have a single-ended system. But when you come to the bipolar cells, about half of them are hyperpolarizing and half of them are depolarizing. And that opposition arises because the on bipolar cells have a sign-inverting synapse, which is due to the receptor molecule at that synaptic junction, called mGluR6. We'll talk about that in more detail next time. The action potentials in the retina are generated only by amacrine cells and retinal ganglion cells. The lateral geniculate nucleus that we have talked about is a laminated structure, but the segregation into laminae varies with species, just as I pointed out to you. The parvocellular layers receive input from the midget cells, and the magnocellular layers from the parasol cells, in the monkey and in the human. OK? The inputs from the left and right eyes are segregated into laminae, as I have already pointed out to you. Lastly, the receptive field properties of lateral geniculate cells are similar to those that you see in the retinal ganglion cells. You don't have any major transforms. OK.
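Since the sign-inverting versus sign-conserving distinction is the crux of the on/off story, a toy Python sketch may help fix the logic. This is just a cartoon of the sign relationships described above, with made-up numbers, not a model of real membrane potentials.

```python
import numpy as np

# Cartoon of the single-ended to double-ended conversion: only signs matter.
light = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0])  # light off/on over time

photoreceptor = -light          # photoreceptors hyperpolarize to light
off_bipolar = photoreceptor     # sign-conserving synapse: mimics the receptor
on_bipolar = -photoreceptor     # sign-inverting (mGluR6) synapse: the opposite

print("on channel: ", on_bipolar)    # depolarizes when the light comes on
print("off channel:", off_bipolar)   # depolarizes when the light goes off
```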
So that is what we now have an initial understanding of when it comes to the retina and the lateral geniculate nucleus. We are now going to proceed, hang on for a minute, to the visual cortex. All right. So we are going to talk about the visual cortex. OK. The first thing we are going to talk about that pertains to the visual cortex is V1, the initial area that gets most of the input directly from the lateral geniculate nucleus. And we're going to look at this first from an anatomical point of view, and then progressively more from a functional point of view. OK. So here once again is a monkey brain. You're going to see this monkey brain over and over again, and gradually it will become familiar. I want to point out to you that this here is the central sulcus. Just like we have a central sulcus, so do monkeys. And then this area here is called the lunate sulcus. And then in front here, we are going to talk about this region; it's called the frontal eye fields. There are two sulci there, the principalis and the arcuate. So you're going to see this over and over again. There are not too many things to remember here, but it will grow, because I will have to talk about more and more areas as we progress in the course. Now, here is area V1. And as I pointed out to you before, this region in the monkey is lissencephalic, meaning it's mostly flat. And because of that, its spatial layout is fairly easy to understand. So the first thing you want to do is to understand the spatial layout of the structure. And to do that, people have done all sorts of experiments. Here is one that mapped the spatial layout. Initially, the way this was done was kind of a difficult undertaking. They used single microelectrodes, and they would systematically move the microelectrodes here, and all across, and then they would map out where the receptive field was. OK? So when they did that, they found that in this region here the fovea is represented. And then as you progress towards the center of the brain, the center of the hemisphere I should really say, you progress to about 80 degrees out. And this is the horizontal meridian. And this, below here and above it, is the vertical meridian that goes around. OK? So what is represented, as you have already seen, is that each hemisphere represents half the visual field. So what you have is this. OK? This is the horizontal meridian. This is the upper part. This is the lower part. OK? But that is inverted on the retina, as well as inverted on the visual cortex. All right. So that's the basic layout. But now what happened, as always happens in science, is that new methods were developed to study this in more detail and more reliably. And one of the methods that was developed was a combined physiological and anatomical experiment, in which the eyes were stimulated, and the neurons that were active took up a substance that had been injected into the bloodstream; the tissue was then fixed and the label could be visualized subsequently. So I'm going to show you what this arrangement is. This is what the monkey is looking at. This is the half of the visual field that is being studied, because you're looking at the contralateral hemisphere. And then what is done in these experiments is that you alternate the black and white stripes back and forth, so you see a sort of flicker. And you keep doing this for an hour or two.
And as a result of this, the cells that are activated take up this substance, which is called 2-DG, 2-deoxyglucose, and subsequently you can visualize that label. So now we can look at what the label looks like. And here is a beautiful example of this work. Here is the foveal area, and this is about 7 or 8 degrees out. OK? So if you go back, this outermost half disk is the one that you see here. OK? So now we have a very clear understanding, which we can establish quantitatively, of what the spatial layout of the visual field is on the cortical surface. And one important point that I want you to remember today, and we will discuss it again when we talk about visual prostheses, is that much more brain area is devoted to central vision, because of course there are many more cells representing the fovea than the periphery. And the cortex itself is of roughly constant thickness, about 2 millimeters. So you need to give it more space in this lissencephalic brain to accept all that input from the foveal region; much more area is devoted to foveal vision than to peripheral vision. All right. So that is the spatial layout. And then if we look at a cross section of the visual cortical regions here, this includes several cortical areas; this region right here is so-called area V1. Now, an interesting discovery was made by a person called Gennari, who found, using this kind of stain, that when you look at V1, there seems to be almost an extra stripe that gets labeled. And as soon as we get here, as you can see, at this point, it stops. And that extra layer, this stria of Gennari, defines area 17 anatomically. When you get here, you suddenly get to area V2; V2 starts here. So even with a simple anatomical technique, you can tell what is area V1, what is area V2, and so on. OK. So now, let's take a cross section of the visual cortex, which I've said is about 2 millimeters in thickness. OK? People have divided this region, this so-called gray matter, and why is it called gray matter? It's called gray matter because most of the fibers there are not coated with myelin, whereas if you go below, you have all the coated fibers that are coming in. So this gray matter has been divided into six subdivisions, layers one through six, in this case numbered from the top down instead of, as in the geniculate, from the bottom up. It was subsequently realized that there are really more than six layers, and that's why it became 4A, 4B, 4C-alpha, and 4C-beta. Now, 4C-alpha and beta is a region that gets a lot of input directly from the lateral geniculate nucleus. And what the nature of that input is, I'm going to show you here. Once again, here's the lateral geniculate nucleus: the six layers, the four parvocellular and the two magnocellular layers. And then what we can do is trace how they connect. This has been done both physiologically and anatomically. And it shows that the four top layers, the parvocellular layers, terminate for the most part, almost exclusively but not 100%, in so-called 4C-beta. Whereas the two bottom layers, the so-called magnocellular layers, terminate in 4C-alpha. So some of these things are almost like an inversion from here to there. OK?
And then another thing we haven't talked about before is that there are also some cells, not numerous, that reside in between the layers of the lateral geniculate nucleus, called the koniocellular cells. You don't have to worry about those now; we'll talk about them later. But when you do look at them, what you find is that they project to the upper part of the visual cortex, both from the parvocellular interlaminar layers and the magnocellular interlaminar layers. So as you might expect, things are complicated when it comes to the brain. There are all kinds of connections, and you're trying to make sense of them, to understand the reasons why we have these connections and what the functions are. All right. So now we are going to take a big step forward and begin to look at the functional aspects of area V1. We are going to look at the receptive field organization of cells. So how do you do that? You can stick either a single electrode or multiple electrodes into the visual cortex, and then you can map out the receptive fields to see how the cells respond. OK? Now, this was an interesting story in the beginning, when this kind of research got going, because there were two major groups trying to understand what the visual responses are like in the visual cortex. And this was not even that long ago; this was in the 1930s, '40s, and '50s. This was done in two places. One was in Germany, and what they did there, they kind of followed in the spirit of Keffer Hartline: they shone diffuse light into the eye and recorded from V1, and they couldn't drive any cells, to the extent that some people said, I think people have made a major mistake; that stuff back here, that's not a visual area, it's something else. But then, at pretty much the same time, another group, initially at Johns Hopkins and subsequently at Harvard, did similar experiments. And what happened is almost an amusing story. The people who did that were Hubel and Wiesel. When they first went to Johns Hopkins, they worked with a person whose name I've mentioned before, Kuffler, who discovered center-surround antagonism. And Kuffler did experiments similar in technique, in terms of activating the eye, to those that had been done by Keffer Hartline, shining light into the eye. Not necessarily diffuse light, but spots of light that he would move around on the retinal surface. This was a very complicated piece of equipment. And so when Hubel and Wiesel came to be postdocs with Kuffler, they said, my god, what is this? We don't know how to work this stuff. Let's do something different that we can handle. And so what they decided to do, and this may sound crazy, let me explain one more thing first. When these experiments were done in the cat, the way the experiment was done was that the cat was put upside down on a table like this. Like that. And the light was shone into the eye with a piece of equipment from above, and you could even look through a microscope to see how the light was shone into the eye. So Hubel and Wiesel said, we can't handle this, but I'll tell you what we'll do instead. We'll take a bed sheet and put it up on the ceiling, and then we'll take a projector and move the light around like this. OK? Now, the projector they used was like an old-fashioned slide projector. You could put a slide in there and move the slide in and out. OK?
And so they would, first of all, just have the whole light come on. And they kept recording, and they wouldn't get anything. They said, oh my god, those people in Germany are right; these cells don't respond to light. What's going on? And then one day, as they were fiddling around with this, Torsten Wiesel pulled out a slide to put a different slide in. And when he pulled that slide out, they heard the sound of action potentials. Like that. What on earth was that? And so they did it repeatedly. [HIGH-PITCHED BEEPING SOUND] Like that. And guess what that resulted in? That finding is the one that led to the Nobel Prize that he and Hubel got, because they discovered that cells in the visual cortex are orientation selective. OK? Meaning that each cell responds to a particular orientation of an edge. And this was an incredible transformation relative to what we saw in the retina and what you had seen in the lateral geniculate nucleus. So they then became systematic about it and mapped out the receptive field organization of cells. And they discovered two major classes of cells, which they called the simple cells and the complex cells. The simple cells were distinctive in the following sense. First of all, all these types of cells were orientation specific. You would find the appropriate orientation by moving the edge around until you got the best response. Then, with a very small spot, you could still activate the cell, not as effectively as with the bar moving, of course, but still quite well. And they found that there was a center region in this kind of simple cell that gave an on response, and a surround region that gave an off response. And then they found another class of simple cells in which the left half, in this case, was on, and the right half was off. And there were variations of this, which I'll mention in a bit more detail later, indicating that the inputs from the on and off systems may be separate, or that there is some interaction between them that makes for the specificity for orientation. Now, they also discovered another class of cells, called complex, that on the whole tended to have somewhat larger receptive fields. And in these cells, even when you used small spots, no matter where you stimulated, you got both on and off responses intermingled. So you did not have a spatial separation of the on and the off responses. Now, what exactly we mean by on and off responses we will discuss for the whole session next time. OK. So now, let me explain to you how you do these experiments once you get a little more sophisticated: instead of just using a projector whose light you reflect around by hand, you can do this on a computer. OK? And that indeed was a big step forward. I would say in the early 1960s, maybe late '50s, people became computerized, as we did especially here at MIT. And the first set of computers that were used to enable us to quantitatively measure these attributes in the visual cortex were the so-called PDP-11/10s, subsequently the PDP-11/20s, and then the PDP-11/34s. Some of you may have heard of these ancient old computers that practically nobody uses anymore, because of all the advances that have been made in computer technology. So what you did then, using a variety of means, I don't have to describe them in detail, was to move a bar of light across the receptive field in different orientations. So here's an example. You first use very small spots, and you find the receptive field.
The cells do respond to very small spots of light, but they don't respond at all to large uniform spots. And once you've found the receptive field, what you can do is take a bar and move it across the field in different orientations, like this. So here's a direct example. Like that. OK? As the bar moved across the field, the cell responded vigorously. And if you then do this systematically, as I've shown here, you can generate what is called an orientation tuning function. So you can quantitatively establish just how sharply specific a cell like this is to different orientations. Here is an example. What you can see is that when you move the bar in this direction, you get a huge response here, and then rapidly, progressively less of a response as you rotate away. And then you can establish, by taking the half height, or whatever point here, what the width of this function is. And you can plot that for hundreds of cells to establish just how sharply they are tuned, or how different types of cells, say simple and complex, differ in how sharply they are tuned. Now, that alone was quite amazing. But then, if you look at this figure, I want you guys to be my best detectives. I bet all of you are the best detectives, right? So if you are, you have to tell me what else is in this figure that tells you something more about this cell than just orientation. Yes. AUDIENCE: Also, the direction as well. PROFESSOR: A-ha. There we go. Best detective. OK. Is that your middle name? Best detective? AUDIENCE: Sure. PROFESSOR: OK. Very good. So what you see here: if this cell were only orientation selective, you should get another peak here, because you have the same orientation but moving upward, like this. OK? But you don't have anything like that. So what does that mean? It means that this particular cell is not only orientation specific, but it is also direction specific. Now, that's quite something. And the overwhelming majority of cells in the visual cortex, in V1, are direction selective as well, in addition to being orientation selective. And when you go to other areas, as we'll see in just a minute, there are some areas in which virtually all the cells are direction selective. So direction selectivity is a central attribute of visual processing, as revealed by the fact that all the cells, I shouldn't say all, but so many of the cells in the visual cortex have that attribute. OK. So now we can ask the next question: are there other attributes of the visual scene for which the cells of the visual cortex are selective? So far we have orientation, and we have direction. Another one that has been studied is spatial frequency selectivity. Now, how do you do that? The most common way this has been done is to use sinusoidal gratings; an example is shown here. You could also use square-wave gratings, but sinusoidal gratings work extremely well for a variety of reasons, which we will talk about later. And what you do then, on each trial, as you move the grating back and forth across the receptive field in the optimal orientation, is vary the spatial frequency. You can make it extremely high or extremely low; this one is obviously very low. OK? And if you do that systematically, once again using a computer system, you can establish to what degree cells are specific to different spatial frequencies. Everybody understand that? OK.
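To make the tuning-function idea concrete, here is a small Python sketch with a made-up model cell; the von Mises shape, the preferred direction, and the firing rates are all assumptions for illustration, not recorded data. Note how a single peak over the full range of directions, with no second bump at the opposite direction of the same orientation, is exactly the signature of direction selectivity just discussed.

```python
import numpy as np

# A made-up direction-selective cell preferring motion at 90 degrees, with a
# von Mises-shaped firing rate over the full 0-360 degree range of directions.
directions = np.arange(0, 360, 15)          # sweep the bar in 15-degree steps
preferred, kappa, peak_rate = 90.0, 2.0, 50.0
rates = peak_rate * np.exp(kappa * (np.cos(np.radians(directions - preferred)) - 1))

# Tuning width at half height, as described for real recordings: how far around
# the peak the rate stays above 50% of the maximum.
above_half = directions[rates >= rates.max() / 2]
print("half-height width: about", above_half.max() - above_half.min(), "degrees")
```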
So let me show you what happens. Here is an example of a simple cell, and here's an example of a complex cell. We have different spatial frequencies on each line, as specified here. And it shows that both of these types of cells respond to some intermediate spatial frequency, and don't respond at the extremes. So that means that these cortical cells, in addition to being orientation and direction selective, are also spatial frequency selective. And that has led to many interesting ideas about what the coding operation in the visual cortex is that enables you to see; we will discuss some of those later. OK. The other important feature worth mentioning is that simple cells respond selectively to the phase of the grating, whereas complex cells respond in a more general fashion. OK? All right. So now we can summarize, and I'm going to add a couple more things, what the major so-called transforms are in the visual cortex. And this is remarkable: as you go from the geniculate to the visual cortex, suddenly there are all these transforms that somehow enable you to see things better. The first one, of course, we talked about a lot: orientation. The next one is direction. Then spatial frequency. Those are the three I gave you examples of. But one thing I haven't mentioned so far is that you also have a transform in terms of binocularity. As I mentioned already, in the geniculate you have specificity by layer of the input from the left and right eyes, which means that the cells in each of those layers are driven by only one eye. OK? They are monocular. But when you come up to the cortex, once you get above or below the major input layer, 4C, most of the cells can be driven binocularly, meaning they get an input from both eyes. And we shall examine what that means. And another thing, which you must already have inferred from looking at the receptive fields, is that there is a dramatic on/off convergence. In complex cells, it's so complete that the on and off responses are totally intermingled; the cell responds to on and off wherever you present the stimulus. The simple cells also respond to on and off, but there's a small spatial separation between the on and off subregions. In addition, and we will encounter this again later, some of the cells in the visual cortex, not all of them, get a convergent input from both the midget and parasol systems, while others get an input from only one or the other. So that further highlights the extensive increase in complexity of the analysis being performed in the visual cortex. OK. So now, to schematize this, we can say the following. Here we have a cell in primary visual cortex, in V1. OK? Many of these cells, most of them, get an input from both the left and the right eyes, as shown here. Many of them also get a convergent input from the parasol and midget systems. And then the output of these cells, in some fashion or other, should be able to tell us, either from the same cell or from separate cells that we've examined, about luminance, color, orientation, spatial frequency, depth, and motion. So those are some of the major tasks a cortical cell performs so that you can see. That, then, is a very general scheme that we are going to examine in considerably more detail.
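One standard way to think about why a simple cell would be band-pass in spatial frequency is to idealize it as a Gabor filter: a Gaussian envelope times a sinusoid. That is a textbook model, not the lecture's own data; in this hedged Python sketch, the envelope width and preferred frequency are arbitrary choices, and the point is only that the linear response peaks at a matched intermediate frequency and falls off at both extremes.

```python
import numpy as np

# A simple cell idealized as a 1-D Gabor filter tuned to 1.5 cycles/degree.
x = np.linspace(-2, 2, 401)                         # degrees of visual angle
gabor = np.exp(-x**2 / (2 * 0.4**2)) * np.cos(2 * np.pi * 1.5 * x)

# Linear response to gratings of different spatial frequencies: band-pass
# behavior, biggest at the matched frequency, small at both extremes.
dx = x[1] - x[0]
for f in [0.2, 1.5, 8.0]:                           # cycles/degree
    grating = np.cos(2 * np.pi * f * x)
    response = abs(gabor @ grating) * dx
    print(f"{f:4.1f} cycles/deg -> response {response:.3f}")
```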
All right. So now we take another step forward. The question came up: how are these attributes, in general, arranged in the visual cortex? Is everything just helter-skelter, or is there spatial separation among the various attributes? And this is yet another major line of research that Hubel and Wiesel carried out, for which they got the Nobel Prize, along with their other discoveries; they got the Nobel Prize in 1981. OK. So this is the study of what is called cytoarchitecture. OK? The ability to understand the layout of the functional attributes of the cells in the visual cortex. All right. So let me first tell you about the initial experiment Hubel and Wiesel did. They would inject a label into one eye. Then, if they waited a week or so, you would get transneuronal transport, meaning that the label would go to the geniculate, be taken up by the cells in the geniculate, and then travel up to the visual cortex. And when they looked at this, having put the label into the left eye, what they found in the cortex, of the cat in this case, was these alternating labeled and unlabeled regions. And the labeled and unlabeled regions are pretty much equally distributed. The assumption therefore was, and it was proven of course, that if you were to label both eyes, you would get a continuous label including the in-between areas; or if you labeled the other eye, you would get the dark areas labeled. So this established that we have an orderly arrangement of, if you will, ocular dominance columns. That's what they are called: ocular dominance columns. Now, this is a cross section. Instead of doing a cross section, you can take the brain and make horizontal cuts across it. Now, the brain is slightly curved, of course, depending on the species how much. And so, when you make these very thin sections, each will only tell you about that particular level. But then you can put them together: you can make a montage. So let me explain that. Here is a single section of the left eye columns that are labeled in the monkey. In other words, to repeat, you inject a label into the left eye, and that is transported transneuronally to the visual cortex, and this is how it lights up. But now what you can do is take each of these sections. There are many sections; I told you the cortex is about 2 millimeters in thickness. OK? So you can take a huge number of sections, depending on how thick or thin you want to cut. And then you can label each of these, and you can superimpose them. And this is what we call a montage. In this case we have five sections that have been superimposed, so you can see it. Now, these so-called columns, and it's a column because it goes through the thickness of the cortex, are the ocular dominance columns. OK? Ocular dominance columns. We will next examine whether there are also columns for orientation.
And then, to understand what we're talking about in terms of size, here we have David Hubel's thumbprint. OK? The spatial frequency here is roughly twice that of what you have here. So if you look at your thumb like this, it can give you a sense of how unbelievably fine these columns are in the visual cortex. OK? So that's basically the layout. Now, the next big question that came up in the work that Hubel and Wiesel originally did was whether you also had columns for orientation. So how do you do that? Well, another technique evolved, and everything that you discover in this business depends heavily on being able to come up with a new technique, or on using a technique somebody else invented. When you do that, you can almost inevitably make a major discovery. But if you're not sensitive to new technologies, it's unlikely that you're going to make a major new discovery. So you've got to keep your nose to the grindstone and constantly ask, what are the latest techniques? All right. One of the remarkable techniques that was developed uses the substance called 2-deoxyglucose that I mentioned; I'm not going to go into the details, but it is radioactively labeled, and you inject it into the bloodstream. And what happens is that those cells in the brain that are highly active absorb more of this 2-deoxyglucose than those cells that are inactive. So the experiment you would do to look at the selectivity for orientation is this: you take an animal, which is paralyzed, and present for an hour a vertically oriented set of bars, or sine-wave gratings, that keep moving, and moving, and moving. And then those cells that are selective for that orientation will take up a lot more of this 2-deoxyglucose, and therefore will be heavily labeled. So the question is, what does that look like? Here's an experiment like that, and this happens to be in the tree shrew. It shows, in this case, when you use one particular orientation, that you get these stripes, which reflect the columns of cells in the visual cortex that respond best to that orientation. So if these are, say, horizontal, the ones in between are vertical. All right? If you then do this systematically in the monkey, you can see exactly what that looks like. And then you can ask the big question that was posed: what is the relationship between orientation selectivity columns and ocular dominance columns? That was addressed by a set of experiments, also in the Hubel-Wiesel laboratory. And I'm proud to say that the prime person who did that experiment was one of my former students, who got his PhD in my laboratory, Michael Stryker. So he did these experiments. And in a way, the experiment was done to test a hypothesis that had been proposed by Hubel and Wiesel. And let me interject at this point that one of the important things to realize is that you do not want to fall in love with your hypotheses, because in most cases the hypothesis that you dream up, so to speak, out of thin air will end up being wrong when it comes to the brain. In this case, that is also what happened. This is the model they came up with.
They proposed what we call an ice cube model, which in one direction specifies orientation columns, and in the other direction, ocular dominance columns. So this is the ice cube model. And now that you had this model, this hypothesis, you could test it. And the way this was tested was to use the same procedures I described to you, but in the same brain. All right? So here we have a part of the brain, again looking at it from the top down, which had been labeled for ocular dominance columns. And then you take that same brain, and you label it for orientation columns. So now, if your hypothesis is correct, let me clarify this, if you say that they are at right angles to each other, then you would expect that where you have a set of orientation columns, the ocular dominance columns should be at right angles to them. Right? So if you label both and draw them out, you can see whether or not that hypothesis is correct. And here is a real example of what they did. One of these is orientation, and the other, the thick and the thin lines, is ocular dominance, or maybe the other way around; it doesn't matter. At any rate, if the hypothesis were correct, you would expect something like what you see here: the two sets at right angles to each other. OK? But if you look at it carefully, you can see endless locations where they're not at right angles to each other. All right? And because of that, even though it's a very attractive hypothesis, people came up with other hypotheses, saying that this one is really questionable. And an alternative hypothesis that was proposed arose in part from yet another new discovery. This discovery was made by a woman called Margaret Wong-Riley. But this particular picture was processed, again this is a horizontal view looking down at the brain, by Marge Livingstone, who is at Harvard and did all the collaborative work with Hubel. So what you see here are the so-called cytochrome oxidase patches. OK? Now, when this was first discovered, people said, my god, they had never seen anything like this before. This is the only kind of stain that showed this up. Let me explain. This is also an activity label, the cytochrome oxidase: it selectively labels those cells that are most active, rather than those that are inactive. And so this is what these patches look like. And people kept saying, why do we have these patches? We've never seen them before in the visual cortex. What on earth are they for? What do they tell us? So all kinds of hypotheses evolved. And people asked the question: since we have this thing, what if we recorded in one of these patches, as opposed to outside of them? This was done by several people. Let me just show you a much larger section. Here again is the fovea, and this is about 80 degrees out. This is what these patches look like at high contrast. OK? So indeed, they're extremely frequent and very orderly. And the question, of course, is how these patches relate to the columns that we talked about. And what was discovered is that if you look at the ocular dominance columns, these patches are always in the middle of a column. So in other words, to make that clear, if you have here a column like that, the patches would be like this. OK? And the next one over, again the patches would be like that. OK? They would not fall on the borders between columns.
So there's a direct relationship between the ocular dominance columns and the layout of these patches. OK? All right. So now, as a result of this work, people came up with, and let me finish, another hypothesis, which is called the radial model. They argued that the cells in the center of these patches are largely unoriented, and that orientation selectivity goes around in a radial fashion, as indicated here, within each of the ocular dominance columns in the visual cortex. OK? AUDIENCE: I'm good now. PROFESSOR: Are you good now? AUDIENCE: Thank you. PROFESSOR: All right. So that's the second model that emerged. And this model was also highly questioned, because the evidence did not show these nice, clear radial layouts of orientation selectivity. So then, finally, another technique emerged, developed at Harvard Medical School by a fellow called Blasdel. He developed a technique, one that other people had actually used elsewhere, but he applied it to the visual cortex, called optical recording. You could record optically from the visual cortex straight down, and you could vary orientation and ocular dominance. And when you did that, this is what a typical example looks like. OK? Those red lines show the layout of the orientation selectivity of the cells, and the patches here are those regions where you don't have orientation selectivity. OK? So this indicated that you do not have a computer-like layout in the visual cortex, going one way for ocular dominance and the other way for orientation. It's a bit messier. And so this particular arrangement was called, or at least I called it, the swirl model. OK? But actually, to call it a model is incorrect. This is a fact; the others are models. OK? So now, to put it all together: here is the Hubel and Wiesel ice-cube model, where orientation and ocular dominance columns run at right angles to each other. Then we have the radial model. And finally we have the swirl model, which is the way it really is. All right. So that, in essence, tells you about the cytoarchitecture of the visual cortex. Now we are going to take another big step forward. We are going to look at extrastriate cortex. As I've mentioned to you before, when you look at the visual cortex, it comes in many subdivisions, maybe as many as 30 visual cortical areas, of which we will talk about only a few, because otherwise it's totally overwhelming. And these areas were presumed, maybe correctly so, initially at least, to be involved in increasingly complex visual analyses. All right? Now, one of the prime ideas, or hypotheses again, if you will, which to my mind turned out to be incorrect, is that each of these areas in the brain specializes in analyzing a certain aspect of vision. So what are certain aspects of vision? Just a few examples: analyzing color, analyzing shape, analyzing motion, and so on. And so the argument was that each of these visual areas specializes in one of those. That was an interesting idea, and so people began to study extrastriate cortex in more detail. I'm going to say a few words about it, but let me first go back even further in history. Let's go back to the early 1800s. There were two very famous people then, who were called Gall and Spurzheim. And they came up with the idea which eventually was called phrenology. How many of you know phrenology? Oh, well. My goodness. All of you do. OK.
So phrenology, in essence, claimed that there are specific areas in the brain, as shown here, that specialize in certain aspects of information processing. And how did they come up with this? The way they came up with it was to palpate the skull. Wherever there were big bulges, they felt the person had a lot of that attribute, and if it was small, the person didn't have much of it. And so they played with it and played with it, and in an 1812 publication they came up with essentially 35 basic attributes of processing for the human cortex. And the interesting thing about this, which gives you a sense of history and how much we have changed, is what these specialized areas were conceived to be. Now, to make that a little clearer for you, let me stop here for a second. What I want to do is enlarge this, so you can see it better. OK. So here are some of these areas. And if you look at them, you will be almost taken aback by what the names are here. Some of the names that you can see, and let's see if I can put them in the right order here; actually, they called these not only propensities, but also sentiments. In those days sentiments were very important. There's not much brain allocated to sentiments anymore. But at any rate, there were such things as amativeness, cautiousness, benevolence, veneration, wonder, and ideality. Those are just a few examples that you can spot. Amativeness, right there. And indeed you can ask: really, would that much brain be devoted to amativeness? Certainly that's not the case anymore. So maybe our brains are totally different from the way they were back in 1812. I doubt it. And so I think these were rather fanciful, very far-fetched hypotheses. And as I mentioned before, that's par for the course: so many hypotheses end up being wrong, and the best way to overcome that is to do solid experiments to find out what is really going on. OK. So now we move on, and we need to understand the more modern techniques that enable us to say things about these higher cortical areas. The first one of these is architectonics. It's obvious what that is: simply to look at the brain and identify the various areas in a systematic fashion. The second one is to look at connections. You do anatomical studies to determine which areas connect to which areas, and in what fashion. Another one is topographic mapping. We talked about that already: one approach is to do single cell recordings, systematically moving the electrode across, or to do the 2-deoxyglucose type of studies. Another one is what we call physiological characterization. That's also known to you: when you looked at the cells in V1, you established that they're orientation and direction selective. That's a physiological characterization of the cells in V1. And then you can ask, well, what about V2, V4, and so on, these other areas? What are the cells like there? What do they respond to well? And so on. And then another very important technique is to say, OK, what if we removed a particular area that has been identified? What kind of loss do you have in vision, or in the brain in general? If you remove a particular area, what kind of deficit arises?
And if that's a specific deficit, you can infer that that particular area plays an important role in the analysis that you can no longer do. That can also be done, instead of making specific lesions, and it's less accurate, by studying humans who have had various kinds of cerebral accidents. And that's one of the things that our former chairman, Hans-Lukas Teuber, did extensively: studying people after the Second World War who had sustained specific brain injuries, to see what kinds of deficits they had suffered. On the basis of that, you could infer what various brain areas do. And the last technique I want to mention is imaging, of course. We've talked about that. You can present specific stimuli to activate the brain, and then functional magnetic resonance imaging, for example, can tell you how important particular areas are in, say, recognizing faces. And some of that work is being done right here, by Nancy Kanwisher, for example, in our department. OK. So to do this systematically, you have to have some idea of what kinds of functions we want to study. And so you want to break down visual functions. One way to break them down, and I'm not saying this will be satisfactory to everyone, is to talk about so-called basic visual capacities and higher level ones. When you talk about basic visual capacities, you ask: how well can you see color? How well can you distinguish differences in brightness? How good are you at seeing basic patterns? Textures? Motion? Depth? OK? Those would be your basic visual functions. And then when you come to intermediate visual capacities, things become much more complex, of course. You come up with constancy: how come, when I look at something that's nearby and something that's further away, I can recognize that it's the same thing? Or, how can we select things in the visual scene? How can we recognize things, like recognizing faces? How can we make transpositions? How can we make comparisons? And how can we locate things in space? Those would be some of the so-called intermediate visual capacities. And I'm not even going to mention high-level visual capacities, because they are even more complicated, and we know even less about them. OK. So now, let's lay out the visual areas, just the very basics, because it's unbelievably complicated. Here we have a human brain. Back when this was put together, this posterior area here, which is the primary visual cortex, was called area 17; we now call it V1. And 18 would be V2, and 19, V3, and so on. OK. So now again we go back to the monkey brain. And when we look at the monkey brain, what you see here is again the central sulcus. You're going to see this over and over again, and it's going to stick in your head. Here's the lunate sulcus. And here is area V1. This is where we examined the properties of the cells that we just talked about. Then if you move on, right at the edge here of the lunate, V2 starts and then continues under the gyrus. And then in this region here, we have what is called area V4. So those are some of the basic areas, and you are going to see a few things about them. Now, people have studied this extensively, and they came up with frightening diagrams. This is a flattened monkey brain. This is V1. And then you have a whole bunch of areas that follow in succession here. I'm not going to label many of these.
We talked about V2 and V4, but there are many, many more; I'll bring up a few more in a minute. And then, if you look at the interconnections that have been mapped, it's totally frightening. There are hundreds and hundreds of connections going every which way. So it is very difficult to say that a particular area receives inputs from only one other area, or anything like that. There is just a tremendous amount of interconnection, indicating that any analysis is likely to involve thousands and tens of thousands of neurons being active in many, many different brain areas. OK. Now, the major cortical visual areas that we shall consider are V1, which we already did, V2, V3, V4, and MT. Then, in the temporal region of the brain, we have inferotemporal cortex. In the parietal region, we have the lateral intraparietal area and the ventral intraparietal area, and then MST, which is called the medial superior temporal area. And lastly, in the frontal lobe, we have the frontal eye fields. We will talk about each of these at various points in the course. Today, we will just briefly talk about V2, V4, and MT. OK. Now, a couple of general principles emerge from this kind of work. The first is that the size of the receptive fields in these different areas changes dramatically. In V1 the receptive fields are very small. I mean, bigger than in the retina or in the geniculate, but still very, very small. But then when you get to V2, they're about three times bigger in diameter on average. And when you come to V4, they're huge. OK? And that's true throughout: the specificity of the location of the receptive fields decreases as you progress up into higher cortical areas. Now, there's another very interesting fact that was discovered, which again nobody had hypothesized: namely, that the way these areas are laid out next to each other is not what you would have thought. Let me give you an example. Suppose we take an electrode and record in V1, and then we go in progressive steps across V1, so that the receptive fields move out from the fovea into the periphery. If you do that, here's the receptive field of that cell, right there. The next one is a little bit further out. The next one is a little bit further out. And here's the next one, and that one is there. OK? So what you have is a progression of receptive fields: they get bigger, and they move from the center out. So now the big question is what happens when you cross into V2. Think about it for a minute. What do you think? Where do you think the receptive field will be when you just ride across from V1 to V2 in the map that I showed you before? Well, I think you will be surprised. You will find that the next receptive field is at almost the same location, very close to the same location. And of course, the receptive field is about three times bigger. Now, what happens when you move one more over? Well, the receptive field comes back towards the center. And again, and again, like that. So these areas connect to each other in reverse, so to speak. In other words, if they didn't, one would connect to here, two would connect to there, and so on. But instead, four and five can have short connections; three and six, longer ones; and so on. So we have this very curious arrangement. And to this day I don't have a clue as to why this happened in the course of evolution. I mean, there are all kinds of hypotheses.
One hypothesis is that this may take less elaborate wiring-- not as long wiring overall-- because if it were the other way around, then all these wires would be long. So that could be a reason, but one is not sure. And to my knowledge, no experiment has yet been done to truly explain why we have this curious arrangement. Now, when you look at area V2, we find yet another very interesting feature. When you come back to area V1-- this here would be V1-- we have those famous cytochrome oxidase patches. But then when you get to area V2, instead of patches, you have these elongated bars. OK? And if you look closely, you can see that there's a tendency for the bars to alternate from thick to thin, with an inter-bar area between them. And so when this was discovered, people began to record there, asking why this is different and what it signifies. And when they did that systematically, they came up with a model, which may not be entirely correct. But the claim was that the thin stripes in V2 get inputs predominantly from the patches that you have in V1. And the thick stripes get input from the so-called parasol system, meaning that they would be heavily involved in things like what the parasol cells do-- namely, play an important role in motion perception. And the interstripes, sometimes called the pale stripes, get input in this particular model from the orientation specific cells. OK? So now, once this was proposed, people began to do experiments recording in the thick and thin stripes of V2 to see what the properties of the cells there are, recognizing of course that the size of the receptive fields is uniformly about three times bigger than in V1. So when they did that, they looked at orientation. And they found that in V2, most of the cells in the thick stripes are orientation specific, but so are many in the thin stripes, and many also in the pale stripes. So there didn't seem to be a huge distinction. Then came end stopping-- I haven't talked about that, so let me mention what end stopping means. If you take a bar of light and you move it across a receptive field, you get a vigorous response when it's a fairly short bar. But when you make the bar a lot longer, you get less of a response, because of some surround [INAUDIBLE]. So that's what was called, by Hubel and Wiesel, end stopping. And that attribute is again fairly similar across the three kinds of stripes. When you come to color, in the thin stripes there is much more color sensitivity than in the thick stripes. When it comes to direction, again not that much difference-- maybe more in the thick stripes. Disparity, more in the thick stripes; disparity refers to depth perception, as we have talked about. So the prime message here is that you do not have a complete, clear separation of function in those three stripes that had been identified in V2 on the basis of their inputs from V1. So now, we are going to move on to V4. When we talk about area V4-- and we'll talk about it again in more detail later on-- we have a huge increase in the complexity of the response properties of cells. And here is area V4, as you can see. A lot of recording has been done in this area; hundreds of papers have been published. And one can conclude a few things on the basis of that. First of all, the receptive fields here are even a lot bigger than in V2. And the receptive field properties are far more complex than in either V1 or V2. And the response properties are dynamic.
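Before going further into V4, the end-stopping behavior described a moment ago can be captured in a toy model; the optimal bar length and the surround gain are assumed values, not measurements.

```python
# Toy end-stopped response: an excitatory center summed against
# suppressive end zones along the bar's length (illustrative numbers).
def end_stopped_response(bar_length_deg,
                         center_len_deg=1.0,   # optimal bar length (assumed)
                         surround_gain=0.6):   # end-zone strength (assumed)
    drive = min(bar_length_deg, center_len_deg)          # center fills up
    suppression = surround_gain * max(0.0, bar_length_deg - center_len_deg)
    return max(0.0, drive - suppression)

for length in (0.25, 0.5, 1.0, 2.0, 3.0):
    print(length, "deg ->", round(end_stopped_response(length), 2))
# The response grows until the bar fills the center (1 deg), then declines
# as the lengthened bar invades the suppressive end zones.
```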
Back to V4's dynamic properties: if the monkey is paying attention, or is looking for something, the cells will fire a lot more than when he's not looking for something. And yet another discovery was that the amount of activity is also modulated by how you move your eyes as you look around the visual scene. And then, an important conclusion from all this is that this is not just a color area. The reason I mention that is because a good many years ago an experiment was done in England-- I shan't name the person who did it-- claiming that this is a color area. Now, one of the reasons he made that claim was that he removed area V4, or studied humans who lacked area V4, and he found that they had difficulties in telling colors. Well, that was nice. But the problem was-- and this is a good lesson for all of us-- that the only thing he tested for was color, because that was his hypothesis. Had he tested a multitude of other functions, he would have found that removing area V4 produces much more significant deficits in many other visual functions than in color. So again, we'll talk about that in more detail later on, when we talk about the processing of specific visual attributes. Now then, the next areas I want to deal with-- the last areas we are going to discuss-- are areas MT and MST. MT stands for the middle temporal area, and MST stands for the medial superior temporal area. And again, to look at the brain, what you can see here is the superior temporal sulcus. If you went into it, it's about 13 millimeters deep. On the posterior bank, we have area MT; and on the anterior part of it, we have area MST. Now, this is a remarkable area. An incredible amount of work has been done on it. And the major discovery that was made is that the cells in MT and MST respond predominantly to direction. Virtually all the cells are direction specific. And this is believed to be due in part to the fact that this area gets much of its input from the parasol system. So to look at this in more detail, here is a receptive field, quite large. And if you move a bar of light across it in this direction, you get a huge response-- this is the cumulative response histogram. But if you move it in the opposite direction, you get virtually no response; in fact, you often get an inhibition. Tremendous direction specificity. And you get the same in MST, but there the receptive fields are just gigantic. Again, though, the direction specificity is just as pronounced as it is in MT. So these two areas play a central role in motion analysis, as we shall see in more detail later on. All right. So now, if you look at the spatial layout of this, what you do is move an electrode into the area-- you move it across the represented area by going into the sulcus. And what you can see here is a systematic progression of preferred directions as you move the electrode. Here the distance is expressed in micrometers. OK? Now, if you map that out, you can create a layout of area MT. And what you can see is that there's a systematic columnar arrangement of different direction specificities. This is a general principle, by the way, of the way the cortex is organized. It was first actually discovered by Mountcastle in the somatosensory system, showing that there's a columnar arrangement there, and subsequently it was shown that this is also true for many other areas, including vision and audition. All right. So now, the last thing I want to mention very briefly-- we'll get back to it-- is so-called inferotemporal cortex.
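Before leaving MT and MST, here is a toy rendering of the direction selectivity and the columnar progression just described; the tuning shape, peak rate, and degrees-per-micrometer figure are all invented for illustration.

```python
import math

# Toy direction tuning for an MT cell: half-rectified cosine tuning
# around a preferred direction. Peak rate and tuning shape are assumed.
def mt_response(stim_dir_deg, preferred_deg, peak_rate=100.0):
    delta = math.radians(stim_dir_deg - preferred_deg)
    return max(0.0, peak_rate * math.cos(delta))

print(mt_response(0, 0))      # 100.0 -- preferred direction
print(mt_response(180, 0))    # 0.0   -- opposite direction (real cells
                              #          can even be actively inhibited)

# Columnar progression: preferred direction shifting systematically with
# electrode depth, as in the penetration described above. The rate of
# 0.36 degrees per micrometer is an invented, illustrative value.
for depth_um in range(0, 1001, 250):
    print(depth_um, "um ->", round((depth_um * 0.36) % 360), "deg preferred")
```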
Now, on to inferotemporal cortex. Here once again, we have a map of the monkey cortex. And down here is the temporal lobe; this is where the inferotemporal cortex resides. It has been studied extensively by people like Charlie Gross, and more recently with imaging techniques by people in the department here. It was shown that this area has a lot to do with object recognition, face recognition-- a very complex, high level area. OK. So now, I want to summarize what I have covered today. First of all, the contralateral visual hemifield is laid out topographically in V1 in each hemisphere. You know that by now, cold. Secondly, the major transforms in V1 are orientation, direction, and spatial frequency selectivity, and binocularity. You have an ON/OFF convergence, and you also have in many cells a convergence of the midget and parasol systems. Then, V1 is organized in a modular fashion-- another way to put it is that you have a columnar organization-- and we talked about three models that had been proposed. One of them, the original one by Hubel and Wiesel, is the ice cube model. Then we had the radial model, and the swirl model. And I pointed out to you that the swirl model is not really a model; it's a fact. That's how it's laid out. OK. And then I said also that there are more than 30 visual areas that make more than 300 interconnections. Extrastriate areas do not specialize in any one single function, contrary to what had been very, very popular maybe even 10 or 15 years ago. The receptive field size of neurons increases greatly in progressively higher visual areas. That is a very solid fact. Then, area MT is involved in the analysis of motion; and as we shall see later on, it also contributes to the perception of depth and of flickering stimuli. Area V4 engages in many aspects of visual analysis, and its neurons have dynamic properties: attention and eye movements modulate the way those cells respond. And then in inferotemporal cortex, high level visual analysis takes place that includes object recognition, and therefore also the recognition of faces. So that summarizes what I had to say. And to conclude in general, we can say that the cells in the cortex show one very important advance in their mode of functioning compared to lower areas, in that the cells are multifunctional. Already in V1, any one cell can give you information about direction, as well as orientation and spatial frequency. And if one cell can do many, many different things, that's a good thing-- because if each cell in the brain specialized in one thing, as in the old hypothesis of the famous grandmother cell, according to which there was a cell in the brain that represented your grandmother-- if individual cells were feature selective in that way, you would need a brain as big as this room to accommodate all the abilities that you in fact have. So cells in the brain are multifunctional. They carry out many, many different analyses, just as computers do complex mathematical analyses. And because of that it is of course very difficult, especially when you study higher visual areas, to learn just how these cells function-- because to understand that, you would need to record at the same time from virtually all the cells in an area to see what each of them does, from which you could derive what the actual analysis is.
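To put rough numbers on that grandmother-cell argument, consider a toy population of binary cells; the population size is chosen purely for illustration.

```python
# Why dedicated one-object-one-cell coding does not scale, in round numbers.
n = 100                                # toy population size (assumed)

dedicated_capacity = n                 # one "grandmother cell" per object
combinatorial_capacity = 2 ** n        # distinct activity patterns available
                                       # to a multifunctional population

print(dedicated_capacity)              # 100 objects
print(f"{combinatorial_capacity:.2e}") # ~1.27e+30 patterns
```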
So indeed, we have a very complex task ahead of us in trying to understand how this very complex interaction among neurons eventually results in your ability to analyze various aspects of the visual scene. And that's true not only for vision; it's true for many other areas. In vision it's very, very complex, because you have this incredible number of cells. As I mentioned before, you have more than a million retinal ganglion cells that project from each eye into the brain. And then this multiplies in the higher visual areas, so you have millions and millions, billions of cells to perform all of these analyses. Now by contrast, when you look at the auditory system-- and I'm not saying this to belittle it, I'm just saying it's probably easier to understand-- the fibers that project from each cochlea to the central nervous system amount to about 30,000. So we're talking about a 20- to 30-fold larger number of neurons involved in the visual system. And that's of course because vision for us is a very, very important attribute, perhaps more so than many other attributes that we have in the nervous system. OK. So that's what I had to cover today. And next time, we're going to sort of move back: we're going to talk about the ON and OFF channels, to try to understand why on earth they evolved. OK?
The first thing I would like to tell you, which I did not mention last time, concerns the reading assignment in the syllabus that we handed out-- and by the way, does everybody have one? If you don't, we have some extras here. You all have one? Good. Anyway, as you can see on the first page, there is a section called preparatory reading. So I would recommend that you read this recommended reading each time if you can; it is available, of course, on the internet. That will make it easier for you to follow the lecture, because this course covers both vision and audition, and relies on imparting a great many facts about the workings of these two systems. And it's easier to memorize these facts if you do the preparatory reading. All right, so today, then, what we are going to do is start by talking about the layout of the visual system. And then we're going to talk about the retina for much of the session. At the very end, we'll talk a little bit about the lateral geniculate nucleus. In each case, I will try to tell you how some people thought about things, and what the progression was of the discoveries that have been made about the visual system. It is certainly an area of research which has been incredibly successful, and has resulted in quite a number of Nobel Prizes for the discoveries individuals have made in uncovering the workings of the system. Furthermore, many of the discoveries that investigators made came as incredible surprises, because so many of the findings were quite unexpected. And I will try to point these things out during the course. My section, as I mentioned before, will consist of 12 lectures. That will be followed on October 23rd by a midterm exam-- and to reiterate, that's going to consist of a series of multiple-choice questions. And then following that, we'll move on to the auditory system, which Chris Brown is going to present to you. All right, so the first thing I want to do, then, is to talk about the basic wiring of the visual system, which in itself has yielded a number of unexpected surprises, and has also raised some interesting questions. When you look at animals that have sideways-looking eyes, like the rabbit and most amphibians, the two eyes look out in two different directions-- to the left and to the right. And that enables them to see a very large portion of the visual field. When that is the case, as in this picture shown here for the rabbit, what you find is that there's only a small area which is seen by both eyes. But then when we look at higher mammals, and particularly primates and humans, what you find is that there was a huge change in the course of evolution, bringing the eyes to the front. Now that change is very interesting, and we can ask: why on earth did that happen? Because it meant that, obviously, a human cannot see behind himself, whereas these rabbits can see a much larger portion of the visual field. So you sacrificed the ability to see all the way around the world for being able to see with both eyes to the front. And the question is, why did this happen?
Now we are going to discuss this in some detail, because one of the prime reasons this happened-- I believe, at least-- is to be able to process information about depth better than rabbits and other such animals can. And when we talk about depth perception, we are going to deal with this issue in more detail. Now if you look at an animal like a rabbit-- and the same goes, as I've mentioned already, for fish and amphibians-- you have these sideways looking eyes. And we can ask the question: how do these two eyes make their connections, through the ganglion cells, to the central nervous system? That, in fact, was quite an issue at one time, believe it or not, before anatomical techniques became more sophisticated. And the reason it became an issue is that Cajal-- who, as I've mentioned, received the Nobel Prize in 1906 with Golgi for the beautiful work he had done not only on vision but on the nervous system in general-- discovered, as did several other investigators, that the optic nerve crosses over at the so-called chiasm. So the input from the left eye projects into the right half of the brain, and from the right eye into the left half of the brain. And that became a huge issue-- why on earth would this happen? We are going to deal with this repeatedly; it's a very interesting issue. And even more interesting is the fact that when the eyes moved to the front, this kind of connection became much more complicated. But before I get to that, let me just say a few more words here. First of all, in fish and amphibians you have a huge structure called the optic tectum, which is the primary visual processing center in the brain. There is also the lateral geniculate nucleus of the thalamus in these animals, but it's quite small and not very well developed. And there is a cortex in these animals, but it is, again, a primitive area compared to primates, for example. And the lateral geniculate nucleus projects to the cortex. So the left cortex in these animals gets input from the right eye, and the obverse is the case for the right cortex. Consequently, this cortex sees this part of the visual field, and this cortex sees that part of the visual field. It's almost like an inversion. Now we come to what happened as a result of the eyes moving to the front. When the eyes moved to the front in the course of evolution, that necessitated a major rewiring of the visual system. So that's what we are going to look at next, because that's us. All right, so what we have here, then, is the two eyes looking to the front. And imagine that these two eyes are looking at something-- there's a dot here, and both eyes are looking at the same dot, so that you only see a single dot. OK. So now what happens is really something surprising. But let me interrupt for a second: imagine that you cut your eye-- I mean, don't do it-- imagine that you cut each eye vertically in half, OK? That means that each eye has, if you will, a nasal half and a temporal half. So here we have the two eyes. Here's the left eye, and its nasal half projects across; and the same thing is true for the right eye-- its nasal half projects across. But now if you look at the temporal hemiretinae-- hemiretinae, remember, that's the word for it-- something very interesting happens.
The temporal hemiretinae don't cross over; they remain on the same side. So we have a color code here: the red parts, the nasal hemiretinae, project across, while the black parts, the temporal hemiretinae, stay ipsilateral-- in each eye, one half crosses and the other does not. So you say, my god, why did this happen? What on earth is going on? And then if you study this, you realize that this is actually one of the few truly logical things that you encounter when you study the brain. Many of the things you study in the brain don't seem to be too logical, because in the course of evolution you have to change things in peculiar, subtle ways to make things work-- you couldn't just redo the whole system from scratch. But this is certainly a logical one, and I'll explain it to you in just a minute. OK, so next we again have the two lateral geniculate nuclei, in the thalamus. These two nuclei have grown tremendously in size and became much more important in visual processing, as did the cortex. This process, by the way, is called encephalization. Primitive animals have a very small cortex; in the course of evolution the cortex grew, and grew, and grew, and became a more and more important structure for analyzing just about anything, and certainly for analyzing vision. Now here we have a cortical area in the posterior part of the brain, as I pointed out to you last time. And what you have is a whole series of visual areas in the cortex. The most central one, I guess, would be area V1-- the so-called primary visual cortex, to which the lateral geniculate projects most profusely. Then we have a whole bunch of other visual areas, as we'll discuss in more detail later on. And then we have several other cortical areas. The cortex-- I think all of you know this-- is divided into four lobes, OK? The frontal, temporal, parietal, and, the rear one, the occipital lobe. All right, now the visual areas that you see in the back here make extensive interconnections with these other lobes of the brain to analyze the visual scene. In addition, we still have-- as we had in more primitive animals-- a superior colliculus, actually two of them, one on each side. And then we have another set of areas called the terminal nuclei and the nucleus of the optic tract, about which we will talk just a little bit, because they specialize in certain visual functions, though ones not nearly as compelling and interesting as the work that has been done on the cortical areas and the superior colliculi. So this is the arrangement. But now we want to understand: why do you have this strange connection here? To understand that, we are going to talk about the so-called horopter, or the Vieth-Muller circle. Now what is that? Some clever fellow came up with this observation: you draw a circle that passes through the nodal points of the eyes, and its diameter depends on where you're fixating. Fixating here, in this case, the red area, which is your left visual hemifield, impinges on the nasal retina of the left eye and the temporal retina of the other eye. So if you have an object here, it hits corresponding retinal elements in the two eyes. And that is a basic rule: anywhere along this horopter, a spot will activate corresponding regions in the two eyes.
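The crossing rule just described can be stated as a small sketch, following the color-coded diagram above; the eye and field labels are the only inputs.

```python
# Minimal sketch of the primate hemidecussation rule described above.

def hemiretina(eye, field_side):
    """Which half of this eye's retina sees the given visual hemifield?"""
    # The optics reverse the image: the left hemifield lands on the nasal
    # retina of the left eye and the temporal retina of the right eye.
    if eye == "left":
        return "nasal" if field_side == "left" else "temporal"
    return "temporal" if field_side == "left" else "nasal"

def target_hemisphere(eye, field_side):
    # Nasal fibers cross at the chiasm; temporal fibers stay ipsilateral.
    crosses = hemiretina(eye, field_side) == "nasal"
    opposite = {"left": "right", "right": "left"}
    return opposite[eye] if crosses else eye

# Both eyes' views of the LEFT visual field end up in the RIGHT hemisphere:
for eye in ("left", "right"):
    print(eye, "eye ->", target_hemisphere(eye, "left"), "hemisphere")
```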
And then those corresponding points project to the same location in the lateral geniculate nucleus, in the superior colliculus, and in the visual cortex. So the logic of this arrangement is to have the two eyes send their input to corresponding points in the brain. That way, you can see something uniform and clear, rather than seeing double. So that is the arrangement that you have in primates. Now, just to anticipate a little bit: whenever images fall inside or outside the circle, they fall on non-corresponding points, and that has something to do with how we process depth-- I'll keep you curious about that for the time being. The mechanism involved is called stereopsis; I mentioned it very briefly last time. OK, so that summarizes the basic connections of the system. What we are going to do next is go to the retina and look in more detail at how the retina is constructed. And then we will go in successive steps to higher areas, sometimes going back and forth, to try to understand visual processing in general. All right, so now let's begin by looking at the eye. Now I don't expect you to know all these names here, but you should remember, of course, the lens and the cornea. You know those already. You know the iris, right? What's the iris? AUDIENCE: [INAUDIBLE] PROFESSOR: OK, now what do we base the color of the eye on, when you say somebody has blue eyes or somebody has brown eyes? It's based on what the coloring of the iris is, all right? Now, what does the iris do? The iris can get bigger and smaller to control the size of the opening that lets light through to the lens. All right? It's much like what you have in cameras, where you have the so-called f-stop, right? A large number, say f/16 on a camera-- that's what you set it to, if you do it manually, when it's bright out there. But when it's dim-- like with this camera here, where we have the lens wide open because it's pretty dark in here-- you use a low f-stop, like f/4 or even lower, to allow more light to come in, more photons to enter the eye. Now next-- here's the lens. And the fascinating thing about the lens is that in the eye, it works totally differently from the way it does in a camera. Maybe by now many of you don't even know the details of a camera, but one of the details, of course, is that to bring an image into focus, you have to adjust the distance of the lens from the film onto which the image projects, OK? So the closer an image, the further out the lens has to go. Now, if that were the case in animals and humans, you would have a super-bulging eye: when you looked at something closely, the eye would bulge out like that. And that would be really disturbing. So that is not the case, because what happened instead is something very clever that you don't see in cameras, which is that you have this lens here, which-- at least in younger people-- is an organ that can vary in thickness. You have here some muscles, the ciliary muscles, which adjust the thickness of this lens to allow it to focus properly on the retina. And here the retina runs around against the inner wall of the eye-- you all know that, of course. To explain how this is done, then-- very, very crudely-- here we have it. If you have an object really near, you want the lens to be thick to properly focus the image on the retina.
And if the image is far, you have to thin the lens to get the proper focus on the retina. So that's how the lens works, at least when you're reasonably young. But then when you get older, the lens becomes progressively stiffer and stiffer. And because of that, you have to start wearing glasses. Now, in the olden days, they used to have lenses which came in two parts-- bifocals. If you looked in the upper part, you could see at a distance; if you looked in the lower part, you could see close up. Nowadays, they have graduated lenses, so if you look in the upper part, you can see far, and if you want to look [INAUDIBLE] look down below, and then you can see that in focus. So once you get older, the lenses in your eyes don't adjust as much-- still a little bit. But the proof that this adjustment is not indispensable-- thank god, now that we have these graded lenses and bifocals-- comes from cataracts, which a lot of people develop in old age, and sometimes not even in old age; in India, for example, there are a lot of young kids who have cataracts. Now what are cataracts? Cataracts interfere with the passage of light through the lens system, and things become unclear, because all kinds of blockage occurs in the lens itself. So how is that corrected? The way that's corrected nowadays is what is called cataract surgery. What they do is literally remove the lens from the eye and put in a fixed lens made out of plastic or glass. And then, obviously, since there's no adjustment, you do have to wear glasses to look at things far away and close up-- and most of the time, those would be graduated lenses. And so people who have had cataract surgery can see extremely well, certainly a lot better than when they had to deal with lenses that had cataracts. All right. Now there's one other point I want to make about the eye before we move on-- maybe two points. One is about this part of the eye-- what you have here, your so-called iris, OK? And what do you call what's in the center of the iris? AUDIENCE: Pupil. PROFESSOR: Pupil, very good. Now what is the color of the pupil? In most people, it's black, right? Why is it black? How come the pupil is black? Any thoughts about that? OK, well, let me leave this unanswered for just a few minutes and move on. This I've shown you before, which is another amazing thing about the eye-- namely that in the fovea, you have something like 200,000 cones per square millimeter-- 200,000. As you go five degrees out, it drops tenfold, to 20,000. And when you go 10 degrees out, it drops by half again. Because of this very high density-- I've mentioned this before-- you have high acuity in the fovea. And that is one of the prime reasons you have to move your eyes around: so that you can see things in fine detail. All right, so now let's take another step and look at the retina itself. And let me first of all point out that I have reversed the image to have it right-side up. The light actually comes in from the bottom, and goes through all these cells until it hits the photoreceptors. Here are the photoreceptors. And closest to the wall of the eye, we have what is called the pigment epithelium. OK, so what's the pigment epithelium?
The pigment epithelium is a single layer of cells that are pigmented to such a degree that when photons come into the eye and reach it, they get absorbed-- just as a black surface absorbs photons and doesn't reflect them. So because of that, there's no reflection back out. And that explains why the pupil looks black: the light that comes into the eye doesn't reflect back out. Now there's a good reason for that, because if light were to reflect back out, it would scatter, and it would activate many more photoreceptors than it should, and because of that, you would get blurred vision. Now how do we know this? Can you think of any species of animal where the pigment epithelium is not black? AUDIENCE: In cats. PROFESSOR: OK, the cat. There are some animals-- mostly animals that specialize in nocturnal vision-- that have a pigment epithelium which actually has purposefully reflecting molecules in it. In them, this layer is called the tapetum. And the purpose of that is to improve the ability to see: the photons activate the photoreceptors not only on the way in, but also when they bounce back. Now the tapetum is arranged in such a way, in most animals, that it doesn't freely scatter the light, but reflects it in a fairly directional manner. And because of that, these animals can see pretty well. Now I'm sure that many of you have noticed that when you drive at night in deserted areas of the United States-- which are decreasing, of course-- you can sometimes see deer on the highway, or rabbits for that matter. And when you see them, it looks like they have two flashlights projecting at you. It's almost scary. But it also helps you avoid hitting them. That's because they have that reflecting tapetum, all right? Let's look at a different example. What kind of human can you think of who does not have a black pigment epithelium-- or, for that matter, some animals as well? Yes? AUDIENCE: Albino. PROFESSOR: Very good-- albinos. Now what are albinos? Albinos are individuals, or animals, who lack pigment in their skin and, of course, in their retinae. So if you look at the eye of an albino, their pupil is actually not black. Their pupil is-- AUDIENCE: Red. PROFESSOR: Reddish, very good. Why is it reddish? Because in the retina, and especially around the outside here, you have thousands of blood vessels. And since the blood vessels carry blood, when light reflects from them it takes on a reddish color, thereby endowing, if you will, the pupil with a reddish color. Now, the proof that the main function of the pigment epithelium is to absorb the incoming light, to prevent scattering, lies in the fact that albinos, who lack this pigmentation, have very poor vision. They often have vision of 20/200 or so, because the light scatters from the epithelium-- which is no longer pigmented in these people-- and activates many photoreceptors, thereby reducing the ability to see things clearly and sharply. All right, so now let's go through this retina here. First of all, we have the rods and the cones. Now this brings me to a very interesting story-- at least for me it's an interesting story. Back in the 19th century, there was a person whose name was Max Schultze. I'm trying to remember when he lived-- he lived from 1825 to 1874. He developed some new anatomical procedures.
And as I mentioned before, whenever new techniques emerge or are invented, it's likely that you're going to make some interesting new discoveries. And so he was the first person, looking at the retina and at the photoreceptors, to say that there are two different kinds of photoreceptors-- the rods and the cones. The rods are, as the name implies, rod-like. And the cones are, as the name implies, cone-like. So he looked at that and looked at that, and asked: why on earth would we have, in most animals, rods and cones? What is their function? This was a totally new discovery. When he first published it, nobody believed him. They said, oh, they just vary randomly, and some are more rod-like and some are more cone-like, but they're all the same photoreceptors. He, of course, did not accept that. He saw, by being very careful about it, that there were definitely two different classes, just as in this picture. And so he began to ask the question: why on earth do we have this? So imagine yourself in the 19th century, coming up with this incredible discovery and trying to figure out why we have them. Now if you're a really good scientist, you start to think analytically. So what did he do? I'm impressed by it to this day. He said, well, let me take a closer look at the retina and see how the rods and the cones are distributed on the retinal surface. And he discovered-- to go back to the previous slide-- that there is a small region, the fovea, where there are only cones. There are no rods. And I'll show you a better picture of that in a minute. And he said, my god, if that's the case, there must be something different about the way we see in the fovea compared to the areas where there are lots of rods. So what did he discover? He discovered that in the fovea we don't see too well at night. On the basis of that, he concluded that the rods are for night vision. Now this was absolutely incredible. There's only one sad part in this story, which is that this discovery was made in the 19th century, before the Nobel Prize was initiated-- because had the Nobel Prize existed when he made this discovery, there's no question that he would have received it. It's a discovery of such incredible magnitude. All right, so let's go back here, then. I pointed out the rods, the cones and the pigment epithelium. Now let's go through and I'll tell you about the rest of the elements. This time, and also in several subsequent sessions, we'll talk about some of these elements in much more detail, especially the retinal ganglion cells. Anyway, when you come to the next stage here, we have the so-called horizontal cells and bipolar cells. It's been established that the photoreceptors connect with both of these. And then the bipolar cells connect into the inner portion of the retina, as I'll show you in a minute. Now this region here is called the OPL-- easy to remember, the outer plexiform layer. Then, when you get into the deeper parts of the retina, you have the so-called amacrine cells and ganglion cells, whose processes meet in the inner plexiform layer. Now it is the ganglion cells that project out of the eye and then, as we have shown, to various regions in the brain-- most notably, for today, the lateral geniculate nucleus. Now let me also point out some numbers at this stage. Here is an enlarged, highly simplified view of rods.
Now, the rods are constructed in an incredibly complex way. They have these individual so-called disks. There are about 1,000 of them in each rod-- I only have about 24 or 25 here, I think. And within each of these disks, you have about 10,000 rhodopsin molecules. There are several kinds of so-called opsins: rhodopsin is the one you see in rods, and there are different kinds of opsins in cones. These are the photosensitive molecules in your receptors, so that when a photon comes in and hits one of these molecules, it changes shape. The simplest way of thinking about this is to say that these rhodopsin molecules come in two different shapes-- roughly speaking, open and closed, if you're describing the molecular shape. And the easiest way to think of them is as bleached or unbleached. So there are millions of these molecules-- what is that, 10 million in each rod. And do you know how many rods there are in the average eye? About 120 million, along with about five or six million cones. We're talking about unbelievable numbers; it's just hard to comprehend. But at any rate, the molecules have two different states-- bleached and unbleached. What that means is that when a molecule shifts from unbleached to bleached as a result of a photon hitting it, that changes the overall degree of de- or hyperpolarization of the cell-- I'm talking, of course, about the combined effect of hundreds of molecules, typically. And that will then determine whether the cell is going to communicate to the next elements below, as I'll describe in much more detail in just a minute. But it gives you an idea of how unbelievably complex this is. Yet another complexity is this: you have this huge number of rods, but if you think about it, will the disks in each of these rods persist for a lifetime-- an average lifetime of 80 or 90 years? Do the same disks sit in the eye the whole time? Well, the answer is no. What happens, amazingly, is that there is yet another process that replaces these disks in the rods step by step, usually a few disks every few days. So it's a constant, dynamic process keeping your rods, if not your whole body, young, so to speak. All right, so anyway, this gives you a crude sense of the complexity of the system. But now we are going to shift over and talk about the retinal ganglion cells. And then we'll come back and talk in a bit more detail about the preretinal ganglionic elements. Here is a set of pictures created by a wonderful scientist called Polyak, showing you different classes-- types, if you will-- of retinal ganglion cells. The cells in the top row, because of their small size and their small dendritic arbors, have been named midget cells. Whereas the ones below-- some of them, at least-- are much, much bigger and have much larger dendritic arbors, as you can see here; these have been called parasol cells. Why they've been called parasol cells I'll explain in just a minute. Now this set of pictures was created with the Golgi stain that I mentioned before, which was invented by Golgi and used extensively by Cajal. And for the incredible work they had done, a lot of it on the retina, they received the Nobel Prize in 1906.
So what this disclosed, then-- as an initial view-- was that there are different classes of retinal ganglion cells. And so the big question came up: why do we have different classes of retinal ganglion cells like that? What do they do? Well, initially some people resisted the idea-- which is good; I think one should be resistant to change and new ideas until they are really well proven. Some people argued that [INAUDIBLE] just a continuum, but then it became evident that these are indeed different classes, and I will present some evidence to prove that. There are actually quite a number of different classes of retinal ganglion cells that perform different jobs, and we're going to try to understand what those different jobs are. We're going to concentrate mostly on the midget and parasol cells, because they are by far the most numerous in the retina, and make a major, major contribution to overall vision. All right, so now let's look at the physiology of retinal ganglion cells. Now that we know we have these different kinds of ganglion cells, how do we find out what they do? What was recognized in the early part of the 20th century was that to find out what these cells do, one needs to study their activity. And to study their activity, what was developed after many different trials was to either record from individual axons, or to use a microelectrode that you could put into the eye, and record from individual cells and see what they do. Well, the first person who did this was Keffer Hartline. And for the beautiful work he did, he received the Nobel Prize in 1967. Now, he was a tremendous surgeon. What he did-- in primitive animals like frogs-- was to take the optic nerve, dissect a single fiber from it, hook it up, and record electrically from it. And then he would shine a light into the eye and move it around until this cell generated action potentials. That happened in just a very small region because, as I've mentioned, each ganglion cell in the retina sees only a teeny little portion of the visual field. So that's how he did his experiments. And he made some really remarkable discoveries, which were then further elaborated using slightly different methods. One slightly different method, first of all, was to put a microelectrode into the eye-- I've mentioned that to you before. The initial microelectrode was a very fine tube of glass which was heated and then pulled until the tip was only about a micron in diameter, which enabled you to record from individual cells in the retina or, for that matter, anywhere in the brain. Another change in methods was that instead of shining a light directly into the eye, people used reflected light, because in the world, most of what we see is reflected light. Of course, nowadays this has changed a little, because we have computers and TV, which are set up in such a way as to mimic reflected light, if you will. But in the natural world, for the most part, we see reflected light. All right, so in this initial work that Hartline did, for which he received the Nobel Prize, he discovered three major classes of cells on the basis of these recordings, which he called, first of all, on cells; secondly, off cells; and thirdly, on-off cells. Now the on cells were those which, when you shone light into their so-called receptive field area, would discharge vigorously.
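Hartline's three-way classification can be stated compactly; this is just the classification rule, nothing more.

```python
# Hartline's three classes, stated as a tiny rule. The inputs are whether
# a cell discharges at light onset and/or at light offset.
def classify_cell(fires_at_onset: bool, fires_at_offset: bool) -> str:
    if fires_at_onset and fires_at_offset:
        return "on-off cell"
    if fires_at_onset:
        return "on cell"
    if fires_at_offset:
        return "off cell"
    return "unresponsive"

print(classify_cell(True, False))   # on cell
print(classify_cell(False, True))   # off cell
print(classify_cell(True, True))    # on-off cell
```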
In the recordings, you turn the light on here, and you can see the on cell fires. An off cell, on the other hand, fired when you turned the light off with this method. And on-off cells were those that fired both at the onset and at the termination of the light. That's why they were called, and are still called to this day, on, off, and on-off cells. So that was an incredible discovery, and it set off a major evolution in the study of the visual system. All right, so now, doing this kind of work in more detail, using mostly reflected light, several other major discoveries were made. Stephen Kuffler-- a fellow at Harvard in his later years, originally at Johns Hopkins-- did experiments carefully studying the receptive field structure of these cells. He would record, in this case with microelectrodes, from a single retinal ganglion cell, and then see how it responded when he fiddled around with the light in the receptive field. So imagine that the eye is fixed, looking out. You put an electrode, in this case, into the eye, and then you can use lights-- maybe from a projector-- and move them around to find out where the receptive field is. And then you can present a small spot of light, make it a big spot, make it different colors, and so on, and so on. When this was done, a remarkable discovery was made by Kuffler: the so-called on and off cells were not just responding to the onset and termination of light. Instead, they responded vigorously to a small spot in the center of the so-called receptive field; but if you used a large spot like that, there was much less of a response. And that was true for the on cells, for the off cells, and for the on-off cells. So you can think of the surround as being inhibitory with respect to the center. And so the big question arose: why on earth did this complex arrangement evolve, of cells that have not only an excitatory center but also an inhibitory surround? I'm going to let you think about this for a while, and eventually I will tell you why we have this organization-- we're going to devote a whole session to the on and off systems, and also a complete session to the midget and parasol cells, and that's when we will discuss it in detail. Now, the important thing as you learn about these things is to be an active thinker. If something comes up, ask: why did this happen? Why did that happen? Why is this? Why is that? When you actively think about it, it becomes interesting. And when the answers come, it becomes insightful and exciting to understand them. OK, so now a series of investigators did what sounds like a fairly simple experiment: they labeled just the cell bodies in the retina, and made what is called a whole mount. What's a whole mount? You take the retina, you flatten it out, and you look at the whole layout through a microscope. The labeling here was a so-called Nissl stain, which stains the cell bodies but doesn't stain the processes-- the dendrites or the axons. What they found, doing this quantitatively, was that they could distinguish three very distinct classes of cells. It turned out later that there are more than three, but even doing it crudely like this, you can see that those big cells-- that one, that one, that one, that one, and so on-- maybe one out of seven or eight in this sample-- are huge cells.
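Kuffler's center-surround finding is often summarized as a difference of Gaussians. Before continuing with the whole-mount populations, here is a minimal sketch of that idea, with all widths and gains assumed.

```python
import math

# Difference-of-Gaussians sketch of the center-surround receptive field.
def gaussian_mass(radius, sigma):
    """Integrated 2-D Gaussian weight inside a centered disk of this radius."""
    return 1.0 - math.exp(-(radius ** 2) / (2.0 * sigma ** 2))

def on_center_response(spot_radius,
                       sigma_center=0.2,      # assumed center width (deg)
                       sigma_surround=0.6,    # assumed surround width (deg)
                       surround_gain=0.9):    # assumed surround strength
    center = gaussian_mass(spot_radius, sigma_center)
    surround = surround_gain * gaussian_mass(spot_radius, sigma_surround)
    return center - surround

for r in (0.1, 0.2, 0.4, 1.0, 2.0):
    print(f"spot radius {r} deg: response {on_center_response(r):+.3f}")
# Strongest for a spot that roughly fills the center; much weaker once the
# spot grows into the inhibitory surround -- Kuffler's observation above.
```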
Back to the whole mount: besides those huge cells, you have some smaller cells, and then you also see some tiny cells. And doing this quantitatively, they argued that these formed distinct, separate populations-- not a continuum. All right, so that, of course, is a basic requirement for proving that these are indeed different classes of cells that probably perform different jobs. So what can you do to prove this further? Well, subsequent to this kind of work, people began to look at another interesting question: how do these retinal ganglion cells send their signals to the central nervous system? It has been discovered, not just in the retina but in many other parts of the human and animal body, that the rapidity with which an axon can send action potentials from one location to the next-- from the cell body, say, to the axon terminals-- very much depends on the size of the cell and the size of the axon. If you have a small cell and a very thin axon, conduction is slower than if you have a big cell and a big axon. So, given that, people argued that you could take the optic nerve, stimulate it electrically at one point, record a little way along it, maybe back towards the retina, and see what kind of overall activation takes place-- not single action potentials, but the overall compound activity. And when that was done, an amazing discovery was made. In this case, you can see we have two major dips here. This is time, OK? Here is the electrical stimulation, and here is the activation of hundreds and hundreds of fibers. And it shows that they form distinct groups-- a very rapidly conducting group, a slower conducting group, and a very slow, more diffuse group. It was subsequently established-- I'll tell you later exactly how-- that the very rapidly conducting axons come from the so-called parasol cells, and the medium conducting ones come from the midget cells, all right? That has now become a solid fact. OK, so now we go back to the anatomy of these cell types. If you look at them at equivalent eccentricities-- one, three, and 5.7 degrees-- here are examples of the midget cell dendritic arbors [INAUDIBLE] looking straight down at the cell. You can see these are very constricted dendritic arbors. By contrast, the parasol cells have much, much larger dendritic arbors-- three times larger in diameter. And now I can tell you, which I deferred before, why these are called parasol cells. If you look at them, what do they look like? An umbrella-- and that, of course, is a parasol. So that's why they are called parasol cells. And these other guys, much less interesting looking, are called midget cells because they're midget-- they're small. All right. So this work, done just 20 years ago, established that you have indeed these two very, very distinct populations of cells. So now the big question came up: why do we have on and off cells? Why do we have midget and parasol cells? As I say, we are going to discuss this in considerable detail, but let me just give you a crude introduction. If you look at the parasol cells-- obviously, because of the huge dendritic arbor-- they have, relatively speaking, much larger receptive fields, three times bigger than those of midget cells. The midget cells are so specific that each cell in the central retina gets the input to its excitatory center from just one single cone. Now what does that mean?
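Before answering that, one more back-of-envelope on the conduction-velocity experiment above: two velocity groups translate directly into two arrival-time "dips". The path length and the velocities here are assumed, illustrative values.

```python
# Two conduction-velocity groups -> two arrival-time "dips" after a single
# electrical shock to the optic nerve. All numbers are illustrative.
distance_mm = 40.0                    # assumed stimulation-to-recording path

velocity_m_per_s = {
    "parasol (fast, thick axons)": 20.0,
    "midget (medium, thin axons)": 8.0,
}

for cell_class, v in velocity_m_per_s.items():
    latency_ms = distance_mm / v      # 1 m/s == 1 mm/ms, so this is in ms
    print(f"{cell_class}: {latency_ms:.1f} ms")
```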
To answer that: what I didn't emphasize before, but I'm sure you all know, is that we have three major classes of cones, which some people call red, green, and blue. And when you say that, the real aficionados say, come on, you can't say that-- you're supposed to say short, medium, and long wavelength sensitive cones. But we'll just call them red, green, and blue. The point, then, is that the center of one class of midget cells gets input only from red cones, another only from green cones, and some from blue cones-- I'll show that in more detail in a minute. So we have these three subtypes. And so you can say that the midget cells, because they're so teeny, should be able to see fine detail; and because of this wiring, they should be able to tell you about color. By contrast, if you look at the parasol cells, they get a mix of inputs, both in the center and in the surround, so it's unlikely that they can tell you about color. We'll examine that in much more detail soon. All right, now the other interesting thing about this system-- which makes you speculate as to why we have it, and we'll elaborate on that-- is that the midget system, when you shine a small spot in the center of the receptive field, gives a fairly sustained response for the brief time the light is on, maybe a second or two, OK? But the parasol system gives you only a burst, like that, instead of a sustained response. So think about that: why would we have two systems which are not only different in size, but also different in the profile of their responses? Once you have this question in your mind, you'll be eager to find out when we talk about it the time after next. OK, so now let's go back to the beginning here and talk about the photoreceptors in just a bit more detail, so you can get a better picture of them. Here, as I mentioned before, in the fovea you have very, very tightly packed cone photoreceptors-- so tightly packed that they're almost hexagonal rather than round. But when you go five degrees out, you see, first of all, that the cones have become much larger, and now there's an intermixture of rods, as you can see. In this region, you can process information at both low and high illumination levels, whereas in the fovea-- as Max Schultze had shown-- you can only see under fairly bright, normal illumination conditions. OK, so if you go even further out in the retina-- this is a beautiful, three-dimensional-looking picture-- it shows the rods and the cones in the periphery. There are many, many more rods per unit area, and the cones have become even bigger. So obviously, the cones' ability to process fine detail has been degraded as a result. Now again, as I've mentioned before, we have in humans about 120 million rods and five million cones in each retina. OK, so now if you look at the overall distribution-- this is not too important-- you can see that as you go away from the fovea, there's a very rapid decline in the number of cones; this is the hashed line. And there are no rods in the fovea, but they increase rapidly in number, and then fall off. So that's the overall distribution. And therefore, you can see fine detail in the center, and you're most sensitive for night vision not in the far periphery, but in the mid section, anywhere between 20 and 40 degrees from the fovea. Let me remind you, because I'm sure you know all this already, of a few facts, OK?
I will just write down a couple of these things. First of all, what is a degree of visual angle? That's a good question-- what does it correspond to on the retinal surface? Let me give you an interesting mnemonic: if you stick out your arm like that and look at your thumbnail, the thumbnail imprints about one degree of visual angle on the retinal surface. OK? Now we are going to talk a lot about very small measurements. And I mentioned already that the tip of a microelectrode is about one micron. What is a micron, anybody know? Yes. AUDIENCE: 10 to the negative 6 [INAUDIBLE]. PROFESSOR: OK, one micron is one-thousandth of a millimeter, OK? Now, since you're going to deal a lot with millimeters-- that's the prevailing convention-- you also need to know the relationship between inches and metric units. So take one inch-- you all know what an inch is, I guess; roughly, if you look at your thumbnail again, it's maybe a little bit under an inch. Now, 1 inch equals 2.54 millimeters. So that's a very useful conversion to remember. Whenever you look at various things and try to understand size, you may need to make that conversion, especially if you have always thought about things in terms of inches. Yes. AUDIENCE: I think you meant to write centimeters. PROFESSOR: What? AUDIENCE: Did you mean to write centimeters? PROFESSOR: You're right-- 2.54 centimeters, which is 25.4 millimeters. Very good. Thank you for pointing that out. OK, so now what we're going to do next is move on, and I'll tell you about another clever experiment. These people here, McCrane et al., developed a method which enabled them to selectively label the blue cones in the retina, and this shows a picture of that. Only about one out of eight cones-- shown in black here, in both the fovea and the perifovea-- is blue; all the rest are red and green cones, OK? Now let's get specific about the facts here. One degree-- roughly one degree, which I pointed out is like a thumbnail-- comprises 200 microns on the retinal surface. The intercone distance in the fovea is only 2.4 microns. And per square millimeter-- I already mentioned this-- you have 200,000 cones; go out five degrees, and it drops tenfold. And one thing I haven't mentioned before: if you look at a 12-point letter I at 23 centimeters, it activates about 80 cones. So that gives you a handle on the nature of the activation there. And I have already said several times that each rod has 1,000 disks, and each disk has 10,000 molecules. And as I've just mentioned, one out of eight cones is blue; the others are fairly equal in number, though that varies from person to person.
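Those figures can be cross-checked against one another; in this sketch, the only assumed value is the physical height of the 12-point letter.

```python
import math

# Cross-checking the retinal-geometry numbers quoted above.
MICRONS_PER_DEGREE = 200.0            # from the lecture
INTERCONE_UM = 2.4                    # from the lecture
print(MICRONS_PER_DEGREE / INTERCONE_UM)   # ~83 foveal cones across 1 degree

letter_height_mm = 2.3                # assumed height of a 12-point "I"
distance_mm = 230.0                   # 23 cm viewing distance, as quoted
angle_deg = 2 * math.degrees(math.atan(letter_height_mm / (2 * distance_mm)))
print(round(angle_deg, 2), "deg")     # ~0.57 deg, i.e. ~115 microns tall --
                                      # tens of cones, the same order as the
                                      # "about 80 cones" quoted above

# Rhodopsin bookkeeping: disks per rod times molecules per disk.
print(1_000 * 10_000)                 # 10,000,000 molecules per rod
print(1_000 * 10_000 * 120_000_000)   # ~1.2e15 across one eye's rods
```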
And then her procedure enabled her to label, using a monoclonal antibody, those cells that had been activated by Fiat Lux. OK, so this is, then, the actual retina, a flat-mount retina, showing you what had been activated. Now that's remarkable, beautiful. And she came and gave a talk here at MIT maybe about 15 years ago. And when she presented this picture, one of the students-- it wasn't any of you, I guess-- raised her hand and was very upset-- said, but, but, but-- and so Ursula said, yes? So what do you think this listener asked? Well, this was a very active thinker. This person said, but, but, but, was this image right-side up on the retinal surface? That was the question. And Ursula Drager said, no, it was upside down. I just rotated it 180 degrees so you can see it. All right, well, interestingly enough, there were times when this became an issue-- first of all, before people knew about lenses and that lenses turn images upside down, and secondly, because many people didn't believe that the images were upside down on the retina. I mean, if images are upside down on the retina, they're upside down in your brain, and yet the world is right-side up. What's going on? So I would say about eight or ten years ago, I was asked to review a paper for a high-visibility journal-- I can't name the journal-- in which the investigator very, very vociferously proclaimed that we have all been wrong, and that the images are right-side up on the retina after all. So what was his proof? His proof was that he took an ox eye, a slaughtered ox's eye, cut out a little piece in the back of the eye, put a translucent piece of paper over it, and then presented a stimulus out there. And when he looked at the back of the eye, at the translucent part, the image was right-side up-- a little bit blurred, but right-side up. And so he felt he had proven that images are right-side up on the retina after all. Well, I was flabbergasted. And it took me quite a while to figure out why this guy got what he got, because he obviously got what he got. So can you guys think of any reason for this? How come he got what he got? AUDIENCE: Wait, what did he do? PROFESSOR: He took the eye, OK, and in the back, he cut out an opening and put a translucent sheet over it, so that whatever came in there, you could see what would normally have fallen on the retinal surface. And it was a clever experiment-- no question about that. So what was wrong with the experiment? Well, I'll tell you. What was wrong with the experiment goes back to what I told you before about the way the lens works, right? The lens works so that when you look at something close, the muscles around the lens make the lens fat. And when the muscles relax, the lens becomes thin, OK? So that means-- just to be graphic about it-- that if you have a thick lens, you have a short focal length. And if you have a thin lens, you have a much longer focal length, like that, OK? So now if you look at the back of the eye here, where the photoreceptors are: with the proper focal length, the image would be upside down. But with the longer focal length, at that same distance the image would be blurred, and it would remain right-side up, just as it is with a magnifying glass, OK? So what this guy didn't know is how the muscle in the eye works to control the lens. And when the animal was killed, the lens was no longer at the proper focal depth. And therefore, the image was right-side up. So therefore, guess what happened?
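The ox-eye argument can be made quantitative with the thin-lens equation. The sketch below is only a caricature-- the eye's optics are a multi-element system, and the numbers (a 17 mm lens-to-retina distance, a relaxed 30 mm focal length) are hypothetical, chosen just to show why a lens that is too thin puts the crossing point of the rays behind the retina, leaving an unfocused, non-inverted pattern at the retinal plane.

```python
# Thin-lens relation: 1/f = 1/d_o + 1/d_i, so d_i = 1 / (1/f - 1/d_o).
def image_distance_mm(f_mm, d_o_mm):
    return 1.0 / (1.0 / f_mm - 1.0 / d_o_mm)

RETINA_MM = 17.0  # hypothetical lens-to-retina distance

for f in (17.0, 30.0):                   # accommodated vs. too-thin (dead) lens
    d_i = image_distance_mm(f, 1000.0)   # object 1 meter away
    side = "at" if d_i <= RETINA_MM + 0.5 else "well behind"
    print(f"f = {f:.0f} mm -> rays cross {d_i:.1f} mm back ({side} the retina)")

# With f ~ 17 mm the rays cross right at the retina: a sharp, inverted image.
# With f ~ 30 mm they would cross ~31 mm back, so at the 17 mm plane the rays
# have not yet crossed -- a blurred, still right-side-up pattern.
```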
I wrote back to this high-visibility journal saying that this guy is wrong for this reason. OK, so that was certainly an interesting experience, and that's how I managed to learn a little bit about how the lens works in the eye-- which is a remarkable arrangement, since it's so totally different from what we are familiar with when it comes to cameras. OK, so now let's go to the next step, one step down from the photoreceptors. And we come to the so-called bipolar cells. Here's a series of pictures of bipolar cells, in this case from the cat. And what was discovered is that the bipolar cells connect with the photoreceptors in the outer plexiform layer. And in the inner plexiform layer, their axonal projections are such that the cells terminating in the upper sublayer here, sublamina a, are OFF cells. And the ones that project below this are the so-called ON cells. So there's a separation of the so-called ON and OFF cells at the bipolar cell level in the manner in which they connect. And so therefore, those ganglion cells that connect with these are OFF ganglion cells, and those that connect with these are ON ganglion cells. All right, so that's basically the bipolar cells. And then we come to the horizontal cells. Here's a beautiful example of a Procion yellow-labeled horizontal cell. And if that is then analyzed in detail, it looks something like this. This particular cell has 15 connections with the photoreceptors. And these cells-- I might as well anticipate; we'll talk about this in more detail later-- play a central role in providing the surround inhibition that I described to you for the retinal ganglion cells. Then if we proceed to the amacrine cells-- and again, things are so complicated-- there are numerous amacrine cells of different types, and they have been studied extensively-- there are more than 20. And here's a clever experiment I want to point out to you. What you can do here, if you look at the so-called cholinergic amacrine cells-- that's one class-- is selectively label them, because they're cholinergic. And here's an example. All those lighter blue ones are the cholinergic cells. And then what you can do is put this under a microscope. And then you can lower a microelectrode into one of these cells, just like I've done here. And if you look at this closely-- this is not the best of pictures; this is a Procion yellow-labeled amacrine cell-- you can see it has all these processes. So then if you look at this under the microscope at high magnification, you can draw these out. And this is what they look like, OK? So this is a cholinergic cell. This is another type, called AII, which has a lot to do with rods. And then there is yet another class, the dopaminergic amacrine cells. I'm not going to go into detail about them. But you can see that scientific methods now enable you to specify what each of these amacrine cells is like. And on the basis of that, people have come up with ideas about what the amacrine system does. OK, now we come to the crux of things in the last few minutes. We're going to talk about the electrical responses in the retina. Now this is going to be, again, fairly complicated. This work-- again, remarkable work which, in my view, merits a Nobel Prize, though it has not yet been awarded-- was done by John Dowling at Harvard. And let me just introduce this by telling you how they went about this experiment. They realized that the cells in the retina are very, very small, and so it's very difficult to record from them intracellularly.
You have to do that if you want to be able to label them anatomically. And so they looked around for a species of animal that has large cells in the retina. And they discovered, after a lot of searching, that the so-called mudpuppy-- Necturus maculosus-- is an animal that has unusually large retinal cells. Then they developed a technique of removing the eye from this animal and putting it into a dish, because that makes it very stable. And then they were able to put an electrode in there, define what the cells did functionally by shining light on them and looking at the activity, and then label the cell by injecting a labeling substance. So that's what they did. And so here is a description of that arrangement. Here you have an inverted mudpuppy retina, and here you have a DC recording electrode. And what you do then is look at it on an oscilloscope. And at this point, the cell is entered by the DC electrode. Cells are negatively charged inside with respect to the outside, usually by about 70 millivolts-- in this case, I have 50 millivolts labeled. And so once you're inside the cell, you see a sudden drop to a minus level. And then if the cell discharges an action potential, you see that, of course, on the oscilloscope. And then you can study this cell in all kinds of detail as to how it responds, what it responds to, and so on, to determine what its characteristics are. So then when they did that, they made the following central discoveries. They discovered, first of all, that the receptors all hyperpolarize to light. Now this is the opposite of what you would have thought, because of the principle of the way cells operate-- and I'm sure you know all this, but I will describe it in a bit more detail in a minute-- when cells hyperpolarize, they are less likely to release neurotransmitter. If they depolarize, they will increase the likelihood of neurotransmitter being released. And by the way, the neurotransmitter for these receptors is glutamate. I should also emphasize here that the receptors never give action potentials. They only give graded potentials. That's also true for the horizontal cells and the bipolar cells. They found that all the receptors hyperpolarize to light, meaning that they activate subsequent elements in the retina when it gets darker out there, rather than when it gets lighter-- which is the opposite of what we all would have expected. Same thing for the horizontal cells. But then when you come to the bipolar cells, suddenly you find that there are two different classes-- one class which mimics what the receptors do, the other class which does the opposite. And so of these two types of bipolar cells, this type has sign-conserving synapses, and this type has sign-inverting synapses. Now amacrine cells come in various types, and some give action potentials. And ganglion cells all give action potentials. So to go through this again: all receptors hyperpolarize to light. Horizontal cells hyperpolarize to light. About half of the bipolar cells hyperpolarize and half depolarize. This is accomplished-- by the way, I'll go into this in more detail later-- by virtue of having different kinds of neurotransmitter receptor sites. And I'll talk about that in a lot of detail when we talk about the ON and OFF cells. OK, and then we have the amacrine cells. Some hyperpolarize, some depolarize, and some give action potentials. And all ganglion cells give action potentials.
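Here is a toy sketch of the response-sign logic just described-- not a biophysical model, just bookkeeping of which way each element swings relative to rest. The millivolt values are arbitrary illustrations; only the signs matter.

```python
# Toy sketch of the Dowling/Werblin-style response signs described above.
# Real responses are graded and nonlinear; here we track only the sign
# of each element's response to a light increment.

REST_MV = -70.0  # typical intracellular resting potential

def photoreceptor(light_on):
    # Photoreceptors HYPERpolarize to light (more negative than rest).
    return REST_MV - 10.0 if light_on else REST_MV + 10.0

def bipolar(receptor_mv, sign_inverting):
    # OFF bipolars have sign-conserving synapses; ON bipolars sign-inverting.
    deviation = receptor_mv - REST_MV
    return REST_MV - deviation if sign_inverting else REST_MV + deviation

for light in (True, False):
    r = photoreceptor(light)
    off_bp = bipolar(r, sign_inverting=False)
    on_bp = bipolar(r, sign_inverting=True)
    print(f"light={light}: receptor {r:.0f} mV, "
          f"OFF bipolar {off_bp:.0f} mV, ON bipolar {on_bp:.0f} mV")

# With light ON: the receptor hyperpolarizes, the OFF bipolar follows it,
# and the ON bipolar depolarizes -- which is why ON ganglion cells
# downstream fire action potentials at light onset.
```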
So that is a major, major discovery-- not only major, but totally unexpected from what one would have thought. All right, so now let me just explain this in a bit more detail. If you record inside a cell, then when this cell is activated, you can get either an excitatory postsynaptic potential, an EPSP, which moves the membrane potential from minus toward plus, or an IPSP, an inhibitory postsynaptic potential. And this discovery was made by Eccles-- a remarkable discovery for which he received the Nobel Prize some years back. All right, so now if we look at those cells which give action potentials, what happens is that when you get an EPSP-- that's why it's called an excitatory postsynaptic potential-- you increase the likelihood that an action potential will be generated, which will send its signal down the axon to the next station. All right, so that's the basic process. So again, to reiterate: photoreceptors hyperpolarize to light. Therefore, glutamate is released when there's a decrease in illumination-- the opposite of what anyone would have expected. All right, so that, I think, brings us to what I wanted to cover today. I'll wait until next time to talk about the lateral geniculate nucleus, which is very brief. But let me go through the photoreceptor basics again. All photoreceptors hyperpolarize to light. Depolarization of the photoreceptor releases glutamate. Photon absorption by the photopigment results in isomerization of the chromophore from 11-cis to all-trans. This is what I told you to think about in a very simple fashion-- to say that the photoreceptor molecule-- rhodopsin in this case-- is either bleached or unbleached, OK? Once it's bleached, further light has no effect on it. And at any point, given whatever the level of illumination is, a certain percentage of these rhodopsin molecules are bleached and a certain percentage are unbleached. The darker it is, the more are unbleached. All right, then we have these two classes that we talked about-- the ON and the OFF cells, which we will discuss in some detail. The synaptic junction of OFF bipolars is sign-conserving, as I just said, and that of ON bipolars is sign-inverting. Now the ON bipolar receptor-- I should mention that now, and I will deal with it in detail-- is called mGluR6. Its activation leads to the closing of channels, causing hyperpolarization. That's the basic layout, then, of the photoreceptors. And I will now stop at this point. Next time, we'll talk a little bit about the lateral geniculate nucleus, and then we're going to move on to talk about the cortex. Then if you look at your assignment sheet, you can see that on September 16th, we're going to talk about the ON and OFF channels. And then we'll deal in more detail with these rather complex things which are pretty hard to remember, about hyperpolarization, depolarization, glutamate, glutamate receptor sites, and so on. OK, thank you.
MIT_904_Sensory_Systems_Fall_2013
8_Form_perception.txt
PROFESSOR: All right, so this brief reminder here is about the basic requirements in this course, which are that we have two written reports. And that's the main thing I want to remind you of-- one in vision and one in audition. And I presume all of you have the syllabus that was handed out. If you don't have it, we can provide you, of course, with a copy. On the second page of the syllabus, it says written report, vision part, and this report is one that is based on an article published many years ago-- a very influential article, back in 1967, about the so-called accessory optic system. And what I would like you to do is to read that article, but then proceed through the internet to see what the most recent discoveries about the so-called accessory optic system have been. And that's what your report needs to be about. You can be reasonably brief about it, so you don't need to write 50 pages. You can write this whole thing up in four or five pages, taking into account, as I've said, what has been discovered since those days about the so-called accessory optic system. The reason I thought this would be a good assignment is that it will give you a good historical sense of how knowledge has expanded, in this case since the 1960s, in uncovering yet another attribute of the visual system-- one that starts with a specialized set of cells in the retina and then proceeds through a special series of steps until it connects with the oculomotor system. So anyway, that's the task. And that report will count for 10% of your total grade. And the same thing, 10%, holds for audition. Then the midterm exam is going to count for 25% of your grade. And the final exam is a total of 55%, of which 15 is vision and 40 is audition. So if you add them all up, you have an equal distribution in percentages for vision and for audition. So that, then, is the basic layout. And so what I would like you to start thinking about is how you're going to put this report together. I was hoping that perhaps you could get it done by the midterm. But I don't have a hard and fast rule about that. As long as you get it in before the final exam, that's fine. But it may make life easier if you start working on it while we are talking about vision, rather than waiting until we are covering the auditory system in the course. Does anybody have any questions about this basic layout? OK. All right, now today we are going to talk about form perception. Let me say one more thing: next time, as you have it on the syllabus, we are going to be covering illusions. I think that'll be fun. You'll see all kinds of interesting illusions and some inferences as to how we think those illusions come about, by virtue of the fact that there are all sorts of interesting rules and facts about how the visual system works. And then during the second half of the next lecture-- this coming Wednesday-- we are going to talk about visual prosthesis. All right, so that in essence is what we are going to cover next time. So now let's get back to today. We are going to talk about form perception.
This is a topic about which, even at this stage, we still have only rather limited knowledge-- how the brain is capable of carrying out the perception of form. And we are going to look at some inferences and some ideas about it. And I will also provide you with a brief historical background, because this has been a topic of tremendous interest for people for centuries-- trying to understand just how we are capable of seeing forms, shapes, patterns. All right, so the first influential idea that emerged about this is called structuralism. And this idea arose in the late 19th century. And one of the prime proponents was a fellow called Titchener. And, in trying to come to grips with how we do this, they had a rather basic idea, much like building a house with bricks. The idea was that perception is an aggregate of simple elements. You just have a whole bunch of elements that you put together, and that generates a more complex percept, OK? The problem with this approach-- well, the first problem was that they began to do experiments. And they asked subjects to try to divorce themselves from their personal impression of anything they looked at, and instead to describe it physically. So for example, if you looked at an apple, how do you get the idea that it's an apple? And so the subject would say, well, it has, say, four colors-- and the person would list four colors-- and several shades. Put all those together, add them all up, and that equals apple. Well, that was the idea, then. And when they did this systematically, the thing became almost ridiculous, because they came up with more than 40,000 elementary sensations-- thinking that somehow you put these elementary sensations together, 40,000 of them, and that gives us a sense of an apple, or a banana, or whatever. Now as soon as this became well known, a lot of opposition arose. And the most famous opposition was that brought about by the Gestalt psychologists. And the first consideration that I want to bring up here is a rather simple picture. Here's a picture-- everybody look at that. OK, what did you see there? You saw a car, right? Now if you look at it-- I mean, this is actually a picture of a car, all right, but it's interrupted by all these vertical bars. And yet we're able to infer that it's a car. And so the question comes up, how do you do this? I mean, you can't simply add up a bunch of bricks and say it's a car, because there's all this interruption. And most things that we see in the world, we see in a discontinuous fashion. And as a result of this, the Gestalt psychologists-- this happened in the 1920s-- began to think about this whole problem in a very different way. And they had a tremendous influence. And they came up with new ideas of how we create the perception of form. The founder of Gestalt psychology is a fellow called Max Wertheimer-- you can look him up in Wikipedia, by the way. If you look at their work more closely, they came up with a few basic principles of organization. And they called this grouping. They argued that in the brain, somehow, we group things. And one of the ways we group has to do with proximity. Another has to do with similarity, another with common motion, another with closure, and another with figure-ground perception. This last one I'm going to talk about at the end of today's presentation. But now I'm going to give you an example of the top two.
All right, so the most important conclusion that they came to, which was very much against structuralism, is that the whole that you see is different from the sum of its parts. Somehow, there is an active process that creates our ability to see something that is not evident from its individual parts. Now here is an example of these principles. Grouping by proximity: here we have a bunch of dots. If you put them closer together vertically, you see a bunch of vertical lines, essentially. And if you put them closer together horizontally, you see a bunch of horizontal lines. Yeah? So we group things by proximity. Another way we group things is shown here: we group things according to shape-- similarity of shape, I should really say. And here, you can readily see a group of nine disks here and a group of nine triangles. And if they're not all the same, as down here, then you have much more difficulty grouping them. So there's a strong tendency for us to group together things that are similar. So these are some general principles of how we organize our visual percepts. So now we can look at some more of these examples. In doing this, we are now going to try to say a bit more about the brain itself. And in doing so, it's useful to lay out the three major theories that try to deal with how the brain does it. According to the first of these, form perception is accomplished by neurons that respond selectively to line segments of different orientations. Now that theory, obviously, is the outgrowth of what we talked about when it was discovered that in V1, we have orientation-selective neurons. And this orientation selectivity is something that you see in several progressively higher visual areas-- as if everything in the world out there were broken down into oriented line segments, which are then somehow put together according to their orientation to enable us to see shapes. So that's one theory. Another theory is that form perception is accomplished by spatial mapping of the visual scene onto the visual cortex. And I will elaborate on each of these. And the third theory is that form perception is accomplished by virtue of Fourier analysis. So let us first turn to form perception supposedly based on breaking down the visual scene into oriented line segments. Now, one of the big problems with this arises when you look at a famous kind of artwork that you see every day in the Wall Street Journal. There was an artist who created this style originally-- he created faces using only dots and varied the spatial frequency of the dots, as you can see here. This happens to be a person called Larry Poons. I'm sure none of you have heard of him. But here was one of the pictures many years ago in the Wall Street Journal. And even today, as I say, you can see faces created this way. And you can readily recognize them, even though there are no oriented line segments here. And in fact, if you squint and close your eyes three-quarters of the way, so you can't even see the dots anymore, you can still make out that face very, very clearly. That's because the spatial frequency content here is very important, as is the degree of shading-- meaning that this involves rather low spatial frequency analysis, rather than the analysis of particular orientations of line segments. Now this particular analysis has its counterpart in an observation that many of you have seen.
When you look at a person's face on television, and they want to prevent you from being able to see that face because it's confidential or something, what do they do? Anybody remember? What they do is put up a bunch of squares, where each square is filled with the mean illumination level of that region of the actual face. When you do that, the high-frequency information at the edges of the squares interferes with your ability to analyze the face. So to give you an example of this-- I'm sure all of you have seen this, but let me show you anyway. Here we have this example. How many of you can tell who these people are here? I guess nobody can, right? Now what we can do is increase the spatial frequency of these squares. Here it is. And it's still very difficult to tell. And now I'm going to show you the actual photograph. And what I'm curious about is how many of you actually know these two people. I can tell you right away, they're extremely famous actors. Who recognizes these people? How many of you recognize them? Let me see your hands. Only a few people recognize them. That's how quickly time goes by. These were among the most central, most exciting, best known people in the movies. This is Humphrey Bogart, and this is Ingrid Bergman. OK, so these are two very famous people whose faces were obscured by this and by this, using these high-contrast edges that interfere with your ability to smoothly analyze faces. So that suggests that this idea that we extensively use oriented line segments to analyze faces is, at the least, an insufficient explanation of how the brain processes shapes. Now another idea is so-called topographic mapping. This idea I'm sure you can already reject on the basis of what I told you about how the visual system is laid out. But now I'm going to belabor it so that you can follow it closely. Here we have a runner, of course. And here we have a monkey brain, to make it easy to understand. Once it was discovered that the visual field is laid out topographically in the visual cortex, the idea was-- and maybe I am unfair to poke fun at it-- that the mind can look at the image created on the cortical surface there, and thereby identify it. OK, it's almost like looking at a photograph. In this case the mind looks at the photograph, so to speak, on the cortical surface. So what they thought at the time is that this is what you have there, OK? That's the imprint of this image. And indeed, you say, oh my goodness, that's just like that. Therefore, I can recognize that person, and so on and so on. Well, I mean, that's a cute idea. But then you have to take into account-- and this was discovered subsequently to these ideas-- that the topographic layout in the visual cortex is actually not one to one, because of the magnification factor. So let's look at that in some detail. Here is an actual reconstruction of monkey area V1. And here is the visual field. And we are going to put these red arrows in the contralateral hemifield. Now remember what I told you before: if you look at the visual scene, you can imagine each of your eyes being vertically cut in half, so you have a nasal and a temporal hemiretina. And the input from one hemifield crosses over and gets into this half of the brain, and the other hemifield goes to that half. So now if you do this and take the magnification factor into account, you look at these arrows, which are all identical in size, OK?
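The TV-style face blocking just described is easy to imitate. Below is a minimal numpy sketch of the block-averaging manipulation; the random "face" array is a stand-in for a real photograph, which you would load yourself.

```python
import numpy as np

def block_average(img, block):
    """Replace each block x block tile with its mean luminance --
    the 'confidential face on TV' manipulation described above."""
    h, w = img.shape
    h2, w2 = h - h % block, w - w % block          # crop to a multiple of block
    tiles = img[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    means = tiles.mean(axis=(1, 3))                # mean of every tile
    return np.repeat(np.repeat(means, block, axis=0), block, axis=1)

# Toy usage with a random 'face'. With a real photo, large blocks make the
# face unrecognizable, while blurring the result (squinting) helps again,
# because it removes the spurious high-frequency edges of the squares.
face = np.random.rand(128, 128)
coarse = block_average(face, 16)
print(coarse.shape)  # (128, 128)
```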
There are one, two, three, four, five, six, seven of them. What you see here is that the actual impression on the cortical surface-- the neurons that are being activated by this-- looks like this, nothing like those arrows there. And in fact, the central arrow is much bigger because of the huge magnification factor. Now this already creates a major problem for believing in that particular theory, the topographic mapping theory. But now, if instead you put those arrows halfway across each of the hemifields, like that, then whatever is on this side goes here, and whatever is on this side goes there. And this is the impression that is created. And my goodness, you create that impression, and you say to yourself, my god-- if that's the case, how come I can see seven straight arrows of equal size, when this is the impression that is being created in the brain? So therefore, obviously, the brain is not using a topographic map to analyze the visual scene at all. Now this can be driven home even further. Let me make another point here. Here we have the monkey visual cortex again, from the rear. Here is the visual field laid out. And if we put in a bunch of dots in a circular fashion here, this is the activation there. Now you say, oh, that's not bad. That looks like a circle. But then if you put it along the midline, half and half, then this is the actual activation. But you still see a circle, even though the activation, in terms of the topography, is nothing like a circle. So that indeed created a much, much greater degree of skepticism among investigators trying to understand how we process shape. Now let me make one other point here, which is a wonderful story that I think will be good for you to remember. It's called the Giotto story. When Pope Benedict-- this was around the turn of the 14th century-- set out to have the walls of the great cathedral of Saint Peter in Rome redecorated, he sent out a bunch of messengers to various artists in Italy and asked them to provide some of their best work, so he could evaluate it and pick one artist to actually do the redecoration. Well, one of these messengers went to Giotto di Bondone, a well-known artist, and asked him to provide a painting of his, a drawing of his. And Giotto said, oh, my goodness, I just don't have anything around. But I'll tell you what I'll do. He took out a red pen and drew a perfect circle, OK? And so the messenger took this to the pope. And the pope said, my god, I can't believe how incredible this is. And so Giotto got the job of redecorating the cathedral of Saint Peter. To this day, there is an expression in Tuscany, in Italy, about the round O of Giotto. So this is the round O of Giotto, which somehow denotes perfection-- perfection in sight and perfection in your ability of execution, in terms of a drawing, for example. All right, now to highlight this even further, let me point this out to you here. What we have here is a bunch of imperfect circles, one of which is perfect. So if you keep looking around, you should be able to spot which one it is. And you should tell me what letter denotes that circle. Which one is the perfect circle? AUDIENCE: C. PROFESSOR: Very good, all right. So we are incredibly good at this. There's a slight difference here, OK, or even a slight difference here. And yet we can see this very slight difference, and we can tell what a perfect circle is.
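The unequal cortical footprints of equal-sized arrows can be sketched with a standard caricature of cortical magnification, M(E) = k / (E + E2) millimeters of cortex per degree at eccentricity E. The parameter values below are illustrative assumptions, not measured monkey values; the point is only the tenfold asymmetry.

```python
# Caricature of V1 cortical magnification: M(E) = K / (E + E2) mm/deg.
# K and E2 are hypothetical, illustration-only values.
K, E2 = 15.0, 0.75  # mm, deg

def cortical_extent_mm(e_inner, e_outer, steps=1000):
    """Integrate M(E) from e_inner to e_outer degrees (midpoint rule)."""
    de = (e_outer - e_inner) / steps
    return sum(K / (e_inner + (i + 0.5) * de + E2) * de for i in range(steps))

# Two arrows of identical 1-degree length, one at the fovea and one
# at 10 degrees eccentricity:
print(f"{cortical_extent_mm(0.0, 1.0):.1f} mm of cortex near the fovea")
print(f"{cortical_extent_mm(10.0, 11.0):.1f} mm of cortex at 10 degrees out")
# ~12.7 mm vs ~1.3 mm: same arrow, roughly tenfold different footprint.
```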
I mean, that's incredible, given how the impressions made by those circles land on the visual cortex. So that is an incredible puzzle-- how we are capable of doing this. And I'm afraid that even to this day, we don't have a really good answer for how this happens. OK, now the third theory, which has actually become probably one of the most successful ones, claims that you analyze the visual scene by taking into account the spatial frequencies that impinge on the retina, OK? This was created by Fergus Campbell and John Robson, and it was very influential. They pointed out, first of all, a very interesting finding, which I think I've mentioned once before: if you vary both the spatial frequency and the contrast, we have a sensitivity function like this. I did show you a picture of it last time. And then, furthermore, they showed that you can create all kinds of complex percepts by varying gratings-- simple gratings, compound gratings, and compound gratings with much lower contrast. These are simple gratings of different spatial frequencies. These are compound. And these, as I've said, are lower spatial frequencies. So if you again squint, you will see something much smoother. So they actually carried out a detailed mathematical analysis using Fourier analysis. And what they did was quite remarkable. They would break down a visual scene-- like a photograph of New York with all the skyscrapers, OK-- and transform it using Fourier analysis. And they could recreate the visual scene with a high degree of accuracy using that procedure. Now if indeed the visual system uses this, there are a number of basic logical requirements-- let me come back to that. The basic logical requirements are that you need spatial frequency analysis-- and I've shown you already, when we talked about V1, that neurons there are spatial frequency selective. Secondly, that the neurons are contrast selective, which you know already. And of course, that they are orientation selective and can tell you about phase. As long as you have these four attributes, you can perform a detailed Fourier analysis to reconstruct the visual scene. So now to stress this even further, they did a series of experiments in which they asked the question: is it true that you have particular spatial frequency analyzers, and can you manipulate them? And so they did an experiment called a frequency-specific adaptation experiment. They would present to a subject this display and have the subject look at it for a couple of minutes without having to fixate. And then the subject would look at each of those. And they found that this one, which is the same spatial frequency, the subject had difficulty seeing because of the adaptation. And so then they did a series of careful studies, carried out this analysis, and showed that you could lose your sensitivity to any spatial frequency if you had been pre-exposed to it. So that's what that looked like. And by doing this systematically, they came up with the idea that what you have in the visual cortex is a series of channels that are spatial frequency selective. And I showed you, when we talked about V1, that indeed there are neurons there that are selective to particular spatial frequencies. And they proposed that you have a series of channels like this, OK, that peak at different spatial frequencies.
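One way to get a feel for the Campbell-Robson idea is to keep only the lowest spatial frequencies of an image and reconstruct it from them, as in this minimal numpy sketch. The random array stands in for a photograph, and the cutoff of 8 cycles per image is an arbitrary illustrative choice, a crude stand-in for summing a few low-frequency channels.

```python
import numpy as np

def lowpass_reconstruct(img, cutoff_cycles):
    """Keep only spatial frequencies below the cutoff (in cycles per
    image) and reconstruct -- the 'squinting' version of the scene."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h   # cycles per image, vertical
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w   # cycles per image, horizontal
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    mask = np.hypot(FX, FY) <= cutoff_cycles      # circular low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

scene = np.random.rand(256, 256)        # stand-in for a photograph
smooth = lowpass_reconstruct(scene, 8)  # keep only the 8 lowest cycles/image
print(smooth.shape)
```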
And by activating them selectively, you can reconstruct virtually anything out there in the visual scene using Fourier analysis. Now, OK, I think that is the essence of that theory. And I can tell you that some people avidly believe in it, and some people are highly skeptical. I'm not sure where I stand at this stage. But then people began to study shape-- the ability to see shapes-- by recording in various visual areas. And I want to tell you next about some studies that have been done, primarily in Japan, looking at inferotemporal cortex, which, as I already mentioned, is involved in the analysis of shapes, and in particular in the analysis of faces as well. So what these investigators did is record from individual neurons in inferotemporal cortex. And when they did that, they would present various stimuli to see how those cells responded. And they claimed to find that there was some incredible specificity in inferotemporal cortex for shapes. So here's an example of what they did. Here we have the neuron's responses, histograms, and above each, the particular shape that was presented repeatedly. So this particular cell responded vigorously to this, but very poorly to that, and so on down the line. But you can see that it responded to several different shapes-- quite well to this one, reasonably well to that one-- so a whole range of them. So it wasn't as if this particular neuron responded to only one particular shape. But the idea that you have neurons which are specific for certain shapes did gain a lot of traction among investigators. And some of them indeed thought that you have neurons which are selective to individual elements in the visual scene-- which the critics subsequently referred to as having grandmother cells: somewhere in the brain there's a cell that represents your grandmother, all right? Now people were later disabused of that idea. But it took a very strong hold in many investigators' minds. And here's another example of yet another inferotemporal neuron, in this case again tested with various shapes, showing that this shape elicited a lot of response, as did this one; the rest of them didn't elicit as much of a response. So they tried to systematize these kinds of experiments. And they came up with an idea which, at least to my mind, borders on the absurd, which is shown here. Here is a section of inferotemporal cortex. And they argue that there are columns there, and that these columns represent different percepts. So here we have a bunch of columns that process monkey faces-- this is a monkey, of course. And by inference, therefore, there must be some areas in inferotemporal cortex in humans that process human faces. And then others process different aspects of the visual scene. Now the big technical problem with this approach is that when you record from individual neurons, you can typically study a single neuron for a fairly limited time-- not for months on end, just maybe a few hours or something. And because of that, you can only present a limited number of visual stimuli. Now there are millions of visual stimuli out there. And so to really establish how specific this particular neuron is-- well, these are the only shapes we've shown it. What if you used a cross, or, who knows, many, many other things-- something that is three-dimensional? How would these cells respond?
And in fact these cells respond, to different degrees, to hundreds and hundreds and hundreds of different stimuli. And the real fact, then, is that any time a stimulus appears out there, you're activating tens of thousands of neurons, maybe even more, in the visual system. And each of those neurons, especially in inferotemporal cortex, responds to a different degree to different stimuli. It is the compendium of these many, many neurons, firing to different degrees to different stimuli, that gives you the overall computational ability to say what that stimulus is. Now to be able to analyze that-- it is that complicated-- really takes a lot of effort. Some people are now trying to do this. And the way you try to do it is to record from hundreds and hundreds of neurons with multiple electrodes, present various scenes, and see how these neurons respond as an aggregate, so that you can determine whether or not there is indeed the potential for some sort of computation that yields the impression of a particular individual face. Now one of the very important facts here is, of course, that as you look around the visual scene-- let's imagine you're looking at a face. You're looking at a face head on. You're looking at a face in profile. You're looking at a face tilted. You're looking at a face close. You're looking at a face far. And it's still all the same face. And yet the impressions that face makes on the visual system-- in the retina, in the geniculate, in the visual cortex-- vary a great deal. And yet we can come up with constancy, which is a higher-level process. And I'll come to that in short order. All right, so now we need to start to talk about what we call intermediate-level vision. What is intermediate-level vision? So far, we've talked almost exclusively about basic visual capacities-- color, brightness, pattern, texture, motion, depth-- just the very basic types. But when we talk about intermediate visual capacities, we talk about constancy. One example of constancy, of course, is that the face that you're looking at, whether in profile, or head on, or near, or far, is still the same face. We get constancy out of it. Then there's an important necessity to be able to select various aspects out of the visual scene: to be able to recognize things, to handle transposition and invariance, to be able to make comparisons, and lastly, to be able to say where things are in space. So those are the intermediate capacities. And we'll talk a little bit about them next. All right, so here's an example of constancy. What we have here is the word doubt written many times, in different sizes and orientations, some handwritten, and so on. And yet all of them say the same thing: doubt. We extract that, OK? Now to make this even more difficult, I'm going to tell you that one of these words here is not doubt, and let's see who can find the word that is not doubt. Everybody found it yet? OK, right there-- not doubt but doubts. And so we are able to extract the difference in this kind of visual scene, even though it is incredibly subtle. And yet we are also able to say that this doubt, and this doubt, and this doubt are all the same, even though they're different in size or different in print. We can extract the common element from all of that. So this is an incredible capacity on the part of the visual system. And of course, the big question comes up: how is this achieved?
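The point about reading identity out of many broadly tuned neurons, rather than out of single grandmother cells, can be caricatured in a few lines. Everything below is simulated-- the tuning values and noise level are arbitrary assumptions-- but it shows how a simple population readout can identify a stimulus even when every neuron responds to every object.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_objects = 200, 5

# Each simulated 'IT neuron' responds to EVERY object, just to different
# degrees -- broad tuning, as in the recordings described above.
tuning = rng.uniform(0.2, 1.0, size=(n_objects, n_neurons))

def population_response(obj):
    # Noisy population vector for one presentation of object `obj`.
    return tuning[obj] + rng.normal(0.0, 0.15, size=n_neurons)

def decode(response):
    # Nearest-centroid readout: which tuning profile best matches?
    return int(np.argmin(np.linalg.norm(tuning - response, axis=1)))

correct = sum(decode(population_response(int(o))) == int(o)
              for o in rng.integers(0, n_objects, size=1000))
print(f"decoded {correct / 10:.1f}% correct from broadly tuned neurons")
```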
And at this stage, we only have very preliminary answers to that. OK, so now I want to show you a couple of other cute things here. This is another famous artist, Hirschfeld. He is no longer with us, unfortunately. He published hundreds of these little cartoons in The New Yorker, all right? Now, you see his name here, Hirschfeld. What do you see at the end of his name? What's that-- what is this? What, that? Huh? A number three. Why does it say Hirschfeld three? His name isn't Hirschfeld three. And if you've seen several of these-- how many of you have seen these before? Oh, you missed out on something very good. At least look it up sometime on the internet. Just type in Hirschfeld, and some of these will come up. So here he writes three. And some of his pictures may have a one at the end. Some may have a five or a six. And you say, well, what on earth is that? Anybody know? Yes. AUDIENCE: Doesn't he have other names hidden in the images? So maybe that's the [INAUDIBLE]. PROFESSOR: Ah ha, you're thinking well. OK, that's getting there. So he's telling us that there are three of something in this picture. That's what he's telling us. OK. So what is it that there are three of in this picture? Ah ha, anybody want to come up with it? AUDIENCE: The line orientation? PROFESSOR: Huh? AUDIENCE: Like the orientation of the line? PROFESSOR: Orientation? No. Mhm. OK, I'll give you a clue. Hirschfeld had a daughter-- AUDIENCE: Nina. PROFESSOR: --of whom he was very fond. Hey, fantastic, Nina. OK. So he had a daughter, Nina. And he decided in this display to put her name up there. And when he wrote three, it meant he put her name up there three times. Now who can find all three of them? AUDIENCE: [INAUDIBLE] the arm [INAUDIBLE]. PROFESSOR: OK, here's one. Here is another one. OK, and here is the third one, in the arm. And so that's one of the fun things you can do. Back when he was doing this actively, every time he had a cartoon like this, I would spend some time trying to see where the Ninas were. [LAUGHTER] So that, I think, is quite remarkable, and further highlights the complexities and the remarkable nature of our visual system. OK, so now I'm showing you yet another picture. In this case, again, you have to be knowledgeable about something. Does anybody know who this is? Oh, all right. Well, let me see if I make it smaller. Can you see it now? Who is that? OK, that's Voltaire. All right. Now Voltaire, 18th century, is a guy who outdid all of us: he wrote 2,000 books. Can you believe that? The guy wrote 2,000 books. But that's not the prime point here. Most of us who know a little bit about history immediately recognize him: oh yeah, that's Voltaire. But then you go back here and look at this more closely-- and let me tell you what a remarkable artist the person who did this was: Salvador Dali. This, by the way, is his wife, OK? And so what I'm going to do here is blow up the center portion of this figure. There it is. Can you see this? This is two faces, two nuns, and this is their outfit. So there's nothing there that is really Voltaire. But he's such a remarkable artist that he could create Voltaire by playing around with this, OK? So let me go back to the display again. OK, so this is Salvador Dali, certainly a remarkable artist. He has many paintings in which he plays this kind of double, confusing game with your ability to see.
So because of that, for people like me who are interested in how we see things, his artwork is certainly very intriguing. And you may enjoy looking it up again on the internet. OK, so now I'm going to tell you about yet another interesting aspect of art related to this. And this one has to do with a book, a very clever book, that was written by David Hockney. Has anybody ever heard of him? I'm not surprised, since he's not that well known. But the book he wrote is called Secret Knowledge. Now what was that all about? Well, he analyzed how artists created paintings way, way, way back when, and in subsequent years up to the present. And now I'm going to show you some of his pictures and give you a sense of what this is all about. Here is a picture, OK, by Masolino da Panicale from 1425. This was a typical kind of picture back then. Artists had a very poor sense of how to create depth in a painting-- that was before they came up with the vanishing point. And so this looks flat and fairly expressionless. This was in 1425. And then Hockney noticed that just five years later, another artist came up with a picture-- just five years later-- that looks almost like a photograph. And he said, what on earth has happened? And that's why the book is called Secret Knowledge. So what happened? Anybody know? Ah, all right. Well, let me tell you what happened. What happened is that at that time, in the 1400s, they came up with the lens, OK? And so they created a device called the camera obscura. Does anybody know what the camera obscura is? It's essentially very similar to a camera, OK? So here we have it. Here's a camera obscura. And what these artists did is work in a darkened chamber-- so it would be dark inside-- and they would put the lens here. And they would put some object out here that they wanted to make a painting of. And that would be projected onto a piece of canvas here. Of course, it would be upside down, right? And then they would paint it. And once it was painted, they could turn it around, finish it, and sell it. Now, the prime reason they did this is that it was much, much, much quicker to create a portrait, for example, by using this procedure than to actually look at a person and paint them on the canvas while looking at them. OK. So that was done. And of course, this was sort of a no-no. And because of that, it was kept secret. All the artists who did this-- and some very famous artists did so-- were very careful never to disclose to the public that they did this kind of thing, because it was perceived to be a kind of cheating, OK? So what happened then was that all these paintings were created. And here's an example by van Eyck from 1436. Again, this looks much like a photograph, though some things are a bit distorted. So Hockney undertook a careful, detailed analysis of how we could tell whether a painting was painted freehand or was one that used the camera obscura method-- which is also a real painting in a sense, but a painting made using the camera obscura. And he came up with a series of criteria, which are listed in the book. But I'm only going to deal with one of them. OK, so here is an example. This is from 1597. This is by Caravaggio, OK? And what is notable about this person? Well, to Hockney, what was notable is that this person is holding the wine glass in his left hand. Yeah? And he said, huh, that's curious. And then he looked at a whole bunch of other paintings.
And he had one in which there were three people in the painting, and all three of them were left handed. Yeah. And he said, my god, that is really curious. And he said, well, let me analyze what happens when you do this kind of thing with the camera obscura method. So here's the example of this. This is the original image, he claims, meaning the person is right handed, not left handed. Then you put this person through the lens and project him up here, upside down, like that, OK? He's painted on the canvas like that. And then what you do is rotate the canvas 180 degrees. And when you do that, lo and behold, the person becomes left handed. And to make this clear, I added the F here. This is a normal F. This is what is projected onto the canvas. And this is when you rotate it 180 degrees. So you reverse the handedness-- according to Hockney, you reverse the image. And so it was a dead giveaway: most of the people who appeared left handed in these paintings-- not just Caravaggio's, but several other artists' as well-- indicated that the painter had used the camera obscura method. All right, so that is the process. And then I want to show you one more example of this. OK, this is also a very famous painting, all right? This is the so-called marriage of Giovanni Arnolfini. And this is by van Eyck again, from 1434-- way, way, way back then. Now this is a famous painting. And there's one interesting thing, first of all, that I want to point out. You see this here? That's proof that in those days they had come up with the lens, yeah? Now what is wrong with this picture? This guy who is about to marry this woman is holding her with his left hand. I mean, that's unacceptable. You're supposed to hold her with the right hand, yeah? Now the reason he's holding her with the left hand, according to Hockney, is that the artist, van Eyck, used the camera obscura method to paint this picture, OK? And then it was rotated 180 degrees, and he became left handed as a result. Now go back to this: if you take this guy here and create the picture, but then, instead of rotating it, you could flip it over, he would remain right handed. But of course, we can't do that, because it's on a canvas; it's not on a transparency. So that, then, is the interesting story of artwork that was created using the camera obscura system, which further highlights the amazingly complicated manner in which we can analyze the visual scene for shapes. All right, now another factor in a similar vein has to do with the recognition of faces. Lots of experiments have been done, including several in our department here, showing that facial recognition-- recognition being an unfortunate use of the term-- depends very heavily on seeing faces right-side up. When faces are upside down, you have great difficulty telling who is who, OK? So let's do that. Here are a bunch of faces. And I bet you can tell this one, right? Who is it? And who is this? Very good. And those two you don't know. The problem is that I am going to flip this over now, and you still don't know who those two are. OK, so this one is Norbert Wiener. Now all of you know about Norbert Wiener. He is one of the great geniuses of our time-- he founded the field of cybernetics. He used to be a professor at MIT. And this here is Chuck Vest. Anybody recognize him?
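Hockney's handedness argument is easy to check with a toy array. One assumption to flag: a pure in-plane 180-degree rotation never changes handedness, so a single mirror flip has to enter somewhere in the optical chain; following the account above, it is taken here as a top-to-bottom flip of the projected image relative to the finished viewing side.

```python
import numpy as np

# A tiny asymmetric 'F', as in the lecture's example (1 = ink, 0 = blank).
F = np.array([[1, 1, 1],
              [1, 0, 0],
              [1, 1, 0],
              [1, 0, 0]])

# Assumed projection step: the image lands flipped top-to-bottom.
projected = np.flipud(F)

# The painter then turns the finished canvas 180 degrees in its plane:
righted = np.rot90(projected, 2)

# The result is a pure left-right mirror of the original -- the 'F' reads
# backwards, and a right hand becomes a left hand:
print(np.array_equal(righted, np.fliplr(F)))  # True
```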
Chuck Vest was president of MIT for what-- eight years, I believe? Maybe more, 12 years? I forget. And he has sunk into obscurity, even though he was incredibly visible for many, many years. If I had shown this to you, say, eight or ten years ago, you would have immediately recognized him, because at that time he was the president of MIT. So that is an interesting fact: even though we are capable of using intermediate-level vision in a very sophisticated way, it does not seem to work that well for upside-down faces. You guys did pretty well with those, but it takes a while. If I had flashed that in a tachistoscope, you wouldn't have had the vaguest idea of who those people were. But once it's on for a while and you can analyze it, you can eventually identify even an upside-down face. OK, now in the same vein-- we are talking about our ability to process shapes based on contours and whatnot-- one of the interesting sets of experiments people have done is to look at what are called subjective contours. So let me give you a couple of examples of that. Here's one. Almost instantly, here you can see a disk, and here you can see a square rotated 45 degrees-- a diamond, if you will. But if you analyze this carefully, you can see that about 80% of this border here is not a border. There's no border here. There's no border here, here, here, here, here. And yet we see a square. So there's some remarkable ability on the part of the visual system to complete inferred borders. All right, I'll come back to that in just a minute. Now another example of this is shown here. Can you make out what it says here? If you look a little bit, you should be able to see it. What is it? Visual system, very good. It was difficult to see. But as soon as I turn this into color, you have no trouble at all. And that highlights another important reason why color vision is so useful and has evolved: it enables us to see borders where, under certain lighting conditions, borders would not be visible in black and white. OK, now here's another interesting example that also has to do with subjective contours. Here's one with high contrast, where you can readily see a cube. Does everybody see a cube there? All right, but if you look here, it's next to impossible to see it, because here the stimuli are isoluminant. So here you have eliminated luminance contrast, which is a very important aspect of being able to analyze the visual scene in various ways, including in three dimensions. All right, so now here's a very interesting discovery that was made by recording in area V2. The recordings were made to see whether those neurons respond to subjective contours. So here's a receptive field. You take this bar and move it back and forth across it, and you can see it gives a vigorous response. Now you do the same thing, but here you have only a subjective contour, and you move this back and forth across the field. Somehow the information is added up from other areas, so that this cell responds-- not that well, but reasonably well-- to the subjective contour. So it seems that in V2, you can carry out some of these higher-level processes that enable you to complete figures even when they are incomplete like that. OK, here's another example of this. This one is even more dramatic. In this case, we move this bar back and forth across-- that's a vigorous response. And then you do the same thing, but you create a subjective bar here simply by the way these horizontal bars terminate.
And still, the cell responds quite well, as if it were seeing an edge here. So we have this kind of completion in area V2. So this is an initial hint, then, that already in area V2, you begin to process higher-level events, among which is the fact that you can complete incomplete contours. All right, so now the next thing we're going to turn to is the question: when we deal with these so-called intermediate-level visual capacities, what happens to your ability to use them when you take out areas such as V4 and MT? So let me describe how one does experiments like this. First of all, monkeys are trained-- this is done with monkeys, of course, because we can't just take a human and remove V4 or MT. And what you do here is first present a fixation spot. Once the monkey fixates, you present a shape, in this case a square. And then you present a whole bunch of other ones, only one of which is the same as the original. And the monkey has to make a saccade to it to be rewarded. So he has to be able to detect identity, which is an intermediate visual capacity. So that's what this is like. And then what you can do is vary the amount of information you provide, again by reducing the amount of contour information. You can do this in several ways. But first-- I'll come to that in a second-- let's just see how the monkey does with those very similar shapes I just showed you after you take out area V4. What you find is that initially, after you take out area V4, the monkey can't even do the regular task, which is this one, right? So this is identical to this one in this case. And when you do that, he does very poorly to begin with, and then gradually, over many, many days, improves a great deal. Then if you put a new figure in, it takes a while for him to learn that, all right. Now then-- and here I come to this business of having these same figures-- you can vary them so that the monkey has to do a transposition, an intermediate visual task. In this case, what you do is vary the size. Here, it's identical, just as it was. Here, you see that this one is smaller than this. But the monkey says, oh, I have to go find the circle. I'm not looking for identity; I'm looking for something that has the same shape. And here we have an even bigger change-- this is a triangle. And so the monkey makes a saccade to that one here, to this one there. So he has to do a transposition in size. Another thing you can do, as we talked about, is vary the amount of contour information by doing this kind of occlusion. And the degree of occlusion you can vary by varying the spatial frequency of the display. And lastly, you can also decrease or increase the amount of contour information that you provide directly. Now if you do this, you get a huge effect after a V4 lesion. This was the normal condition; this, with the varied object size; this is the occlusion; and here is the varied contour information. You can see that there is quite a dramatic loss-- not total, but quite notable-- in the monkey's ability to perform this task. And this is also reflected in the huge increase in the latencies with which the monkey performs the task. So area V4 seems to play an important role in these intermediate visual capacities. And I'll come to some more examples of that in just a minute.
Now, yet another important factor in analyzing the visual scene occurs when it is our task to find something out there that is less noticeable. Remember, we talked a little bit about camouflage, in which case you have to find something lesser in order to survive. Now in this case, what we can do is a similar experiment in a monkey. On the left side, the target, the one the monkey's supposed to select, has a much higher contrast than the distractors. And you can vary the degree of difference, but the target is always brighter than the others, so that it stands out. But then, you must be equally able, whether you're an animal or a human, to pull out something that's lesser in the visual field. And you have to be able to do this if you are going to survive in your environment. Now here, then, the task is to go to this lesser stimulus, because it's the odd stimulus. So what you're extracting is the odd stimulus. Here it's easy to extract, and here it's difficult to extract. And so now the question is, what happens in the visual cortex? What area plays a role in this? And it was discovered that area V4 is very important for this. And let me explain this to you. You do an experiment in which you remove area V4 and see what happens. And here what we have is the case when the target gets brighter; you vary the luminance difference. And you can see there's a mild deficit with the V4 lesion -- highly significant, but still fairly mild. On the other hand, when you make the target dimmer, the monkey is practically performing at chance. He cannot do the task at all. So somehow, V4 plays a very important role in being able to ferret out subtle things in the environment, lesser things. All right, so this then brings us to yet another way of analyzing this. We can, in this case, go to the larger target, or in this case to the smaller target. And you can recognize yourself that this is certainly easier than that. But a normal monkey can do both of these quite well, as shown here. This is the normal monkey's performance, here with the target larger, here with the target smaller. He does extremely well. But with a V4 lesion, when the target is smaller, performance is totally devastated, as if area V4 were involved in the analysis of subtle things, things which are lesser, rather than in being reflex-like and going to the brightest, biggest thing in the world. OK, so that then indeed highlights the fact that we have some of these areas, including V4 and, of course, MT, that are involved in these much more subtle types of visual analyses that we need to perform. Now, capitalizing on these kinds of subtle things that we are capable of doing, artists, in addition to the ones that I've shown you before, also created all kinds of percepts -- I should say paintings, sketches, percepts -- that cause confusion by playing around with these factors. One of these is a very well known artist called Escher. He was born near the end of the 19th century. And here what we have is sort of a figure-ground confusion. Here what we have is a bunch of birds that fly to the right, and also a bunch of birds that fly to the left. And so it's confusing. It's alternating. You don't know which one is which. And it has to do with a very, very clever creation of figure-ground confusions. And actually, next time, when we talk about illusions, I will bring in some more of these kinds of curious effects. And then here's another one. And it's very hard for you to tell: are the stairs going up? Are they going down? What's going on?
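The distinction between a reflex-like strategy and genuine extraction of the odd stimulus can be sketched in a few lines; the luminance values and the comparison rule here are illustrative assumptions only.

```python
from statistics import median

def pick_odd(luminances):
    """Select the item whose luminance deviates most from the rest --
    works whether the target is brighter or dimmer."""
    m = median(luminances)
    return max(range(len(luminances)), key=lambda i: abs(luminances[i] - m))

def pick_brightest(luminances):
    """A reflex-like strategy: always go to the most salient item."""
    return max(range(len(luminances)), key=lambda i: luminances[i])

display = [0.5, 0.5, 0.2, 0.5]   # the odd target (index 2) is dimmer
print(pick_odd(display))         # 2: extracts the "lesser" stimulus
print(pick_brightest(display))   # 0: misses it, much as a V4-lesioned monkey does
```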
It's the same kind of play within the paintings to create confusion in your perceptions. And then here is yet another one, where you don't know: is the water running up, or is the water running down here? That's again an interesting confusion that Escher has created. All right, so that then is the essence of what I wanted to cover today, to highlight the fact that our ability to extract shape information in the world is absolutely incredible. And it has not only triggered experiments by scientists to try to understand it, but it has also given rise to a tremendous amount of artwork that played with these effects, because it was so enjoyable. And certainly, the several paintings that I have shown you must have given you a pretty good sense of how artists capitalize on our limitations in extracting intermediate and high level vision capacities from the visual scene. All right, so to summarize, then: first of all, I mentioned three major theories that have to do with form processing. One, the orientation of line segments; and I pointed out to you that that theory is not particularly powerful. Then the topographic theory, which turned out to be bordering on the ridiculous. And finally, Fourier analysis, which seems to have a lot of power, even though it has also created a lot of skeptics. All right, then I pointed out that areas V2 and V4 and the inferotemporal cortex play important roles in intermediate vision, these intermediate visual capacities we talked about in detail. Then I pointed out to you also that in V2, it was discovered that there are some neurons that respond to subjective contours, indicating that already in area V2 we can perform these higher level operations that extract information when it is unclear in the visual scene. Then I pointed out to you that recognition of objects transformed in various ways is compromised by V4 and inferotemporal lesions. And V4 lesions also produce major deficits in learning and in selecting lesser stimuli, which is a very important attribute for us: to be able not to respond in reflex-like fashion to what is most obvious out there, but to be able to extract the subtle things from the visual scene. Some inferotemporal neurons are selective for objects, including faces. But most respond to a variety of objects, whose recognition is based on the differential activity of a great many neurons. OK, so that then brings me to the last point, which is that how we process and deal with ambiguities in perception unfortunately still remains a mystery. And so there's a lot of space here for new investigators to come up with exciting, interesting new findings about how the brain performs these kinds of subtle analyses of the visual scene. OK, I think I'll leave this until next time. I'll talk about this next time. OK, does anybody have any questions? All right, I hope that you can take a little bit of time out, look at some of these artists on the internet, and look at Hirschfeld and the Ninas that I had shown you.
4. The ON and OFF channels
PROFESSOR: All right, so today the topic we are going to discuss is the ON and OFF channels in the retina. I will provide you with a brief overview again of what we have covered before about this, and then we will go on and look at it in some depth. Now before I start on that, let me just very briefly mention again the syllabus, which all of you, I presume, have. If any one of you doesn't have a syllabus, we can provide one for you. What I would like to encourage all of you, again, is to do the so-called preparatory reading that appears for each session, which will make it easier for you to follow and, consequently, also to memorize the basic facts that we are going to be dealing with. All right, so let me get started with the so-called ON and OFF channels. And what I would like to do first of all is to tell you what the major questions are that we are going to pose in dealing with this very -- to me at least -- very interesting question of the ON and OFF channels. The first prime question we're going to ask is, why did the ON and OFF channels evolve? What is their function? As I've already indicated before, it is rather complicated to have created these two channels, and therefore there must have been a great deal of evolutionary pressure to accomplish this. So that's the prime question we're going to ask. Now more specifically, we're going to ask how the rods and the cones are involved in the creation of the ON and OFF channels. And then we're going to ask how the ON and OFF channels contribute to the center-surround organization of retinal ganglion cells, because this has been a big issue among many investigators over the past 20 or so years. Then we're going to ask what role these channels play in giving rise to the transforms that we discussed last time in the visual cortex. And lastly, we're going to ask what the consequences are of blocking the ON channel on neuronal activity and on perception. And this last question will turn out to answer several of the questions that we pose as to why we have this curious arrangement of ON and OFF channels in the retina. All right. So now let me first of all talk about the neuronal responses of the ON and OFF retinal ganglion cells. First of all, if you remember, I told you that back in the 1930s, Keffer Hartline was the first to record from single neurons, by dissecting single fibers in the optic nerve. And then what he did -- he shone light into the eye. Subsequently -- especially in the work of Hubel and Wiesel that I talked about last time -- reflected light was used. And currently we mostly use reflected light, by presenting images on a computer monitor facing the animal or the human we are studying. Now when Keffer Hartline did this by shining light into the eye, he noticed that there are some cells which are called ON cells, some cells that are OFF cells, and some cells which are both ON and OFF. Now his idea at that time, using that method, was that ON cells discharge when a stimulus comes on in the visual field, and OFF cells signal its termination. That was his idea, and that's why he called them ON and OFF cells.
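Hartline's classification can be expressed as a simple rule on spike counts. The sketch below is schematic; the window lengths, threshold, and spike trains are made-up values, not his actual procedure.

```python
def classify(spike_times, t_on=0.0, t_off=1.0, window=0.3, threshold=5):
    """Label a cell ON, OFF, or ON/OFF from spikes recorded in windows
    following light onset (t_on) and light offset (t_off)."""
    on_count = sum(t_on <= t < t_on + window for t in spike_times)
    off_count = sum(t_off <= t < t_off + window for t in spike_times)
    if on_count >= threshold and off_count >= threshold:
        return "ON/OFF"
    if on_count >= threshold:
        return "ON"
    if off_count >= threshold:
        return "OFF"
    return "unresponsive"

print(classify([0.01 * k for k in range(10)]))        # burst at onset -> ON
print(classify([1.0 + 0.01 * k for k in range(10)]))  # burst at offset -> OFF
```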
Subsequently, especially when reflected light was used, a different view emerged as to what the reason for having these kinds of cells might be. And this was further enhanced by the beautiful findings of Kuffler, when he noticed that there was center-surround organization in these cells: the so-called ON, OFF, and ON/OFF cells responded vigorously when a light was shone into the center, but responded less when the light was shone into the surround. So then people began to do all kinds of experiments to try to study this in further detail, and one of them was to use reflected light, either by using a projector or by using a color monitor. And when they did that -- I'm going to show this to you schematically -- they found that if you had a certain lit background, like we have here, you could present the stimulus either by virtue of light increment or by virtue of light decrement. And as I mentioned to you before, the visual system is dramatically different from the other senses that we have in that we need to be able to see dark stimuli on a background, like letters on a page, as well as light incremental stimuli. And this idea, if it's depicted schematically, looks like this. An ON cell will fire vigorously when a bright spot appears in the center, and will not fire to a dark spot. If anything, it'll be slightly inhibited. OK? By contrast, when you look at an OFF cell, you get the inverse. It fires vigorously to a dark spot; if you will, it gets excited by the decrement in light illumination. Now if you did the same thing using a much larger spot of light -- that is shown here -- what you see is that the center-surround antagonism that we talked about fully applies to these stimulation conditions as well. Use a large spot, bright or dark, and you see that the overall responses of the cells are much more limited than when you confine the stimulation to the center, only further emphasizing this important fact about center-surround antagonism -- this incredible increasing complexity of the organization of neurons that you see already in the retina. OK, so now we are going to look next at the anatomy of the ON and OFF ganglion cells. And as we look at these various ways of studying the ON and OFF channels, we are gradually going to understand what these two systems are really for. So the first thing is to use a method that I have also already described, namely that you can record intracellularly from a cell, understand its functional responses, and then inject a label, and then process that anatomically to see what the cell looks like. So when this was done, a very interesting new finding emerged, which is depicted here. Here we have the photoreceptors -- just the tail end of them, if you will -- that connect to the bipolar cells. Two bipolar cells are shown. And then they project into the inner plexiform layer, which we often just refer to as the IPL, which is subdivided into two parts -- A and B. And it was shown that the majority of cells arborize in either one or the other of these two sublaminae in the inner plexiform layer. But there were also some cells, not shown here, that had arborizations in both of these laminae, and that anticipates the property of those cells, which are called the ON/OFF cells.
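The center-surround antagonism described above is commonly modeled as a difference of Gaussians. Here is a minimal sketch, with arbitrary parameters, showing that the modeled response grows while a spot fills the center and then shrinks as the spot invades the surround; it illustrates the concept and is not a fit to the recorded cells.

```python
import numpy as np

def dog_response(spot_radius, sigma_c=1.0, sigma_s=3.0, k_s=0.9, grid=201):
    """Integrate a difference-of-Gaussians receptive field over a
    centered disk of the given radius."""
    ax = np.linspace(-10, 10, grid)
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    rf = center - k_s * surround
    spot = (r2 <= spot_radius**2).astype(float)
    return float((rf * spot).sum())

for r in (0.5, 1.0, 2.0, 5.0, 9.0):
    print(f"spot radius {r:>3}: response {dog_response(r):+.3f}")
# The response peaks for a center-sized spot and drops for large spots.
```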
Now furthermore, it was found -- because these cells were labeled after recording -- that the cells that arborized in sublamina A of the inner plexiform layer were so-called OFF cells, and those that arborized in sublamina B, like this one here, and this one here, and this one here, were ON cells. OK? So there was a distinct spatial segregation into the laminae of those retinal ganglion cells that were ON and OFF, and furthermore of those that were ON/OFF, because they had arborizations in both of them. So that was the basic finding. Now when that was discovered, people went on and asked the question, what is the spatial arrangement of these cells on the retinal surface, if you will? And so experiments were done in which cells were labeled, in most cases with the Golgi stain, and then the retina was looked at head on with a microscope. Now the interesting thing about a microscope, which all of you I'm sure know, is that the higher the magnification that you use, the shorter the depth of focus of the microscope. And in fact, what you can do here is, when you look at the retina head on like that, you can focus either in sublamina A or sublamina B. And if you do that, given the results I just described, you know that whatever comes into focus in sublamina A would be OFF cells, and whatever comes into focus in sublamina B would be ON cells. So then what was done by a fellow called Heinz Wässle is he asked the question, what is the spatial arrangement of these two? Do they respect each other, or what? And here is a good example of what that looks like when he did that. Here we have labeled both the ON and OFF cells, using the two depths of focus, and you can see they're kind of helter-skelter. Then if you focus deeper, you focus only on the ON cells. It looks like they're very, very nicely arranged, and that's also true for the OFF cells. And they respect each other. Looking at the dendritic arbors, there's little overlap among the dendrites of the cells. So what this then said is that the ON cells respect each other but don't seem to give a damn about the OFF cells. And the reverse applies to the OFF cells. OK? So it's as if there were two independent entities. So that was a very interesting finding. And that was one of the initial cues saying that these ON and OFF cells do different things, and each is an entity devoted to what it does. All right, so now, to understand this even better, let's go back and take a brief overview of the retinal connections so we can understand how these systems came about. And the first thing to do is to talk about the photoreceptor basics. And I've told you many times -- by now I'm sure all of you know this -- that all photoreceptors hyperpolarize to light. I also told you that when photoreceptors depolarize, that's when they release their neurotransmitter, which is glutamate. And that glutamate acts on both the horizontal cells and the bipolar cells. And then furthermore, just to look at the details -- this you don't need to remember exactly in these words -- photon absorption by the photopigment results in isomerization of the chromophore from 11-cis to all-trans; this causes hyperpolarization, thereby reducing neurotransmitter release. But what you do need to remember is the next step, which says the same thing in much simpler words. Of the two classes of bipolars, the ON and the OFF, the synaptic junction of the OFF bipolars is sign-conserving, and that of the ON bipolars is sign-inverting.
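This sign-conserving versus sign-inverting step can be captured in a two-line model. The sketch below treats each cell's response as a deflection from rest, in made-up units; it is a cartoon of the logic, not a biophysical model.

```python
def photoreceptor(light):
    return -light            # all photoreceptors hyperpolarize to light

def off_bipolar(pr):
    return pr                # sign-conserving synapse: follows the receptor

def on_bipolar(pr):
    return -pr               # sign-inverting synapse (the mGluR6 cascade)

for light in (0.0, 1.0):     # dark versus light
    pr = photoreceptor(light)
    print(f"light={light}: receptor={pr:+.1f}  "
          f"OFF bipolar={off_bipolar(pr):+.1f}  ON bipolar={on_bipolar(pr):+.1f}")
# In the light the ON bipolar depolarizes (+1) while the OFF bipolar
# hyperpolarizes (-1): one single-ended signal becomes double-ended.
```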
And so, as I told you last time, as you hyperpolarize and depolarize the photoreceptor, an OFF bipolar cell does the same thing, like this, but the ON bipolar does the opposite, OK? So you have created a two-ended system -- a double-ended system, if you will -- from the single-ended one that you have in the photoreceptors. Now, yet another fact that is important to remember is that the ON bipolar cell receptor is mGluR6. That is a specific molecule that exists, in most animals, only in the retina, created anew in the course of evolution, indicating what considerable pressure there must have been to have an ON system. And lastly I should add here that the OFF bipolar receptors are mGluR1 and 2, and they follow, pretty much, the way the photoreceptors respond, which means that their activation leads to the opening of channels, causing depolarization. And of course you all know -- I should restate it maybe once more -- that when neurons in the retina that have graded potentials depolarize, neurotransmitter is released. When they hyperpolarize, they do not release it. If anything, they stop releasing it. OK. So now let's look at this wiring again in more graphic detail. Here we have a cone. And the fact is that in the central retina, each cone connects with at least two bipolar cells. So in other words, in the retina there are many, many more bipolars than there are photoreceptors, which is amazing if you think about the numbers. I don't know if you remember the numbers. I told you there are more than 50 million cones, and maybe 150 million rods, in the retina. So the retina is an unbelievably complex structure with millions and millions and millions of cells. All right. So what happens then is you create these ON and OFF bipolar cells -- the ON ones by virtue of a sign-inverting synapse, the OFF ones by virtue of a sign-conserving synapse. And they connect with two basic classes of ganglion cells -- the ON and the OFF. So the ON and OFF signals, in large part, are then sent separately, especially to the lateral geniculate nucleus. But let's not forget that we also have the so-called ON/OFF cells. And many of them actually end up projecting to the colliculus, as we'll talk about later on. So this is the very basic wiring. We can then add here, as I've already said: this is a sign-inverting synapse, and this is a sign-conserving synapse. And then we can add the fact that we have horizontal cells that collect signals going sideways along the retinal surface, thereby creating -- hypothetically speaking, at least -- the surround effect. We will look at that in just a minute in more detail, because many hypotheses have emerged about why we have ON and OFF channels, and one of them was that they exist to create this famous center-surround antagonism that you see in the retinal ganglion cells. I'll have a model for that in a minute. OK, so here we have the ON and OFF systems. And again I want to point out to you that they terminate in different sublaminae of the inner plexiform layer -- the ON in sublamina B and the OFF in sublamina A -- whereas the ON/OFF system looks something like this: the ON/OFF ganglion cell has dendritic arbors in both layers, and connects to both the ON and the OFF like that. So that's how you create these three types of cells, at a very simple level. Now the issue that I just referred to is how this surround mechanism is created.
One hypothesis is that it's created predominantly by virtue of the horizontal cells, as shown in this figure, which is pretty much what you have seen before. So the surround is created by the horizontal cell network. An alternative hypothesis that was popular for a while was that the actual reason we have ON and OFF channels is to create the surround antagonism in retinal ganglion cells, by using this kind of wiring arrangement. I'm not going to go into details about this, because -- to anticipate what I'm going to tell you -- this model does not seem to be correct. And that, of course, is often the case. We have many, many different models, and then one of our tasks is to try to figure out which one is correct. And you know what often happens: if none of your three models is correct, well, there's a fourth one you didn't even think about. So that sometimes happens, especially when you study the brain. All right. So now we are going to move on. This, unless you have read the preparatory material, is something that may puzzle you -- the effects of APB on the responses of neurons in the visual system. So now I'm going to ask you guys, how many of you actually know what APB is? Oh, I see all these hands raised. Oh my goodness. All right. So nobody knows what APB is. That's good. That way you are going to learn something brand new. But if you get a chance and read the preparatory material, then it will be easier for you to remember these many, many facts that I'm going to try to impart to you with each lecture. All right, so APB. APB stands for an artificial molecule called 2-amino-4-phosphonobutyric acid. And to show that to you in some detail, it is shown here -- 2-amino-4-phosphonobutyrate. Butyrate -- "ate," as you know, stands for acid. OK? So this molecule, as I mentioned to you before, was invented, if you will, by Watkins and Evans in England, who are molecular biologists. And their game, as I mentioned, is to create new molecules, many of which are either analogs or antagonists of neurotransmitters. Now glutamate is a neurotransmitter that is discharged by many, many cells in the brain, including your photoreceptors. So this is a variant of that. Now once this molecule was invented -- or created, I should perhaps say -- people began to ask the question, well, what is this variant? What can it do for us? And that was the basic game they all played. You create molecule after molecule after molecule, and then you test them to see whether they can tell you something useful in studying the brain. And it turns out -- I've mentioned this before also -- that most of those molecules you can throw into the wastepaper basket. But a few of them come out and become very, very useful. And this one here -- that's the reason I'm mentioning it to you -- this APB turned out to be a magic bullet. First of all -- you should remember this -- this is a neurotransmitter analog. So what happens is that in a way it acts like glutamate, but it acts really specifically on the neurotransmitter receptor sites of the ON bipolar cells. And what it does is fill the receptor site, which then makes those neurons insensitive to subsequent light stimulation. All right? So this then is a remarkable molecule. And consequently, people began to do all sorts of experiments. And it was discovered that when this substance was applied to the retina, indeed, shining light into the receptive fields of cells failed to elicit a response.
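The way APB silences the ON channel can be added to the toy bipolar model above with a single flag: when the mGluR6 sites are occupied, the ON bipolar no longer follows the light-driven changes in glutamate. Again, the numbers are purely illustrative.

```python
def on_bipolar(pr, apb=False):
    if apb:
        return 0.0           # receptor sites filled by APB: no light response
    return -pr               # normal sign-inverting response

def off_bipolar(pr, apb=False):
    return pr                # OFF receptor sites are not bound by APB

for apb in (False, True):
    pr = -1.0                # photoreceptor hyperpolarized by light
    print(f"APB={apb}: ON bipolar={on_bipolar(pr, apb):+.1f}  "
          f"OFF bipolar={off_bipolar(pr, apb):+.1f}")
# With APB the ON branch goes silent while the OFF branch is untouched.
```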
But by contrast, when a dark stimulus was presented in the receptive field, the cells continued to fire. So that was the basic finding, and so a method was developed to study this in the monkey. And let me show you what the method is, so you can get a sense of what it's like to do experiments like this. The method was to put two tubes into the eye of an anesthetized, paralyzed animal, through one of which you could infuse a substance, and through the other of which the substance could exit. Because if there were no exit and you infused something in here, the eye would swell up, so to speak, and that would kill the cells in the eye. So you've got to circulate it. What you also have to do is treat the vitreous humor -- you know, that's a jelly-like substance -- to make it fluid enough to come out of this tube here as well. So then you could present to the eye a solution that had no effect on anything, because you made that solution with molecular substances similar enough to what's in the vitreous already. OK? And then what you could do is switch over, using this same substance, and add a little bit of APB. Now APB is a very powerful substance, so you need only small amounts of it to have an effect. And then what you did is record from either the lateral geniculate nucleus, in this case, or from the optic nerve fibers, or from the visual cortex. So now you have this magic bullet. First, however, you want to determine: is it true? What was found originally in the mudpuppy -- is it also true in the monkey? If you inject this APB into the eye of a monkey, do you really stop the responses of ON-center cells to light incremental stimulation? And the answer, luckily -- because otherwise I wouldn't be telling you this, of course -- is yes. So here is an example. These are cumulative histograms, OK? You turn the light on, and then turn it off. And you can see here an ON-center cell responds vigorously under normal conditions. You put APB into the eye, and the cell stops responding. And then when the animal recovers from this -- you wash it out, so to speak -- the response returns. By contrast, if we do the same experiment in an OFF-center cell, shown on the right, you see that the cell continues to respond when you put APB into the eye. So this is indeed a magic bullet, which selectively blocks the ON system and has no significant effect on the OFF system. So this then opened the gates to studying what happens when you do these kinds of experiments -- in this case, in monkeys -- to determine what might be the function of these two systems. And this then enabled one to test the various hypotheses, which I had mentioned to you briefly before. The first hypothesis you can test: is it true -- as in that model I showed you -- that it is the interaction between the ON and OFF systems that gives rise to the center-surround antagonism of cells in the retina? That's one thing you can test. So to assess that, let me show you what the experimental procedure is. What you do here -- here is a receptive field of a cell. And in this case -- the recording is taking place in the geniculate -- this particular cell is one that is color selective. It responds best to a small red spot. And if you hit the surround with a different color, the cell is inhibited.
So the way this is done is you shine the light on the center, then you shine the light on the surround, and then, in the OFF cycle, you hit the surround alone. This is shown to you here over time. Here's the center stimulation. You turn it on. Then you hit the surround. Then you turn it off. The center stays on, and during the OFF cycle, you hit the surround. So this shows the exact manner in which you turn these on and off. All right. So now the big question is, how do ON- and OFF-center cells respond to this stimulus arrangement? Say you have an ON cell, which would discharge to this. Then the response should decrease because of the surround inhibition. And you'll see the normal response. And the question now is, what's going to happen if you put in the APB? Well, there are a number of possibilities. One is that nothing's going to happen. That's unlikely, because I already showed you that APB blocks the ON response of the center. The other possibility is that you lose the center-surround antagonism, which would prove that the ON and OFF systems play a significant role in creating center-surround antagonism. All right. So those are some of the possibilities. And of course, a third possibility is that the OFF cells also lose their surround antagonism, although their responses to the center are unaffected. So let's take a look at that. And here are the data under normal conditions. This again is the stimulation arrangement, lined up, but you can see here there's a huge response when you hit the center with a red spot, and then, when you hit the surround, there's a huge inhibition. When you remove that, the response returns. And then during the OFF cycle, there's some spontaneous activity, but the surround activation stops even that. So there's very strong center-surround antagonism. All right. So now I want you to guess for a minute. What do you think is going to happen when we inject APB into the eye? Think about it for a minute, and I'll tell you. I listed some of the alternatives, and you can have a hypothesis of your own about what you think might happen. All right, let me show you what really happened, OK? When this happened, actually, the experimenter thought, my god, we lost the cell while we were recording. But then, subsequently, when the APB was washed out, you got the same response as before. So the cell wasn't lost, but the APB was so effective that it stopped the response of the cell completely, both to the center and to the surround. And now let's ask the question, what happens when you do this, not to an ON-center cell, but to an OFF-center cell? Think about it again for a minute and ask what you think would happen. OK, so here it is. Here is an OFF-center ganglion cell. And it shows here, using the same stimulus conditions, that in this case, because this is an OFF cell, adding the surround decreases the degree of inhibition during the OFF cycle. And here again the same thing is happening. But the same thing happens under both normal conditions and after APB has been injected. So what this experiment then shows -- and of course one does this with many, many cells -- is that the hypothesis that the ON and OFF systems play a significant role in center-surround antagonism is wrong. And because of that, that hypothesis has been eliminated. All right. So then we can move on and ask the next question. But let me first draw this up again here. All right.
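The logic of this test can be sketched by comparing the two candidate surround mechanisms side by side; the gains are arbitrary, and both functions are cartoons invented here to make the inference explicit.

```python
def surround_from_on_channel(center, surround, apb):
    # If the surround of an OFF cell were inherited from the ON channel,
    # APB should abolish it.
    on_input = 0.0 if apb else surround
    return center - 0.8 * on_input

def surround_from_horizontal_cells(center, surround, apb):
    # Horizontal cells are not silenced by APB, so the surround survives.
    return center - 0.8 * surround

for model in (surround_from_on_channel, surround_from_horizontal_cells):
    no_apb = model(1.0, 1.0, apb=False)
    with_apb = model(1.0, 1.0, apb=True)
    print(f"{model.__name__}: no APB {no_apb:+.2f}, APB {with_apb:+.2f}")
# The recordings showed that OFF cells keep their surround antagonism under
# APB, which matches the horizontal-cell model and rules out the other one.
```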
Here we have the photoreceptors -- the cone photoreceptors -- the horizontal cells, and the ON and OFF cells. So the alternative hypothesis, that the surround inhibition is due to the horizontal cells, has therefore gained much greater acceptance. OK. So now we are going to move upstairs, to area V1, where we can ask the question: what were the transforms that we saw in V1? I presume all of you remember. What are the three major transforms we talked about? Who remembers? Come on. Surely you remember -- that most of the cells are orientation specific, right? Most of the cells are direction selective. And most of the cells are selective for spatial frequency. So those are three of the major transforms. The other two we talked about: one is the convergence of the ON and OFF channels onto single cells in the cortex. And the other was that there is a binocular input to many of the cells. So those were the major transforms we talked about last time. And again, I want to emphasize the importance of this by telling you that Hubel and Wiesel received the Nobel Prize for those discoveries of how V1 operates. OK? And of course, I mentioned before that Keffer Hartline -- when he discovered the ON and OFF systems using light that he shone into the eye, he called them the ON and OFF cells -- received the Nobel Prize as well. So those are truly, truly major, major discoveries, and they have triggered hundreds and hundreds of experiments trying to understand them better. And certainly one of the questions that was raised is how orientation selectivity, direction selectivity, and spatial frequency selectivity, for that matter, arise in area V1. And one prominent hypothesis was indeed that the reason we have ON and OFF channels is to create these transforms in the visual cortex. So what we can do, now that one has this magic bullet of APB, is record from a cortical cell and see how it responds after you inject APB, in contrast to before you injected it. So that's the big question we're going to ask next. All right. So here is an example of what a cortical cell's response looks like, using this kind of histogram, when a bar of light, in this case, is moved across the receptive field. Now this is a complex cell. And you can see it gives a vigorous response when the light edge goes across, and then it gives a vigorous response when the dark edge goes across. I'm showing this only for one direction of motion. So that is the natural response. So now the question comes up, what happens when you inject APB into the eye? A number of possibilities exist. The most basic one, I think, most of you would buy: that the light edge response is produced predominantly by the ON channel, and the dark edge response by the OFF channel. And if that is the case, then this response should be eliminated by blocking the ON channel with APB in the eye. And if you do that, that's exactly what happens. You can see here that under normal conditions, you get a vigorous response to both light and dark edges. After APB, only the dark edge response remains. So this then established that these responses to the different edges -- the light edges and dark edges -- are a product of the input from the retina to the geniculate, and then to the cortex, from the ON and the OFF channels, which converge on many cortical cells, as is the case in this particular cortical cell. So that then establishes this very basic fact. Now the next question we can ask: what about those transforms we talked about?
Let's first of all look at the transform of direction selectivity. So we can take a cell that is, in this case, directionally biased. And you can see what the response is before and after APB is injected. And what you see here: we move the bar across in one direction, and then we move the bar back across in the opposite direction. So the first half shows the downward movement, the second half the upward movement. Now what you see here is that the cell responds much more vigorously to one direction. It's not 100% direction selective -- I showed you some of those last time. This one is biased, about a four to one bias: much more response to the downward movement than to the upward movement. But the cell responds to both the light and the dark edge in both directions, and it lines up with the temporal arrangement here, proving indeed that this is a complex cell. Now then, when you inject APB into the eye, look what happens. You eliminate the light edge response, here and here, but the cell is still directionally biased. So direction specificity was maintained. And when this was studied in many, many cells -- even cells that were 100% direction specific -- blocking the ON channel did not eliminate directionality in the cortex. So that then indicates that the ON and OFF channels did not arise to bring about direction selectivity in the cortex. And that brings us to the second transform, which is orientation specificity. So we can do that next. And here we have an example using a very similar procedure -- again, a complex cell. When the APB is injected, the light edge response disappears. You move the bar across at different orientations, and this is the orientation specificity you get. Now, this is calculated here on the basis of only the dark edge response, because you want to keep it the same as what you're going to do here: since the light edge response is eliminated, you can see that the cell's orientation specificity is also unaffected, meaning that the orientation specificity of cortical cells is not due to the interaction between the ON and OFF channels. Similar experiments were also done with spatial frequency selectivity, and again, no effect was found. So this then led to the conclusion that the ON and OFF channels did not arise for the purpose of creating the transforms that we noted in our last lecture about single cells in the visual cortex. All right. So now what we can do here is come up with a silly model, just to make it memorable, which is to say that the ON and OFF channels flow into the cortex, and the attributes we talked about -- orientation, direction, and spatial frequency selectivity -- are produced by cortical filters. What you have in the cortex are all kinds of inhibitory interneurons and interactions among many, many neurons. And that kind of activity is what produces these attributes, not the interaction between the ON and OFF channels that flow into the visual cortex. All right. So now the next question we can ask is what happens with APB under photopic viewing conditions. We'll talk about scotopic, as well as photopic, vision. To do this, and to answer how it affects our visual capacities, we turn to behavioral experiments. In behavioral experiments, what you can do is present various kinds of stimuli. You can train a monkey -- in this case, this was done with monkeys -- to see how well they perceive a stimulus, and ask them to make an eye movement to that stimulus.
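A toy complex cell built by summing rectified ON and OFF inputs reproduces the qualitative result: blocking the ON channel removes the light-edge response, while a direction bias assumed here to come from cortical circuitry survives. Every number below is invented for illustration.

```python
def complex_cell(edge, direction, apb=False):
    """Sum of rectified channel drives, scaled by a cortical direction gain."""
    on_drive = 0.0 if apb else (1.0 if edge == "light" else 0.0)
    off_drive = 1.0 if edge == "dark" else 0.0
    direction_gain = 1.0 if direction == "down" else 0.25   # ~4:1 bias
    return (on_drive + off_drive) * direction_gain

for apb in (False, True):
    for edge in ("light", "dark"):
        down = complex_cell(edge, "down", apb)
        up = complex_cell(edge, "up", apb)
        print(f"APB={apb!s:5} {edge:5} edge: down={down:.2f}  up={up:.2f}")
# Under APB the light-edge rows go to zero, but the surviving dark-edge
# response keeps its 4:1 directional bias.
```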
So when you do this -- testing that hypothesis I showed you earlier about light increment and light decrement -- what we can do is train a monkey to first fixate. And then, after the monkey has fixated, you can present either a light incremental spot or a light decremental spot. The monkey's task is to make a saccade to that target. And of course, those spots will appear randomly in several different locations across trials, so the monkey doesn't know where the target is going to appear. And if he makes a correct saccade to it, he will get a drop of apple juice for a reward. So that is the procedure. And so now the question comes up -- how well does the monkey do in detecting light increment and light decrement? If you do that, luckily, one gets a very clear cut and dramatic result. Here is the section showing data for light increment, and here is the section showing data for light decrement. So here is an example. You measure two things: you measure the monkey's percent correct performance, and you measure the saccadic latencies to the target. So it shows here that under normal conditions, the monkey's performance is over 90%, and that he has a very nice distribution of saccadic latencies, with a mean of about 253 milliseconds. Then, when you apply APB to the eye, what happens is that the monkey's performance drops dramatically. He is just a little bit above chance, and his latencies are very, very long -- 406 milliseconds. So in other words, the monkey can barely perceive a light incremental stimulus, by virtue of having the ON channel in the retina blocked by APB. Now if you do the same experiment with light decrement, what you see is that the monkey's performance remains normal -- about 95% correct. And the latency distribution is also about the same -- 249 versus 251 milliseconds. So that then says that indeed the ability to detect light incremental stimuli has been devastated by the injection of APB, and this therefore raised the idea that the ON and OFF channels exist for the purpose of quickly and efficiently detecting light increment as well as light decrement. So that then was the basic finding that was obtained with this behavioral experiment, studying the monkey's performance in processing visual stimuli. Now we are going to get a bit more complicated, because what we talked about so far was under photopic conditions, meaning when your cones were fully functional. So the question is, what happens under scotopic conditions, meaning what happens when you're dark adapted? Now we have talked about this before. It was pointed out, initially by Schultze, that we have two basic classes of photoreceptors -- the rods and the cones -- and that the rods are for night vision. So now the question is, what happens under night vision conditions? And when this was found, it was truly, truly baffling. And I'm going to show you the data. It is shown here that under light adapted conditions -- with the same kind of data, but shown only as histograms in this case -- the monkey does equally well for light and dark. And when you apply APB, the monkey has difficulty seeing the light incremental stimulus, but shows little loss with the light decremental stimulus. OK? But when you do the same thing under dark adapted conditions, when only the rods are operative, a curious thing happens. The monkey doesn't see either light increment or light decrement.
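Summaries like percent correct and mean saccadic latency are straightforward to compute from trial records. The records below are invented to echo the numbers just quoted; they are not the actual data.

```python
def summarize(trials):
    correct = [t for t in trials if t["correct"]]
    pct = 100.0 * len(correct) / len(trials)
    mean_latency = sum(t["latency_ms"] for t in correct) / len(correct)
    return pct, mean_latency

normal = ([{"correct": True, "latency_ms": 253}] * 19
          + [{"correct": False, "latency_ms": 400}])
apb_light_increment = ([{"correct": True, "latency_ms": 406}] * 11
                       + [{"correct": False, "latency_ms": 500}] * 9)

for name, data in (("normal", normal),
                   ("APB, light increment", apb_light_increment)):
    pct, lat = summarize(data)
    print(f"{name}: {pct:.0f}% correct, mean latency {lat:.0f} ms")
```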
In other words -- and this is the short way to put it -- the monkey has become night blind. Now there is, among humans, a small population of individuals who are night blind, and what's the situation with them? It's called "night blind" because they see very poorly at night. And, in fact, when that is known, they only get a special kind of driver's license -- one saying you can drive in the daytime, but you can't drive at night. So now we have a monkey here, with APB injected, that is night blind. And you also see the same data reflected in the latencies, just like before. And it shows that the monkey is devastated both for light increment and light decrement, with a tremendous increase -- more than a tripling -- of latencies for the few trials that he did carry out correctly. So now, this being the case, it raises a big question -- and I'm sorry that things are getting so complicated, but the brain is complicated -- as to just what is the nature and arrangement of the rods in the retina, and how does that relate to the arrangement of the cones? OK? So that brings me to an interesting point. Again, as so often happens, whenever you're confronted with a puzzle, you have all kinds of hypotheses. And then, luckily, if you do the right kinds of experiments with the right kinds of tools, you can eliminate most of them and come up with the right one. So we can ask, first of all, without talking about APB at all now, simply a general question: what happens to the receptive fields of retinal ganglion cells under dark adapted conditions? And what happens is that the receptive fields become larger. Now this, initially discovered by Barlow in England, was disputed by other investigators, who said you just got scattering of light -- that's why it's larger. So it was kind of dismissed, but in the end it turned out that he was right, as you shall see. The other thing that happens is that the color selective response disappears in the retinal ganglion cells. And not only that, but it disappears behaviorally. So when you have a red rose, and you look at that red rose at night, when only your rods are functional, it looks like a black rose. The rods cannot process color. OK? They can only process light increment and decrement. OK, again, just to reiterate -- the receptive fields become larger. So the big question is, what on earth is happening? What kind of wiring is taking place, or what kind of connections are created, to create these two things? Now the second one is the big puzzle. The first one we can understand, because it's been shown that rods are of only one type and that they don't carry color information. Now to study this further: way back in the 1880s and 1890s, Cajal did a large number of experiments of this sort. If you remember, I told you that Cajal played a central role in using the Golgi stain, and that he and Golgi received the Nobel Prize in 1906 for their major discoveries about this. Now, he argued that the way this happens is that the rods and the cones connect differently with the ganglion cells. And so here is a picture of Cajal. He spent most of his life like this, looking through a microscope, endlessly looking at brain slices, and studying the shapes of the individual cells and drawing them -- you can see his pencil; he is in the middle of drawing a cell there. And so he speculated about this a great deal. And he was just a remarkable person. And he had written an autobiography.
And the way he speaks in it -- it is translated from Spanish -- he has an incredibly colorful way of talking -- "had," I guess I should say. And I want to quote some of this, because I think you'll find it interesting and amusing. So he said, "Since this impression received by the rods is different from that taken by the cone, it is necessary from every point of view that each of these specific impressions should be conveyed through the retina by a separate channel." The translation leaves a lot to be desired, but that's OK. So then he went on and said, "When we reason with common sense and lift a war club determined upon vigorous action, nature ultimately hears us." Very picturesque, as if nature had heard anyone, right? "Knowing what I was looking for" -- meaning he had a hypothetical bias, right? -- "I began to explore eagerly the retina of fishes and mammals. Finally, as a reward of my faith, there deigned to appear most clearly and brilliantly those two types of bipolar cells demanded by theory and guessed by reason." What he means by this is that there were two basic kinds of bipolar cells -- and not the ON and OFF. What he was talking about, in this case, were rod bipolars and cone bipolars. He said the rod photoreceptors connect with rod bipolars, and the cone photoreceptors with cone bipolars. So that's what Cajal established, and that finding was basically correct. But then he went on to say that these two kinds of bipolars hook up with two types of ganglion cells, thereby forming separate channels to the brain. So that was an interesting conclusion, and for many years it was accepted. And interestingly enough, as so often happens with hypotheses, this conclusion -- even though he had the drawings, made looking through the microscope, showing that there were these separate channels -- turned out to be largely, almost entirely, wrong. Now what do we mean by that? Well, people began to do all sorts of very careful intracellular recordings in the retina, using much more sophisticated anatomical procedures than Cajal was able to use, thanks to new developments in the field. And they discovered, first of all -- let me back up a second. This is sort of the model that he had proposed. He had the cone model and the rod model, not distinguishing between the ON and OFF. And the model that subsequently emerged was that ganglion cells in most of the retina actually receive a convergent input -- except in the fovea, because you don't have rods there -- so that you don't have a doubling of ganglion cells, some for rods and some for cones; rather, both the rods and the cones, through separate pathways, feed into the same ganglion cells. So then this was analyzed in much, much more detail. And now I'm going to point this out to you by providing you with an overview of the retinal connections. OK. So here we have the three basic classes of cones. All right? And we have the rods. All right? Then, if you look at the cones, each cone gives rise to at least two bipolar cells -- an ON and an OFF bipolar cell. But the rods, in most species, have only a single kind of bipolar cell, and we can refer to that as an ON bipolar cell, because the synapses are all sign-inverting synapses using the mGluR6 neurotransmitter receptor site. So that's what we have there. Now if you look further down here at the ganglion cells, what we have, of course, is that the ON and OFF bipolar cells from the cones feed into the ON and OFF ganglion cells.
So now the question is, what happens to the rod ON bipolar cells? How do they connect to these ganglion cells, since they do not form a separate pathway? Things got very complicated here in the course of evolution, where it was very important to try to conserve things as much as possible, and that's why we do not have separate pathways for rods and cones. If we did, we'd need a huge eye, and we'd need at least twice as many ganglion cells as we have at the present time, which is about a million in each eye. OK, so what really happened -- it is almost dumbfounding -- is that a so-called amacrine cell -- I told you there are different classes of amacrine cells -- one of them, the so-called A2 amacrine cell, is a cell that receives a direct input from the rod ON bipolar cells. Now that amacrine cell connects onward in two different ways. It makes a gap junction connection with the ON cone bipolar cells, and it makes a glycinergic synaptic connection with the OFF ganglion cell. So let's label this. What we have here -- this is the inner plexiform layer again, and here is the outer plexiform layer. And what we have here -- just to remind you of what you already know -- is a sign-inverting synapse for the ON, and a sign-conserving one for the OFF bipolars that connect with the cones. And now let's proceed to the A2 amacrine cell, which is fed by the rods and feeds into these cells. And what we have here is a so-called glycinergic synapse. That's a real synapse. It's inhibitory. And then here we have what is called a gap junction. I'm sure all of you know what these things are. A gap junction is what you call an electrical synapse. So it transmits the signal without any neurotransmitters involved. It directly activates -- in this case, this fiber here -- and drives it. So what you create in the inner retina, then, is the double-ended system for the rods that is created in the outer retina for the cones. Here are the OFF, and here are the ON, inputs to the ON and OFF ganglion cells. Now, because of this arrangement, what happens is that the size of the receptive fields under dark adapted conditions is bigger, because of a wider range of connections from the rods to the ganglion cells. So all this nice wiring finally, after many, many years of experiments, has been clarified. Sorry it is so complicated. That's just how it is. It explains the claim that you have bigger receptive fields at night than in the daytime. And that has now been generally accepted. OK, so now the central conclusion that we come to is that the ON and OFF channels have emerged in the course of evolution to enable organisms to process light both incrementally and decrementally, to be able to see things rapidly and accurately. All right? So that is the prime function. Nature has gone to incredible lengths to modify the basic organization of the visual system and the retina to create a way to process both light increment and decrement rapidly. Now, that may provide you with a cute little mnemonic, if you will. Here we have a fish. Fish also, believe it or not, have ON and OFF channels. Now what happens is, if down below here there is a predator -- a large fish that is trying to catch this fish -- because of the sun shining on it, it reflects the light, and so this fish would see this particular predator by virtue of light increment. Got it? By contrast, if you have an osprey up here that's trying to catch this fish, that osprey against the sky would be seen as a dark object. OK?
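The routing through the A2 amacrine cell can be sketched as one signal fanned out with two signs: conserved through the gap junction into the ON pathway, inverted through the glycinergic synapse toward the OFF pathway. The +/-1 units below are a deliberate cartoon of that sign logic, nothing more.

```python
def a2_amacrine(rod_on_bipolar):
    """Fan one rod ON bipolar signal out to both ganglion cell types."""
    to_on_pathway = rod_on_bipolar     # gap junction: sign-conserving
    to_off_pathway = -rod_on_bipolar   # glycinergic synapse: sign-inverting
    return to_on_pathway, to_off_pathway

for light_change in (+1.0, -1.0):      # light increment vs decrement
    rod_bipolar = light_change         # the rod ON bipolar depolarizes to light
    on_drive, off_drive = a2_amacrine(rod_bipolar)
    print(f"light change {light_change:+.0f}: "
          f"ON pathway {on_drive:+.0f}, OFF pathway {off_drive:+.0f}")
# A single rod bipolar type thus yields a double-ended rod system in the
# inner retina, mirroring what the cones achieve in the outer retina.
```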
And so now, since this fish has both ON and OFF cells in its retina, it's going to do this. You ready? He's going to escape, OK? So that then, in a nutshell, tells us why we have ON and OFF channels in the visual system -- namely, to enable us to process both light increment and light decrement effectively. And as I've mentioned to you before, whenever you read or write, you see things mostly by virtue of your OFF system, because you have dark print or a dark pen on a light background. But of course the reverse is also the case. So this was known for ages and ages. And that's why eventually, instead of having print in which you have a black page with white letters, you now have a white page with black letters, because that's much more economical to achieve. And you can read both light incremental and light decremental letters equally well, because we have these ON and OFF channels. OK. So on the basis of this, we are ready to summarize why we have this remarkable duality in the retina, originating in the ON and OFF channels, whose circuits, I presume, you by now understand pretty well. So just to reiterate, for the umpteenth time, which is easy to remember by now: all photoreceptors hyperpolarize to light. Secondly, the cone driven ON and OFF channels originate at the level of the retinal bipolar cells. OK? For the ON bipolars, you have a system which involves sign-inverting synapses. OK? The neurotransmitter released by the photoreceptors is glutamate. And when you come to the receptor sites, the mGluR6 receptor is the one that inverts the signal for the ON bipolar cells. And mGluR1 and 2 are the neurotransmitter receptor sites for the OFF bipolar cells. Now APB -- it is important to remember this -- is a glutamate analog. A lot of people make a mistake and think it's an antagonist. It's a glutamate analog. So what happens is that it blocks the ON bipolars. Now let me say just a couple more words about this. We move our eyes about three times a second. Every time you move your eye, some mechanisms in the retina wipe the slate clean, so to speak, because if they didn't, the image that falls on the retina after you move your eye would interfere with the image that had fallen on the eye beforehand. So there's an incredibly rapid system that breaks down the molecular arrangement of glutamate and many other transmitters in the retina, so that with each movement of the eye you can see things clearly. Now, there is an exception to that, of course, under extreme conditions. We have what are called "afterimages," like when you look at the sun for a while. Then you will see an afterimage lingering on in your eye. But under normal illumination conditions, the slate is wiped clean with every shift of the eye. Quite amazing. All right. Then: APB blocks the ON response of retinal ganglion cells. The OFF response and center-surround antagonism are unaffected. APB blocks light edge responses in the cortex, but has no effect on orientation, direction, and spatial frequency selectivity. APB reduces the sensitivity for light increment. And the ON and OFF channels for the rods arise in the inner retina. I should add here that there is only one kind of rod bipolar, which is the ON type; in most primates there are no OFF-type rod bipolars. The rod ON and OFF channels are created in the inner retina by the amacrine cells, predominantly by the so-called A2 amacrine cell, as I told you.
Lastly, excitatory signals are generated for both light increment and for light decrement by virtue of the ON and OFF channels. So those are the major conclusions that we need to draw about this remarkable achievement of the visual system: by virtue of having created the ON and the OFF channels, it has created, from the single-ended system of the photoreceptors, a double-ended system at the level of the bipolar cells. So that then -- sorry it is so complicated -- is the basic layout of the retina and the ON and OFF channels. Next time we are going to talk about yet another subdivision of retinal ganglion cells -- the so-called midget and parasol cells. I think I've mentioned to you before that we have several different classes of retinal ganglion cells -- not just ON and OFF. And the so-called midget and parasol cells have ON and OFF subdivisions. And then there are several other cells that do all kinds of other things. We will talk about some of those. But the overwhelmingly largest number of cells in the retina are the midget and the parasol cells, and so they deserve the greatest scrutiny as to why we have those two systems. And that's what we are going to discuss next time. So that pretty well finishes what I had to cover today, and so if any of you have any questions, I will be happy to try to answer them for you, as long as they pertain, of course, to the ON and OFF channels. Is anybody not clear on how APB works -- that it's an analog, that it fills the receptor sites of the ON bipolar cells, rendering them insensitive to subsequent light stimulation? Now I should add maybe one more thing here, which is that unlike glutamate, which is broken down in milliseconds, this artificial substance, APB, has no natural uptake mechanism. And so it lingers on, and that's why it is so effective. And it has to be washed out of the eye -- you typically have to wait 10 to 15 minutes for the eye to return to normal, because there's no rapid uptake mechanism for the APB, in contrast to the glutamate. Well, it sounds like I was reasonably clear. I hope you can absorb this and marvel at the inventiveness, if you will, of evolution in having created these incredible mechanisms that we see in the retina. It's just dumbfounding. Yes, it's just incredible. So that then is what we are going to finish with today. And I will see you Wednesday, when we talk about the midget and parasol systems, which are also very interesting, by the way. And I hope you will enjoy hearing about what those two systems are for.
MIT_904_Sensory_Systems_Fall_2013
13_Review_The_visual_and_oculomotor_systems.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right, so today we're going to have a review of the visual and oculomotor systems that I've covered so far. And what I'm going to do is go over many, many basic facts in a rather quick fashion, which will refresh your memory of what we have covered and will also make you more aware of what you want to look at carefully when you go through the material on the website, on Stellar, and in the assigned readings. I want to remind all of you again that you will have to put together that paper on the accessory optic system, which I will mention very briefly at the end of the review. That's an old paper published in the 1960s, which was a major discovery at the time, and your task will be, first of all, to say what was discovered there-- that you can cover in a paragraph-- and then to add what people have contributed to the study of that area since that original paper. All right, so anyway, I will also say a bit more at the end about the exam, which is going to take place on Wednesday right in here and which is going to consist of multiple choice questions. All right, so to begin with, let's talk about the basic wiring of the visual system that we have covered. That is outlined here for the primate and for the human. And I should mention, as I had noted in the initial lecture, that this is different from many lower-level species, in which the two eyes look sideways and each eye sends all of its retinal ganglion cell axons across to the other hemisphere of the brain. Now, this big change occurred when the eyes moved to the front, and we have already discussed why that may have happened. As a result of this, if you imagine cutting each retina vertically in half, you divide it into the nasal and temporal hemiretinae. And it so happens that the nasal hemiretina of one eye and the temporal hemiretina of the other eye project to one side of the brain, and the converse holds for the other side. The connections are made to several areas-- most notably for our purposes the lateral geniculate nucleus, but also the superior colliculus and several other structures that we have talked about, including the accessory optic system. Once the connections come up to the cortex, several cortical areas-- we'll talk about that in a minute-- have evolved that are involved in progressively higher levels of visual analysis. Now, this circle here-- hopefully you remember-- is called either the Vieth-Muller circle or the horopter. And it was shown by a clever experimentalist that if you put a spot anywhere along that circle while the person fixates at this point on the circle, all of those points impinge on corresponding points in the two retinae. However, if an object is beyond the circle or closer to the eyes than the circle, it hits non-corresponding points in the two retinae. And that non-correspondence-- that disparity-- is actually used for depth perception, as we have discussed and as I will mention again when it comes to stereopsis. So that is the very, very basic wiring arrangement.
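That zero-disparity property of the Vieth-Muller circle can be checked with a little coordinate geometry. The sketch below is a toy calculation with made-up eye positions and distances-- not taken from the lecture's figures. It treats each eye as a point and defines disparity as the difference between the two eyes' angular offsets of a target relative to the fixation point.

```python
import numpy as np

# Toy Vieth-Muller (horopter) geometry: points on the circle through the two
# eyes and the fixation point land on corresponding retinal points, i.e.,
# give (near) zero disparity. All coordinates are illustrative.

def disparity(p, f, left=(-3.0, 0.0), right=(3.0, 0.0)):
    def offset(eye):
        ex, ey = eye
        ang_p = np.arctan2(p[1] - ey, p[0] - ex)
        ang_f = np.arctan2(f[1] - ey, f[0] - ex)
        return ang_p - ang_f
    return offset(left) - offset(right)   # radians; ~0 means corresponding points

# Circle through the eyes (+-3, 0) and a fixation point F at (0, 40):
# its center sits on the y-axis at (0, c), with c chosen so all three fit.
c = (40.0**2 - 3.0**2) / (2 * 40.0)
r = np.hypot(3.0, c)
f = (0.0, 40.0)

theta = np.deg2rad(75.0)
on_circle = (r * np.cos(theta), c + r * np.sin(theta))  # a point on the horopter
nearer    = (10.0, 25.0)                                # inside the circle

print("disparity on horopter :", disparity(on_circle, f))  # ~ 0
print("disparity off horopter:", disparity(nearer, f))     # clearly nonzero
```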
And then if you proceed from here and look at the retina and the lateral geniculate nucleus in a bit more detail, first starting with the retina: I want to point out, first of all, that there are two different kinds of photoreceptors. You all know this very well by now-- you knew it before you came to class. You have the rods and the cones. There are three basic types of cones-- red, green, and blue, which are more appropriately referred to as long, medium, and short wavelength-selective cones. And then we have the rods. Now what happens is that the light comes in-- in this diagram, from the bottom. The light comes in, it passes through many of the cells of the retina, and it impinges on the receptors, which face away from the light, against the pigment epithelium. And as I've mentioned to you, there have been some interesting questions as to why this strange arrangement emerged. Nobody had predicted it: before we had any anatomy, people simply assumed that if there were receptors, they would face the light. So it's certainly a strange, unusual arrangement, and a lot of speculation has been advanced as to why it happened. I will briefly mention two of those ideas. One is that the photoreceptors sit right against the pigment epithelium, which in diurnal animals is black and absorbs photons, thereby preventing the scattering of light; as a result, you can attain high acuity. And that is supported by the fact that if you assess the visual capacities of albinos, who lack a black pigment epithelium because they lack pigment-- that's the definition of being an albino-- most of them have very poor vision, because the photons come into the eye, scatter all over the place, and activate many photoreceptors rather than just those that the incoming photon would hit directly. So that is one idea. Another factor, which I may not have mentioned, is that these photoreceptors-- the rods in particular-- contain little packets, each of which holds the molecules that are sensitive to incoming light. There are about 1,000 of these packets in each rod, and each of those packets contains about 10,000 molecules. So we are talking about gigantic numbers. Now, these roughly 1,000 packets are not there for life. The packets gradually disintegrate and get replaced by new ones-- roughly every 10 days you lose a packet and gain a new one. That means some of this material gets sloughed off, and one of the reasons people think the photoreceptors ended up facing away from the light is so that they could be close to the pigment epithelium, where anything that is sloughed off can be absorbed rather than simply being thrown into the eye itself, into the vitreous-- because if that were to happen over many, many years, the vitreous would become cloudy and you couldn't see. So those are two possible reasons why this strange arrangement evolved. And you see it in virtually all species; there are just a few species-- most of them actually in the sea-- that have photoreceptors facing toward the light. All right, and then if you proceed from here, the other amazing thing that was discovered is that all photoreceptors hyperpolarize to light. Again, they do the opposite of what people had expected.
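As a quick back-of-envelope check on those numbers, multiplying the lecturer's round figures gives the molecule count per rod-- treat the output as order-of-magnitude only:

```python
# Arithmetic on the rod numbers quoted above (the lecturer's round figures).
packets_per_rod = 1_000        # membrane discs ("packets") per rod outer segment
molecules_per_packet = 10_000  # light-sensitive pigment molecules per packet

molecules_per_rod = packets_per_rod * molecules_per_packet
print(molecules_per_rod)       # 10,000,000 molecules in a single rod

# Packets are continually sloughed off and replaced, so that material has to
# be absorbed somewhere -- hence the proximity to the pigment epithelium.
```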
Coming back to that point: you'd think that when photons come in, they would activate the photoreceptors in the sense of depolarizing them, sending a signal downstream through the eye. It turns out the opposite happens: the release of the neurotransmitter increases when there's a darkening rather than an increase in light. That's an important fact to remember, and it's true across the board: all photoreceptors hyperpolarize to light. You know this well already-- I must have said it about 10 times by now. Now, the amazing thing is that when you come to the bipolar cells, the next set of cells in the chain, it was discovered that-- among several different types, like those feeding the midget and parasol cells-- there are two basic classes, the ON and the OFF. And this is accomplished by having two kinds of synapses onto the ON and OFF bipolar cells: sign-conserving ones and sign-inverting ones. This is accomplished in the ON bipolars by virtue of the mGluR6 receptor site, and in the OFF bipolar cells by virtue of the mGluR1 and 2 receptor sites. So that means you now have signals in some of these cells when there's an increase in light and signals in others when there's a decrease in light. That's the situation for the ON and OFF bipolars. And then, when you come to the level of the ganglion cells, the two major classes of ganglion cells are likewise the ON and the OFF. I'll talk about that in more detail in just a minute. Now, the other interesting, curious thing is what happens when you look at the rods. The rods connect to their bipolar cells through sign-inverting synapses only, and rod bipolars come in only one type, at least in humans and in primates. So the creation of ON and OFF versions for the rod system is done in the inner retina: the rod bipolar makes a synapse onto an amacrine cell, which in turn makes a glycinergic synapse onto the OFF pathway and a gap-junction connection onto the ON bipolar. In this way the rod system, just like the cone system, becomes a double-ended system. So hopefully you all remember this. I know it's complicated, but that is something one doesn't have a choice about-- that's simply how it is. All right, so now we move on, and we are going to look at the lateral geniculate nucleus. Here's a cross section of it that I've shown you before. This is input from a monkey retina to the lateral geniculate nucleus-- the human is very similar. The six layers consist of two major types: the so-called parvocellular and the magnocellular layers. And what was discovered is that the parvocellular layers get input from the midget cells, which we'll talk about in just a minute, and the bottom two layers, the magnocellular layers, get input from the parasol cells. And then what happens is that as you go from central vision to peripheral vision, there's a huge change in the relative percentages of midget and parasol cells in the retina and in the lateral geniculate nucleus. In the foveola itself you don't have any parasol cells, but in the fovea you do, at a ratio of about eight to one. Then, as you go out to the periphery, they eventually become equal in number. So there's a huge emphasis on the midget system in central vision and a much increased emphasis on the parasol cells in peripheral vision. And this is reflected in the geniculate, which has six layers for central vision.
Beyond about 18 degrees of eccentricity, it has only four layers, because in the periphery the midget and parasol inputs are pretty much equal in number, as reflected by those four layers. So those are the basic arrangements for the lateral geniculate nucleus. Now, before we move on, let me say one more thing. The receptive field properties of cells in the retina-- of retinal ganglion cells, I should say-- and in the lateral geniculate nucleus are highly similar, virtually identical: you have circular receptive fields with center-surround antagonism. All right, now if you move up to the visual cortex, a huge change arises-- the beautiful discoveries made by Hubel and Wiesel, for which they received the Nobel Prize. This is just a quick view of the monkey brain; here is area V1. I'll come back to the other areas in a minute. The nice thing about the monkey is that this area is [INAUDIBLE], as I had told you, and because of that it's easy to study the cells and their properties in area V1. All right, so when one examines the properties of single cells in area V1, it was discovered that some major transformations had occurred relative to the inputs from the lateral geniculate nucleus. I will summarize these major transforms in just a second, but first I will tell you that there is a differential input from the parvocellular and magnocellular layers, which project respectively to layers 4C-beta and 4C-alpha. And then there's yet another class of cells originating in the retina that projects to the interlaminar layers of the geniculate, and from there to the upper portions of the visual cortex. So now, if one looks in detail at the properties of these cortical cells, which we have discussed quite a bit, you can refer to the changes as transforms-- the transforms performed on the visual input by the cortical cells. When you record from these cortical cells, you find that one big transform is that the overwhelming majority of them become orientation selective. Many cells become direction selective-- virtually all simple cells and about half the complex cells-- so direction selectivity becomes very important; we'll talk about that in a bit more detail later on. Then, some cells are spatial frequency selective. Many cells get an input from both eyes. And there's a convergence of input from the ON and OFF channels; the same is true for some cells that get a convergent input from the midget and parasol systems. So those are the major transforms that you see in the visual cortex. All right, having made these discoveries, people came to the question of how all this is organized in the visual cortex. The first point I had made is that there's a topographic layout of the visual field in the visual cortex, but with much more area allocated to central vision than to peripheral vision, essentially copying the relative percentage of cells that already exists in the retina for central versus peripheral vision. And because the thickness of the gray matter in cortex is roughly two millimeters-- and it's constant-- more cortical surface has to be allocated to central vision than to peripheral vision. People then studied the spatial arrangement and organization of the visual cortex in detail. The initial model that was proposed, if you remember, is the Hubel and Wiesel model, according to which in one direction you have an alternation of left-eye and right-eye ocular dominance columns: left, right, left, right.
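The "more cortex per degree for central vision" point can be made numerical. A common textbook approximation-- not a formula given in this lecture, so treat the form and the constants as illustrative-- is that the linear cortical magnification factor falls off roughly inversely with eccentricity:

```python
# Rough sketch of cortical magnification: millimeters of V1 per degree of
# visual field as a function of eccentricity E. The inverse-linear form and
# the constants are common textbook approximations for the monkey, not
# values from this lecture -- illustrative only.

def magnification_mm_per_deg(ecc_deg, m0=12.0, e2=1.0):
    return m0 / (ecc_deg + e2)

for ecc in [0.5, 2, 10, 40]:
    print(f"{ecc:5.1f} deg eccentricity -> "
          f"{magnification_mm_per_deg(ecc):5.2f} mm of cortex per deg")
# Central vision gets an order of magnitude more cortex per degree than the
# periphery, which is why equal-sized stimuli activate very unequal areas.
```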
And in the other direction of that model, you have a systematic change in the preferred orientation of the cells. Now, this model didn't fare all that well, because the cortex is not as neat as the model proposed. An alternative model was the radial model. And the last one I'm showing here, which I call the swirl model, is not really a model at all, because some very clever experiments carried out by [INAUDIBLE]-- who actually did optical recording-- demonstrated that the visual cortex from the top looks something like this: you do indeed have a systematic arrangement of orientations within left- and right-eye columns, but it's not a strictly linear arrangement; it's kind of a swirly one. So that then established the layout of the primary visual cortex. OK, now the other important thing that we had emphasized is that-- contrary to a popular idea people have had, that cells in the brain are feature selective, meaning that each extracts a specific feature from the visual scene, say one cell extracting color and another extracting a particular face-- it turns out that this is a false impression. Instead, what is happening is that any given cell processes many different kinds of visual information, and it's the activity of thousands and thousands of cells in a network that gives rise to the percepts you experience. Now, that's extremely complicated-- 10 times more complicated than any computer-- and it is something that, to a large extent, still has not been solved. We don't know how a person recognizes a face. You can say, oh, it takes place in various parts of the brain and so on, but exactly how that's done physically is something that still remains largely a mystery. All right, so now let's move on and talk about extrastriate cortex. Here's a diagram of the monkey brain again. Here's area V1. Once you get close to the lunate sulcus here, V2 begins and folds under; then inside there we have V3; then the cortex actually folds back out again and you have area V4 here; and then you have areas MT and MST right here. In addition you have, of course, your inferotemporal cortex, which plays a very important role in complex analyses such as faces. And then you come to the frontal lobe, where you have the frontal eye fields and the medial eye fields, which also process visual information, but mostly in the service of eye movements, as I will discuss later on. So that, in a nutshell, is the arrangement. Much of the work done over the past dozen years or so has been to examine what these extrastriate areas do for vision, and I'll come back to that when we talk about higher-level visual processing. Now, the basic fact is that there are more than 30 visual areas and more than 300 interconnections among them. Initially the idea-- the feature detection idea-- was that each of these areas is specialized for analyzing a particular type of percept. But it became increasingly evident that these areas interact with each other tremendously and perform these very complex analyses through active networks. Now, the basic major cortical visual areas: V1 I just talked about, V2 I mentioned, then V3, V4, and MT. When you come to the temporal cortex, you come to the inferotemporal region that I just mentioned. And in the parietal cortex we have the lateral intraparietal area, the ventral intraparietal area, and the medial superior temporal area.
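Going back to the swirl layout for a moment: pinwheel-like orientation maps of that sort can be mimicked with a standard toy construction-- band-pass filtering complex noise and taking half its phase angle. This is only an illustration of what such a map looks like, not the analysis used in the optical-recording experiments mentioned above.

```python
import numpy as np

# Toy "swirl" orientation map: band-limit complex white noise and take half
# its phase. This standard construction yields pinwheel-like maps that
# qualitatively resemble optical-imaging data.

rng = np.random.default_rng(0)
n = 128
noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Keep an annulus of spatial frequencies in the Fourier domain.
fx = np.fft.fftfreq(n)
fy = np.fft.fftfreq(n)[:, None]
radius = np.sqrt(fx**2 + fy**2)
bandpass = (radius > 0.05) & (radius < 0.10)

smooth = np.fft.ifft2(np.fft.fft2(noise) * bandpass)
orientation_map = 0.5 * np.angle(smooth)   # preferred orientation, (-pi/2, pi/2]

print(orientation_map.shape, orientation_map.min(), orientation_map.max())
# Plotting orientation_map (e.g., with matplotlib's hsv colormap) shows
# smooth orientation gradients punctuated by pinwheel singularities.
```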
So those, then, are some of the major areas. And as I've already noted, in the frontal cortex we have the frontal eye fields and also the medial eye fields, which are not listed here and which also play a role in eye movements-- perhaps to a lesser extent in visual analysis as such. Many of the cells there do have visual receptive fields, although they are very hard to discern; they much more clearly have motor fields than visual fields. All right, so now we are going to go back to the beginning and look briefly at the so-called ON and OFF channels, which we talked about a lot. Again, to re-emphasize: all photoreceptors hyperpolarize to light. And then, because of the two major classes of neurotransmitter receptor sites in the bipolar cells, you create a double-ended system from a single-ended one-- the so-called ON and OFF channels. Now, these systems were discovered initially by Keffer Hartline, who received the Nobel Prize for that remarkable discovery. He thought at the time that the ON system signaled when a stimulus came on and the OFF channel signaled when it went off. That idea turned out to be all wrong, because that's not what these cells are about. What these cells are about, as I've pointed out repeatedly, is that they allow both light increment and light decrement to be processed with an excitatory response. Because of the physical nature of light, some objects in the world reflect light and some absorb it; as you look around, some objects look black and some look white, or whatever. And to be able to rapidly process a dark object as well as a light object, you need excitatory signals going to the central nervous system in both cases. So we can say, first of all, that we have these two cell types, and that they come with center-surround antagonism. And let me add one more fact here, which connects with adaptation: the average maximum firing rate of a retinal ganglion cell is maybe about 400 to 600 hertz, which is a rather limited range, and yet you have to deal with light levels spanning practically 10 log units. Because of that, center-surround antagonism has evolved so that these cells always look at local contrast changes rather than absolute levels. So then, if you look at the ON and OFF cells: in accordance with center-surround antagonism, if you shine a small spot of light in the center of the receptive field, ON cells fire when you turn it up and OFF cells fire when you turn it down; but when you use a much larger spot, you get a lesser response because of the surround antagonism. That's the basic principle of these two cell types. And then I told you about the experiments in which 2-amino-4-phosphonobutyrate-- called APB for brevity-- had been used. I told you about two types of experiments: single-cell recordings in various parts of the brain, and behavioral studies. Before I tell you what the single-cell recordings showed, let me first ask what APB does. APB is what? Anybody remember? It's a neurotransmitter analog-- and what neurotransmitter is it? AUDIENCE: [INAUDIBLE] PROFESSOR: Very good. Glutamate.
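Before continuing with the APB story, here is a minimal difference-of-Gaussians sketch of the center-surround idea-- the standard textbook receptive-field model, with made-up parameters rather than anything fitted to the lecture's data:

```python
import numpy as np

# Difference-of-Gaussians sketch of a center-surround receptive field.
# The point: the response tracks local contrast -- a small centered spot
# drives the cell hard, while a large uniform spot largely cancels out.

x = np.linspace(-5, 5, 1001)

def gaussian(x, sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()                 # normalize so each lobe sums to 1

rf = gaussian(x, 0.5) - gaussian(x, 2.0)   # ON center minus wider surround

def response(spot_radius):
    stim = (np.abs(x) <= spot_radius).astype(float)  # uniform spot of light
    return float(np.dot(rf, stim))

print("small spot:", response(0.5))   # strong response (center only)
print("large spot:", response(5.0))   # near zero (surround cancels center)
```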
All right, so what you do is inject this substance-- an artificial substance-- into the eye, and it blocks the ON cells from being able to respond to incoming light but does nothing to the OFF cells. Now, there had been all these different hypotheses as to why we have the ON and OFF channels. One was that they create center-surround antagonism; another was that they create orientation and direction selectivity in the cortex. But it turned out that when you injected APB into the eye and blocked the ON channel, the OFF cells still had center-surround antagonism, and the cells in the cortex still had orientation and direction selectivity. So these two systems did not arise for the purpose of creating those basic attributes, which are so central to analyzing the visual scene. Now, the second important finding was that when you did a behavioral study and asked monkeys to detect light increments and light decrements, there was a huge deficit in detecting light increments but no deficit in detecting light decrements. So these observations, and many other studies analyzing why there are ON and OFF channels, led to the conclusion-- which I think is quite valid-- that these two systems evolved to enable organisms to respond quickly to both light-decremental and light-incremental inputs. And you probably remember the quick little movie I showed you of a fish in the ocean-- fish also have ON and OFF channels, of course. If there's a bird in the sky, like an osprey, that is seen by virtue of light decrement, the OFF system tells the fish, oh, there's a bird up there, so it can escape. And if a predator comes from below, lit up by the sunshine, the ON system responds and enables the fish to escape. So that's one example of the prime function of the ON and OFF channels. All right, so that's the basic story. To conclude: the ON and OFF channels emerged in the course of evolution to enable organisms to process both light-incremental and light-decremental information rapidly and effectively. So that's the conclusion, in a nutshell, about the ON and OFF channels. Now we can move on and look at the so-called midget and parasol cells, whose counterparts had been discovered initially in the cat, where they were called the X and Y cells. In the monkey they're called midget and parasol because of how they look anatomically: the midget cells are small, with very small dendritic arbors, whereas the parasol cells are much bigger, with much larger dendritic arbors that look like an umbrella. So those two systems were discovered, and statistical analysis revealed that they are totally separate types of cells-- not a continuum. The question then became: why did these two systems evolve? Why did nature go to such trouble to make sure that they are separate in the retina and separate in the geniculate of the monkey, while in the cortex the two systems sometimes remain separate and sometimes converge, as I noted among those transforms in area V1? So now look at this-- you've seen it several times now. In the midget system, the receptive field center in central retina consists of just a single cone, so this system should be able to tell you about color, whereas the parasol system has mixed cone inputs both in the center and in the surround.
Furthermore, the parasol system responds much more transiently than the midget system, so temporal information can be processed more effectively by the parasol system than by the midget system. Those were the initial observations at the single-cell level. Then behavioral studies were carried out in which either the midget or the parasol system was selectively blocked, and performance was tested both where those systems had been blocked and where they were intact. When this was done, some major findings emerged. Before I tell you about those, let me reiterate what the connections are. Here we have the midget and the parasol cells, as well as the koniocellular cells. They project through the geniculate up to the visual cortex. From there, there has been lots of debate about the nature of the connections to higher areas in the brain-- we talked about that quite a bit-- and some beautiful studies have shown that the input to area MT in the parietal lobe is dominated by the parasol system, whereas the input to V4 in the temporal lobe is about equal for the two systems. So that is the basic arrangement. And so the question then comes up: what is the contribution of these two systems, the midget and the parasol? Experiments were carried out in which lesions were made in either the parvocellular or the magnocellular geniculate, and the monkey was then tested, as I've said, both in intact regions of the visual field and in regions where the midget system or the parasol system had been blocked. One additional fact: when you block both of these with a lesion of the lateral geniculate nucleus, the monkey, for the most part, becomes blind. So these two systems really are central for processing visual information. All right, so now, to look at what kinds of deficits arise, a monkey can be trained on a whole bunch of different tasks-- we talked about these: color vision, texture perception, pattern perception, shape perception, brightness, [INAUDIBLE] scotopic vision, contrast sensitivity, stereopsis, motion perception, and flicker perception. We'll take the first group first. It was found that there were severe deficits after a parvocellular lesion-- that is, when the midget system was blocked-- in color vision, texture perception, pattern perception, and shape perception, as well as in contrast sensitivity, and a severe deficit in stereopsis. None of those showed a deficit with the magnocellular lesion, meaning when the parasol system was eliminated. But when you examine motion perception and flicker perception, there was a moderate to major deficit where the parasol system was missing. So that established-- at least in some people's minds-- why these two systems emerged in the course of evolution, and I'll come back to these findings later. A summary statement to that effect is shown here. If you look at the ability to process spatial frequency, the midget system can process up to much higher spatial frequencies; the obverse is the case for temporal frequency, where the parasol system can follow much higher rates of motion or flicker, as you can see in this little diagram. So the midget system extends the range of vision in the spatial frequency domain, and the parasol system extends it in the temporal frequency domain. So that's why these two systems have evolved.
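A caricature of that sustained-versus-transient distinction can be written in a few lines: the same step of light passed through a slow low-pass filter (midget-like) versus a fast differentiating filter (parasol-like). The time constants are made up for illustration; this is not a fitted model.

```python
import numpy as np

# Toy sustained (midget-like) vs. transient (parasol-like) responses to the
# same step of light. Time constants are invented for illustration.

dt = 0.001                                  # 1 ms steps
t = np.arange(0, 0.5, dt)
step = (t > 0.1).astype(float)              # light comes on at 100 ms

def low_pass(signal, tau):
    out = np.zeros_like(signal)
    for i in range(1, len(signal)):
        out[i] = out[i-1] + dt / tau * (signal[i] - out[i-1])
    return out

midget_like  = low_pass(step, tau=0.050)            # sustained response
parasol_like = step - low_pass(step, tau=0.020)     # transient response

print("midget-like at 400 ms :", round(midget_like[400], 3))   # still high
print("parasol-like at 400 ms:", round(parasol_like[400], 3))  # decayed away
```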
And then if you look at this in terms of the relative numbers of cells in the retina devoted to these two attributes: I told you that in the foveola there is no input at all to the parasol system. So the fine vision that the fovea makes possible for you is due to the fact that that region is dominated by the midget system. Then, as you go progressively toward the periphery, that ratio changes, as I just showed you, because increased emphasis has to be placed on seeing motion and rapid changes in the periphery-- and that requirement is handled by the parasol system's increased number of cells there. So now we're going to move on and talk about various aspects of visual processing, starting with color vision and adaptation. As I've shown you before, one of the beautiful advances-- made initially, believe it or not, by Newton, as I think I mentioned-- was the invention (I shouldn't say discovery) of the color circle. Now, this invention arose in part because it had been established-- it's a well-known fact-- that we don't have opposites along these axes: you don't have a yellowish blue, and you don't have a reddish green. But any combination that's not an opposite on this color circle, you do have: yellowish red, yellowish green, and so on. The color circle was then elaborated upon over many years; this is a slightly modified version of what Newton invented. And it is set up in such a fashion that as you go around the circle-- I should say disk, I suppose-- you change the hue, of course, and as you go from the center to the periphery, you increase saturation. This is not a perfect display, especially because the projector isn't perfect, but the center is supposed to be white, and the whole thing is fairly equiluminant. So you go from unsaturated to saturated. Now, I will already point out another very important factor in appreciating the beauty of the color circle: the analysis of afterimages. It was found that if you adapt the eye to something yellow-- to this wavelength-- and then shift to white, you get an aftereffect that is blue. And if you do it for red, the aftereffect is green. The same holds all the way around the circle: adapt to this one, and the aftereffect is over here. So the color circle perfectly predicts what your afterimages will be. These are due to adaptation, which occurs as a result of selectively bleaching the molecules in the various cone types-- the three cones, red, green, and blue. So that's the basic rule of the color circle, which can be used extensively. And I think you can have a lot of fun studying this in your off time-- of which you don't have too much, I'm sure. Since this course is rather heavily fact-oriented, I want you to remember the basic facts I listed before. As I just noted, along the color circle you have three attributes: hue, brightness, and saturation. And then I also mentioned that there is a distinction between the psychological and the physical attributes of stimuli. I gave you an example of this: when a tree falls in the forest, is there a sound when there's nobody around? And the answer is a definitive no. Why? Because sound is a psychological attribute.
If, on the other hand, you had asked whether the falling tree produces pressure waves in the range of hearing, then the answer, of course, is yes. But if you say sound, that's something you hear; it's not a physical thing. And that applies to many aspects of vision, as well as to audition and many other senses: you must make a distinction between your psychological experience and the physical facts. The next point is that we have three types of cone photoreceptors-- short, medium, and long wavelength-- and the rods have yet another wavelength peak. All of these are broadly tuned, which enables mechanisms in the brain to examine the relative amounts of activity across the three cone types. That, in turn, enables you to perceive many, many other colors, partly because of color [INAUDIBLE] and partly because of the varying balance of activity among the cones. Then I mentioned Grassman's laws. Every color has a complementary which, when the two are mixed properly, yields gray. That again has to do with the color circle: if you mix yellow and blue in equal amounts, you get white-- or gray, I should say-- and the same is true for anything that's an opposite. But if you mix things that are not at opposite ends of a diameter, you get an in-between color: mix yellow and green and you get yellowish green. All right, so those are Grassman's laws: non-complementary colors give you intermediates, and complementary colors give you gray. And again, to make sure you understand, complementary means pairs that lie opposite each other on lines that pass through the center of the color circle. We move on to Abney's law, which is not very important, and you don't have to remember it: the luminance of a mixture of differently colored lights is equal to the sum of the luminances of its components. That's another fact, but you don't need to know it. The last thing I want to mention is the so-called metamers, which are stimuli that look the same but are the product of different compositions of wavelengths. Because we have only three different cone photoreceptors, you can, in a sense, fool them a little bit by very carefully mixing different wavelengths so as to activate the cones equally. That is what's called a metamer: two stimuli between which you can't tell the difference-- they look identical even though their wavelength compositions are different. OK, now another factor I should note when it comes to color vision: when you look at the response characteristics of cells in the retina-- I'm talking about retinal ganglion cells-- and of cells in the geniculate, what you actually find is just a few major categories. This is a color circle here; one presents stimuli from around the circle and sees how the cell responds. And what you see here are examples: a blue ON cell, a green OFF cell, a yellow ON cell, and a green ON cell. Now, it turns out that if you record from hundreds and hundreds of cells, you only ever get these categories-- you never get cells tuned to the diagonals. So to see the diagonal colors, something has to take place in the cortex on the basis of what comes in from the retina and the lateral geniculate nucleus. Now, when we come to adaptation-- we talked about that quite a bit.
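Grassman's complementary rule falls out naturally if you treat the color circle as a vector diagram-- hues as directions, saturation as length, gray at the center, and mixing as vector addition. The angle assignments in the sketch below are arbitrary illustrative choices, not the lecture's display:

```python
import numpy as np

# Toy encoding of the color circle: a hue is a vector at some angle,
# saturation is its length, white/gray is the origin, and mixing two colors
# is vector addition. The geometry is the point, not the angle choices.

def color_vec(hue_deg, saturation=1.0):
    theta = np.deg2rad(hue_deg)
    return saturation * np.array([np.cos(theta), np.sin(theta)])

yellow = color_vec(90.0)
blue   = color_vec(270.0)          # complementary: opposite on the circle
green  = color_vec(180.0)

mix1 = yellow + blue               # complementaries in equal amounts
mix2 = yellow + green              # non-complementaries

print("yellow + blue  -> length", round(np.linalg.norm(mix1), 3))
# length ~0: the mixture sits at the center of the circle, i.e., gray.
print("yellow + green -> angle",
      round(np.degrees(np.arctan2(mix2[1], mix2[0])), 1), "deg")
# angle 135 deg: midway between yellow and green, i.e., yellowish green.
```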
Adaptation also connects with afterimages, which I'll come back to in a minute. It was shown in a very nice experiment that you can take a cell, adapt it to various levels of overall illumination, and then stimulate its receptive field and see how it responds. What you find-- this is the same cell, across background illuminations spanning a huge range of five log units-- is that once the cell has adapted, it responds always the same. So it's not looking at overall levels of illumination; it's looking at differences in illumination. It's looking at contrast. Now, how many of you remember the formula for contrast? Anybody? All right, I think that's a really good thing to remember-- I'm sure when you go to a party, people will be fascinated that you know the formula for contrast. OK, so: you take the stimulus, call it x, and you take the background, call it y. You subtract one from the other, you divide by the two added together, and you multiply by 100. So contrast = 100 (x - y) / (x + y). What this formula means is that it applies at endless levels of overall illumination-- you can use it in sunshine or in moonlight-- because you're looking at the difference between the stimulus and the background. AUDIENCE: What are x and y? PROFESSOR: As I said, x is the illumination level of the target. Suppose you take a cell and shine a spot of light on it like that; then you remove it and measure the background level. So x is the visual stimulus, and y is the background. Now we talk about light adaptation. Again, I want you to know a few basic facts. The overall range of illumination levels is close to 10 log units. In contrast to that, if you just look at reflected light, it varies over a much smaller range, because under very bright illumination conditions even a black object reflects some light-- so you have to distinguish direct light from reflected light, and with reflected light you get a smaller range. Now, the pupil plays a role in controlling the amount of light getting into the eye, but it can only do so over a range of 16 to one. Because of that, the major burden of adaptation falls on the photoreceptors-- your rods and your cones. And the way that works, if you remember, is that you can think of the molecules in your photoreceptors as existing in two forms, bleached and unbleached. Because of the millions and millions of them that I told you about-- 1,000 times 10,000 in just a single photoreceptor-- there is at any moment a relative percentage of bleached and unbleached molecules in each cone and in each rod. During dark adaptation there's a huge increase in the unbleached molecules, and during light adaptation an increase in the bleached ones. So any increase in the rate at which quanta are delivered to the eye is accompanied by a proportional decrease in the number of pigment molecules available to absorb those quanta. Retinal ganglion cells are thus selectively sensitive to local contrast differences, not absolute levels of illumination-- I've said that many times over-- and that's why this contrast formula is the most useful way to characterize the input these cells are getting. So that is the story of light adaptation. Now let's move on to depth perception, one of the most intriguing capacities we have, since our retinae are essentially two-dimensional surfaces.
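Before we leave contrast for depth perception, that formula is worth writing down in code form, since it makes the invariance to overall illumination obvious:

```python
def contrast(target, background):
    """Local contrast in percent: 100 * (x - y) / (x + y).

    Only the ratio matters, so the same target/background pattern gives
    the same contrast at any overall illumination level.
    """
    return 100.0 * (target - background) / (target + background)

# The same 2:1 pattern under dim and bright conditions -> identical contrast.
print(contrast(2.0, 1.0))        # 33.3...
print(contrast(2000.0, 1000.0))  # 33.3...
```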
So whatever comes onto the retina, some mechanism has to be able to tell you where things are in depth. And because it's such a complex problem, quite a number of different mechanisms have evolved to make that possible. First, we have oculomotor cues-- we don't need to talk about those here. Then we have visual cues, which divide into binocular cues-- stereopsis, which we talked about quite a bit-- and monocular cues: motion parallax, shading, interposition, size, and perspective. All of these cues we can utilize to tell us where things are in depth. So now, looking at stereopsis: I've handed out some of these autostereograms to you. You can't see the effect looking at the projection here; you have to do it with the sheets I handed out. Then you can see something in depth. This arises by virtue of the fact that the stimuli are arranged in such a fashion that they selectively activate neurons in the visual cortex that code depth through disparity inputs from the two eyes. I should add one more thing about stereopsis-- I think I've mentioned it: in fact, about 10% of the population in the United States lacks stereopsis, in most cases due either to misalignment of the two eyes or to amblyopia, meaning one eye doesn't see as well as the other. But those people can still do many things, and they judge depth quite well-- they can't thread needles, but they can do coarser depth tasks quite well. One reason for that is motion parallax. Now, the basic rule about motion parallax-- the rule that caused the brain to evolve the ability to analyze it-- is that when objects lie at different distances from the eye and the eye moves, the objects that are closer move over a greater range on the retinal surface than those that are farther away. You can see that with the green versus the red here. The system has therefore evolved to detect small differences in the relative motion of objects on the retinal surface. I showed you an example of that, and I'll show it again because it's fun. This is essentially similar to the random-dot autostereogram, except it's just a single set of random dots. As soon as I set it in motion, you see it in three dimensions beautifully, because these dots move over a greater range than those, and those move even less. This differential motion compels you to see depth. It's quite a remarkable ability-- monkeys are even better at it than we are, and even fish have this capacity, as do many, many other species. It's that central to our ability to process depth. OK, studies have been carried out to determine where and how these cues are analyzed. As for where, here's an example comparing brain activation in normal and stereo-blind subjects, presenting either motion parallax alone or stereo alone-- and when you present the stereo display monocularly, you don't see depth, and the corresponding areas are not active. So this tells you which parts of the brain are active in the analysis of stereopsis and which in the analysis of motion parallax; I showed you this picture and several others identifying those areas. The limitation is that this approach can tell you where things take place in the brain, but not how. Because of that, many studies have been carried out doing single-cell recordings in these cortical areas, and it was discovered that there are indeed cells already in area 17 that receive disparity inputs from the two eyes.
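The geometry behind "closer objects sweep farther" is simple: for an observer translating sideways at speed v, an object at distance d (viewed roughly perpendicular to the motion) moves across the retina at about v/d radians per second. A quick sketch with made-up numbers:

```python
import numpy as np

# Motion parallax arithmetic: angular velocity on the retina is roughly
# v / d for an object at distance d and a sideways observer speed v.
# The walking speed and distances below are invented for illustration.

v = 1.5                                 # observer speed, meters per second
for d in [1.0, 4.0, 16.0]:              # object distances in meters
    omega = v / d                       # radians per second
    print(f"object at {d:5.1f} m -> {np.degrees(omega):6.1f} deg/s on the retina")
# The near object sweeps an order of magnitude faster than the far one --
# exactly the differential motion the visual system reads as depth.
```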
There is beautiful work by [INAUDIBLE] showing this, establishing that already in area 17 you have neurons that tell you about stereoscopic depth. And then it was also discovered-- especially in area MT, and to a lesser extent already in V1-- that you have cells that respond to differential motion. Those cells are presumably involved in processing depth information based on motion parallax. Now, another mechanism we talked about is shading. The fact that light comes from above-- from the sun-- has been incorporated into the visual system to tell you whether an object protrudes toward you or recedes away from you. Here is an example: in this figure the light part is above and the dark part below, and here it is reversed. Because of that, you see this one as protruding and this one as receding. I showed you several examples, some in the handout, of the fact that shading is a cue used quite extensively in depth perception. Now we come to form perception. I'll talk about this briefly. I mentioned three kinds of theories. One is that form vision derives from the fact that neurons in V1 respond selectively to line segments of different orientations. Another theory was that there is a spatial mapping of the stimuli onto the visual cortex, since you have topography. And the third is that form perception is accomplished by Fourier analysis. We talked about each of these. And I pointed out to you that even when there are no oriented line segments in a display, you can still see and identify faces quite well, as in the Wall Street Journal, where pictures made of dots appear every day in the paper. Now, if you move on and look at the cortical layout-- this is a monkey cortex here, and this is the visual field-- if you present these three identical stimuli in the visual field, this is the area activated in the cortex, because more area is allocated to central vision than to peripheral vision. And so you might say, oh, this one is much bigger than those-- but that's not the case; you can tell the stimuli are identical. Even more dramatic is what happens if you put up these three disks centered on the fixation point: half of each goes to the ipsilateral and half to the contralateral hemisphere, so what you get in cortex is a bunch of half circles like that, and it doesn't look anything like the stimulus. So the idea that images are somehow laid down in the visual cortex and the mind then looks at them is totally wrong-- wrong to the extent that it's ridiculous. The Fourier analysis theory, on the other hand, is accepted by some people, and computer analyses have revealed that the system can indeed be mimicked extremely well based on what we know about the organization of the visual cortex: it has all the basic attributes you need-- orientation, direction selectivity, and phase-- to break down the visual scene in an analytical fashion, which is kind of foreign to our intuitions, namely Fourier analysis. Then we spent some time talking about prostheses, which you're going to hear quite a bit more about when Chris Brown lectures, because prosthetics have been so successful in the auditory system with the cochlear implant-- a remarkable achievement. Many more than 50,000 people in the United States by now have cochlear implants, and they can converse and do all kinds of remarkable things. We don't have anything like this in the visual system.
And I've mentioned to you that one of the big differences is that in the retina there are more than a million fibers in each eye coming from the ganglion cells and projecting into the brain, whereas in the auditory system you only have about 30,000 fibers-- so the magnitude is much smaller. But there are other factors as well: the retina is a very difficult structure to work with, and when people become blind, in most cases the retina degenerates, so in most blind people you can't put a device into the eye very effectively to create vision. Another alternative is to try other regions, and some people have advocated doing this in the visual cortex. The problem there is the huge magnification factor. If you put 256 stimuli like this in the visual scene, this is the actual physical activation in cortex. Once you know what that layout is, you can put in electrodes spaced accordingly, and if you were to stimulate those, you would create an image at least moderately similar to this one-- in somewhat washed-out colors, perhaps, but still essentially a square. If you then take a camera and connect its input selectively to this proportionally spaced implant, and you put in the words fiat lux-- remember what that is? Let there be light-- you get a pretty good reproduction of what was put in. By contrast, if you take an array of equally spaced electrodes and activate all of them, you would get a butterfly-shaped image, and if you then fed fiat lux to the camera, it would look like that. That means the person would get a quite false impression of the world and wouldn't even be able to read. So it is very important to take into account the functioning of the visual system, as well as the functioning of individual neurons, if you are going to create a prosthetic device. So now we will move on, and I will say a few words about illusions. We talked about quite a few illusions, and you got some of them in the handout. The one I mentioned that I think all of you enjoyed is the Hermann grid illusion, which produces smudges at the intersections. The famous theory advanced by Baumgartner is that it's due to the fact that a cell whose receptive field is centered at an intersection is inhibited more than one centered between intersections. This hypothesis was accepted by many people a few years back, and it has appeared in many, many textbooks. It turns out the theory is all wrong, if you remember: first of all, if you make just a small change in the physical layout of the lines, you don't get the effect at all, even though for a cell here, as opposed to here, the arrangement is still the same. So consequently, that theory is wrong. And it's further disproven when you analyze the actual numbers of cells in this region-- this is for parasol cells, and this is for midget cells. You have a huge number of cells; this is shown only for the ON cells, and you can double it for the OFF cells. In this teeny area here, five degrees from fixation, you would activate 365 midget cells and 50 parasol cells, half of which would be ON and half OFF. So that theory is just incorrect, and alternative ideas have been developed that are still somewhat under debate.
One such idea is that the illusion arises because of the simple cells in the visual cortex, as we have discussed. Now, another set of illusions we talked about are the aftereffect illusions. In the experiments we did informally here, you look at a particular display, fixate it for a while, and then the display is changed, and you get an aftereffect, right? A very dramatic aftereffect. One of those was the rotating dots in the circle, which I showed you. The experiment was that you adapt to it with one eye and then look at the display afterwards with the other eye-- and then you get no effect, which proves that this takes place in the retina and is due to the adaptation occurring in the photoreceptors. All right, so those were the so-called interocular-transfer experiments we had discussed. So now let me move on and talk some more about the deficits in vision that arise as a function of lesions. I already showed you a whole set of those when we talked about lesions of the midget and parasol systems. Looking at this in more detail, we now add what happens when you remove V4 and when you remove MT. It's quite striking that the deficits with a V4 lesion are far, far less than when you take out the midget system: you get only very mild deficits with V4 lesions on most of these basic visual capacities up here. But MT lesions do give you pretty much the same deficits as a magnocellular lesion that blocks the parasol system. Then, when higher-level visual capacities were analyzed-- I showed you those as well-- some dramatic deficits did appear with V4 lesions: when monkeys had to choose the lesser of two stimuli, or had to learn new visual percepts, they had severe deficits with a V4 lesion. So that suggests that an area like V4 plays a very important role in higher-level visual processing. Yes? AUDIENCE: What does "pronounced" mean? Is that-- PROFESSOR: Pronounced? It means a strong deficit. AUDIENCE: So more than severe? PROFESSOR: No, no-- you can see it by the color as well. Severe is the strongest, pronounced is strong, moderate is weaker, and mild is mild. [CHUCKLES] OK, so now I want to turn to eye movement control. And here I want to remind you that there are many cortical as well as subcortical areas that play a significant role in eye movement control. One way to test this is to electrically stimulate various regions in the brain and see whether you get eye movements-- and you do, in many areas. The ones we have here are the superior colliculus, which of course we talked a lot about, V1, LIP, the medial eye fields, and the frontal eye fields. Now, in all but one of these you get a constant vector at any site where you stimulate, meaning that no matter where the eye is looking when you stimulate, you get a particular saccade vector, as depicted here. The exception is the medial eye fields, where you have a place code: the result of stimulating any given site is to bring the eye, no matter where it starts, into that site's motor field. Different sites within these areas, obviously, have different vectors. So that's the basic layout. Then the question arose: how do these signals get down to the brainstem oculomotor complex that drives the eye muscles we talked about? Well, the experiment that was done was to remove the superior colliculus.
And when that was done, what you found was really quite dramatic: you could no longer elicit eye movements from the posterior part of the cortex, but you could still elicit them from the anterior part. This led to the idea that there are two major systems for saccadic eye movement generation-- the so-called posterior system and the anterior system. And when people then asked what these two systems do, it was discovered that the posterior system is very important for generating quick saccades, especially express saccades: when you remove the colliculus, you never get an express saccade again. The anterior system plays a very important role in stimulus selection and in the sequencing of eye movements-- because you make so many eye movements in rapid succession, you have to plan ahead and decide where you're going to look in a sequence. That was found to depend heavily on the frontal eye fields: when you remove them, there is a major deficit in target selection and in sequencing. Then, if you remember, we also examined the role of these various areas when you block or enhance inhibition, using muscimol and bicuculline, as shown here. With V1, you get strong interference with both, and you also get a strong deficit in visual discrimination, because to analyze the visual scene you need the interplay of excitation and inhibition-- both for eye movements and for visual discrimination. With the frontal eye fields, you got facilitation, as you did in the colliculus, when you put in bicuculline, which eliminates inhibition-- the monkey couldn't help but make saccades-- but you got interference with muscimol. LIP injections had no effect. So that, in summary, is what we had discussed; it is something, of course, that you need to go over again in your notes, on Stellar, and in the assigned readings, so that you remember it for the exam. OK, and then I pointed out to you that even though we never consciously think about eye movements, an incredible number of structures and tasks are involved in making each one. We have to select a target-- every time you move your eye there are dozens of candidate targets, and we have to pick one of them. We have to decide what the targets are, and which one to look at and which not. And then we have to use this system, with its spatial organization of motor fields, to actually generate an eye movement. In reality, as I showed you before, many other systems are involved in generating eye movements-- hearing, touch, and so on-- and all sorts of circuitry has developed to enable you to do this: the so-called anterior and posterior systems that reach the brainstem through various channels here. This is available to you on the internet, and also in the assigned readings. So now, lastly, we turn to motion perception. When we talked about motion perception, I pointed out that in area V1 we have simple cells and complex cells-- several different classes of simple cells-- and almost all of these cells, if you look at their responses to light increment and light decrement, meaning light edges and dark edges, are direction selective. And that's true for many complex cells as well.
About half the complex cells, maybe more, are also direction selective. So direction selectivity is one of the most central features of the visual system, used extensively not just to analyze motion but also to see depth by virtue of motion. So now we can say, because of all those little experiments I showed you and because of the lesion experiments, that the parasol system plays a central role in motion analysis. And when we did those experiments with apparent motion-- moving little spots differing in color or slightly in shape-- color and small differences in shape didn't matter, indicating that the parasol system plays a central role in our seeing apparent motion the way we do. All right. Now, lastly, I want to say a few words about the accessory optic system, because this is what you're going to be writing about. I just want to remind you that the basic discovery was that the retinal cells of [INAUDIBLE] that feed into the accessory optic system come in three different direction selectivities, as shown here, and that these three direction selectivities correspond to those of the semicircular canals. That's quite a remarkable discovery. These cells, by the way, respond only to slow movements, and the system's prime function-- so the authors claim, and I think correctly-- is to stabilize the eye with respect to the world. So when you walk around, you can still see the world very clearly, with no blurring, because the accessory optic system adjusts the eye to keep it stable with respect to the world out there. In fact-- I can't remember if I told you this story-- way back when, in Germany, some people being treated for pneumonia were given a drug-- I forget its name right now-- that caused malfunctioning of the semicircular canals. As a result, that system, which works together with the accessory optic system, was no longer able to stabilize the eyes. And so here was this fellow in Munich, living in a neighborhood where he had lived for many, many years, and he realized that he couldn't see anything clearly while walking down the street-- everything was blurry. He said, oh my god, I won't be able to recognize my friend; I won't be able to say hello to him. So what he learned to do was this: "Hi, Joe"-- like that. He stabilized his head by holding it. That highlights the fact that this system of stabilizing the eye through the accessory optic system plays a central role in your being able to move around in the world and still analyze the visual scene, in spite of the fact that you're moving. So, the last thing I want to show you-- well, first I'm going to show you one more picture, but before that, let me say a couple of words about the exam again. As I've told you, the exam is a multiple choice exam, probably about 100 or so questions. Almost all of them deal with basic facts, so you've got to know your facts. What you need to do is read each question and circle a choice. You don't get punished extra for being wrong-- if you're wrong, you're wrong, but I'm not going to subtract wrong answers from right answers on top of that. So choose an answer for every question, even if you don't know it; you'll have a probability of one in four, maybe one in five, of getting the right answer even if you're totally ignorant. So that's what the exam is.
It's going to take about an hour or so, hour and a half maybe, depending on how fast you read and how fast you make decisions. And that's going to take place this coming Wednesday right in this room. Now the last thing I wanted to show you is-- I mean, I seem to be so certain about everything being right and wrong here. I just wanted to tell you one thing, a note of caution. And the caution is this one here. This is a wonderful sculpture by Naum Gabo. I don't know if you've ever heard of Naum Gabo. Anyway, this is obvious-- you can almost instantly say it's an upper body and a face, right? But the fact is that, as I say here, many scientific hypotheses of brain function are appealing but a far cry from the real McCoy. So we are still groping. And yes, we are a long way from phrenology, but still many of the hypotheses and ideas that we have are wrong and are more like a cartoon of what it really is like. And of course, the further up you go from the retina, in my opinion, the higher the fancifulness of the ideas. At least when it comes to the retina, I think we are reasonably comfortable that we know a lot about the photoreceptors and how they interact. And that's fairly close to the way it really is. So that may be more like a photograph of Obama. But when it comes to the cortex and the higher areas, things are a bit more like that. So that's the end of it then. Thank you very much. And I wish you the best of luck on your exam on Wednesday. [APPLAUSE] Oh, thank you. Thank you. That's very nice. I appreciate it.
MIT_904_Sensory_Systems_Fall_2013
20_Sound_localization_1_Psychophysics_and_neural_circuits.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Last time we were talking about descending auditory pathways and the brainstem reflexes, which are, in some sense, part of those descending pathways. And especially how they protect our sense of hearing from damage. So I had a question about subway noise on the Red Line in Boston. And I was asking around at my hospital and nobody seems to have measured it. But I saw online that there were some measurements of subway noise in New York City. And that it can actually be damaging if you get exposed to it for long enough. So there's a little news clipping about subway noise. So I don't ride the Red Line very often. I ride a commuter train from Lincoln, where I live, to North Station. And when that train comes in and puts on its brakes, it's incredibly shrill. And I always plug my ears. And about half the people do and the other half have hearing loss, I guess, or whatever. It's not really a laughing matter. So any questions from last time? Of course, the brainstem reflexes we were talking about are also good for other functions, like reducing the effects of noise masking, allowing selective attention, those types of functions. I also have experimental evidence in support of them. So if there aren't questions from last time, we are going to shift gears a little bit and talk about something completely different, which is sound localization. And so today's roadmap is titled-- headlined-- "Sound Localization." And we're going to be talking about the kind of localization using binaural cues, where you have two ears. And because you have two ears and because sound sources are located off your midline for the most part, we have some very prominent cues called Interaural Time Differences, ITDs, and Interaural Level Differences, ILDs. OK, so we'll talk about what those are, how big they are. We'll talk about performance for localizing sounds in humans. How good are we at doing that task? We'll have some demonstrations. One of them I'm going to give you in the room here and the other three demos we'll have to listen to in headphones because we want to use just one of these cues, interaural time or interaural level differences. And for the most part in a room, in an ordinary environment, they're mixed up together. They come together. But using headphones we can isolate and play just one or the other. Then, we'll launch into the neural processing of one of these cues, ITDs, in a part of the superior olivary complex called the Medial Superior Olive, or the MSO. And so just a heads up here, last lecture we were talking about the neurons in the olive called the medial olivocochlear neurons. Those are completely different. They have nothing to do with sound localization. So those were the MOC neurons. And today, we're talking about MSO neurons. These are completely different. They're neurons in a major part of the superior olive called the medial superior olive because it's in the medial part. And then finally, we'll end up with a discussion of the assignment. We have a written assignment for audition that's due in a few weeks. But it is based, in large part, on today's lecture. So it talks about the model of neural processing in the MSO. And another heads up. 
I've added a tiny little bit to the end of the assignment. And we'll talk about what I've added. And that revised assignment is now posted on the course website. We'll talk about that at the end of today's class. So sound localization using binaural cues. What are the two cues? Well, they're the interaural time differences and interaural level differences. And this subject, a cat, is listening to this sound source at the black dot. The sound source is emitting sound. And because sound doesn't travel instantaneously through air, the sound located off to the right of the subject strikes the subject's right ear first, and then there's a little bit of time before it gets around to the left ear, which is pointed away from the source. OK, so if this sound source is emitting a click as diagrammed here, this y-axis could be the sound pressure level at the subject's right and left eardrums as a function of time. This is the time axis. So obviously, the sound source is off to the right. The right ear is going to receive the sound pressure first. And then a little bit later, the left ear will receive the sound source. I mean, will receive the click sound. If the sound emitted is, instead of a click, a continuous wave, like a sinusoidal wave, it'll be delayed the same amount right versus left. So here's the right ear sound pressure level, and then here is the delayed version that appears at the left eardrum. Just a delayed version. So how much is the delay? Well, it depends on things like the velocity of sound in air. Sound in air. I think we had this in the very first lecture. 340 meters per second in air. Now, it depends a little bit on the temperature of the air. And it depends a little bit on the barometric pressure, but these factors change it by less than 1%. So about 340 meters per second in air. So knowing how many meters between the right and left ear, you can calculate for various positions of the sound source the interaural time difference. Now, if you have a big head, so that the left and right ears are separated very far apart, obviously you're going to get a bigger interaural time difference than if you have a tiny, little head, like a mouse or a bat. The smallest animals have very close eardrums, so they have very small interaural time differences. OK, now a couple other things I should say about both of these cues, interaural time differences and interaural level differences, is they help us detect where the sound is coming from in the azimuthal plane, which is the horizontal plane. So here's a schematic of a person's head with the left ear. This is the front of the head. And the right ear is behind, so you can't see it. And so azimuth is in the horizontal plane. And that's where interaural time difference changes as a function of position. The other plane perpendicular to that is the plane of elevation. So for me, going straight up and straight down is the elevation of the sound source. And you can imagine if there's a sound source straight ahead, it's going to strike my two eardrums at the same time because the path length from the source to the eardrums is the same. And if that sound source moves up from being straight ahead to, say, elevated above straight ahead, it's still in the same place relative to the eardrums. The path from the sound source to the two eardrums is going to be the same. So the ITD does not change as a function of sound source elevation. So these binaural cues that we're talking about here do not change as a function of elevation. 
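To make that geometry concrete, here is a minimal Python sketch of the ITD calculation, assuming a rigid spherical head of radius 8.75 cm and the Woodworth path-length approximation, ITD = (r/c) * (theta + sin(theta)); both the radius and the formula are common textbook assumptions rather than values given in the lecture.

import math

def itd_spherical_head(azimuth_deg, head_radius_m=0.0875, c=340.0):
    # Woodworth approximation for a rigid spherical head:
    # ITD = (r/c) * (theta + sin(theta)), theta = azimuth in radians.
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

for az in [0, 1, 10, 45, 90]:
    print(az, "deg ->", round(itd_spherical_head(az) * 1e6, 1), "microseconds")

For a source at 90 degrees this gives roughly 660 microseconds, in the same ballpark as the maximal human ITD discussed below; at 1 degree it gives about 9 microseconds, which previews how small the just-detectable time differences will turn out to be.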
So how do we detect the change in elevation of a sound source? Well, we talked earlier in the course, I think in the very first lecture, about the so-called "pinna" cues. And so we have these external ears, the pinnae. And they help us greatly in detecting sounds that differ in elevation because they put onto the sound spectrum some very prominent peaks and nulls. And those peaks and nulls change as a function of sound elevation. So go back to the very first lecture that I gave you and review that. And we read a paper where if you distort the pinnae by putting little plastic or clay ear molds in your pinnae, your detectability of sound sources that change in elevation goes to pot. You can't do it anymore. But if you go out and re-experience sound with those little distortions of your pinnae and come back in a few weeks, you can re-learn how to detect changes in sound source elevation. And on those kinds of things, two comments. Number one, you don't need two ears. You can detect change in sound source elevation with just one ear, because one ear has a pinna and it works just fine in changing the spectrum from one ear. And number two, your sensitivity to small changes in sound source elevation is not very good. So if you change the sound source elevation by 10 degrees, most people cannot detect that change. The minimum changes in sound elevation that are detectable are more like 30 degrees, which is a pretty big change. We'll see that, using the binaural cues to detect changes in sound source azimuth, we're down to about one degree. So they're much better. You're much more accurate in terms of localizing sound in azimuth than you are in elevation. Secondly, I should have mentioned here that when we had this time delay for the ongoing signal, like a sinusoid wave, that is equivalent to a phase difference. The phase is what engineers use to define where and when a sinusoidal source starts and ends. So engineers talk about a sinusoid going through a 360 degree phase for one cycle. And so this sinusoid is delayed about a quarter cycle relative to the right ear one. And so it has a phase lag or a phase difference of about 90 degrees. So you can talk about interaural phase differences for continuous waveforms, like sinusoids. And you talk about them in terms of the number of degrees of phase difference. And of course, to convert from one to the other you need to know the frequency that you're talking about. Because if you're dealing with a high-frequency sinusoid that goes back and forth a lot of times, you have to know that to calculate the interaural time difference. But some of the demos that we'll listen to will quote the interaural delay in terms of phase. An interaural phase difference. Now, the second cue is interaural level difference. And here's the same subject. Here's the same sound source. The sound is coming to the subject's right ear and it's a high level there because there's a direct pathway. But the sound to get to the left ear of the subject has to go around the subject's head. And sound does bend around a wall. So I can go out in the hall and I can say, class, can you still hear me? Yes, you can still hear me, right? The sound is bending around. Of course, some of it's reflecting. Sound bends around, but some sound bends around more easily and more effectively than others. And obviously, this sound bent around. There's still sound at the subject's left ear. This should actually be delayed because it took longer. 
But the purpose here is that it's lower in amplitude, and it's not vanishingly small. So there's less sound over the subject's left ear. And this is the interaural level difference. This interaural level difference, as we'll see in just a minute, depends greatly on sound frequency. Very low-frequency sounds, because they have a long wavelength relative to the object they're bending around, bend around very well because of their physical characteristics. And so the interaural level difference, as we'll see, is almost 0 for low frequencies. For very high frequencies, there's a big, so-called sound shadow. They don't bend around an object the size of the head very easily. And so there's a big interaural level difference for high frequencies. So we think of these cues then as being very important for high frequencies. So let me just note that down here. So ILDs-- I'll use a different color. Important. And let me show you some data to support that then. These are some data taken for the two cues for a human head. And how do we get these data? Well, we can take two small microphones and put them down in our ear canals very close to our eardrums. And we can measure the time difference and the level difference. And so if you were to do this, you'd seat a subject in an anechoic room. An anechoic room is one that doesn't have any echoes coming off the walls and the ceilings. So that the subject just experiences the direct sound from whatever sound source. And the sound source we're going to move around. So this x-axis is the sound source position in angles from directly ahead. So directly ahead is 0. And higher angles are off, let's say, to one side. And directly off to the side is 90 degrees. And then behind the subject is greater than 90 degrees. And directly behind the subject will be 180 degrees. So this is degrees of azimuth from directly ahead. This top graph shows you the interaural time differences and the bottom graph shows you the interaural level differences. So how do we compute time from the microphone signals? Well, we simply take our microphones and run them to an oscilloscope. So here's the head. Here's microphone 1 in the ear canal on the left side. Here's microphone 2 in the ear canal on the right side. You send the wires from those two signals to an oscilloscope. The top channel will be the signal coming from the left side. The lower channel is the signal coming from the right side. And this device, the oscilloscope, plots the voltage, which is equivalent to the pressure, as a function of time. And we can measure the time when this sound starts and the delay, or interaural time difference, when the second ear starts. So it's very simple to measure. Now that said, it's also very simple to assume that the human head is just a sphere, and then it's simple to compute, knowing the distance between the two ears and the angle of the sound source. And I think this solid line is assuming it's a sphere. And the dashed line with x's are the experimental data. The data are pretty close to the assumption that the human head is a sphere for interaural time differences. And here's the ITD plotted as a function of the angle. As you would expect, if the sound source is straight ahead-- that is, 0 degrees angle-- the ITD is 0. The sound takes the same time to get to the two ears because it's straight ahead. Now as you move the sound over to one side, it's going to get to one ear first and the other ear a little bit later. 
And so the ITD becomes bigger than 0. And it goes up. And the maximal ITD is, as you would expect, when the sound is directly off to the side, which is at 90 degrees. And what are the units here? You probably can't read them, but they go from 0 to 0.6. And those units are milliseconds. So the ITDs go from 0 to 0.6 milliseconds for the human head. So a millisecond is a thousandth of a second. It's pretty small. It's less than 1 millisecond for the maximum ITD. And so you can quote this in terms of microseconds as well. So this would be a total, a maximal ITD of 600 microseconds. OK. Now, how about interaural level differences? They are in the lower panel. And these are all measured values. They're not computed. I don't know why; you could compute them easily. Maybe it's because of the pinna. The pinna introduces another piece of complexity that's a little bit different from the head being an absolute sphere. So these are measured. And these are, again, plotted as a function of the azimuth. 0 degrees is straight ahead, 90 degrees is off to one side, and 180 degrees of azimuth is behind the subject. And, as I said, the ILDs depend on frequency. So these are separate plots for a bunch of different frequencies. Down at the bottom is a very low frequency, 200 Hertz. This is our standard center of the human hearing range, 1,000 Hertz. And this is a very high frequency at the top, 6,000 Hertz. And as you can see quite clearly, at 200 Hertz, like I said before, this frequency of sound bends around very nicely for objects the size of the human head. And so even if you have the sound source all the way to the right, the ILD for 200 Hertz is 0. You don't have an ILD, essentially, for such low frequencies. For our mid-human hearing frequency, the ILD starts out at 0, straight ahead, and climbs to perhaps 6 or 8 dB as the sound source moves off to the side. It's not behaving like a perfect sphere. For a perfect sphere, the ILD would go up and be maximal at 90 degrees. So maybe this is the effect of the pinna. This is not doing that, but it's certainly climbing. The biggest ILD is found at these highest frequencies. So for 6,000 Hertz, the ILD climbs from 0 up to an ILD of 20 dB as the sound is located off to one side. And that's a huge cue for localization of sounds. So maybe you can remember back to the very first of my lectures, where we had the decibel demonstration. We changed decibel levels. I think this was a noise that we presented in changing steps of 6 dB. And it was clearly obvious when you change-- this is now binaural level-- at 6 dB. 3 dB changes in noise level were obvious. A 1 dB change in noise level as we stepped through was obvious after you'd gone through a few steps. But maybe from one to the next step was not obvious. Clearly, a 20 dB difference between the two ears is a huge cue. How can we confirm that that's a huge cue? Well, we can take these cues and present them through headphones. And it's very easy to set up a circuit so that you have a signal coming into the two headphones. And in one channel, you put a device called an attenuator that attenuates the signal. So why not use an amplifier that amplifies the signal? Well, amplifiers are active devices. And if you amplify sounds, inevitably you add a little bit of distortion. So it's easier to just attenuate the sound. Attenuators are passive devices, and you can cut the sound to one channel 20 dB and it sounds like a huge effect. And you say, oh, wow. That sound disappeared in one ear. It sounds like it's coming from the other. 
You can cut the sound by 6 dB, a huge effect. Cut it by 3 dB, a big effect. And you cut it by 1 dB and the person says, well, before, when the sound was the same in the two ears, the sound was straight ahead. When you changed it 1 dB, it sounded like it just moved a little bit from straight ahead. So it turns out that 1 dB is not only our just detectable change in level when we present sounds to the two ears, but also when we vary the interaural level difference. So 1 dB is our just noticeable difference for ILD. OK, we can play that trick with headphones. Do good tests on that. So clearly, these cues are very important at high frequencies because they're the salient cues. What about at low frequency? Since we don't have an interaural level difference, you would assume that we're going to use interaural time differences at low frequencies. And you'd be correct. Much of the evidence suggests that ITDs are used at low frequencies and ILDs, because they're big at high frequencies-- those are the cues used there. And why don't we use ITDs at high frequencies? OK, there is nothing in the physical characteristics of sound. For example, the sound velocity doesn't depend on frequency. It's constant no matter what the frequency of the sound. So you're going to get the same ITD for low frequencies and for high frequencies. Well, if you think about it a little bit, there is a reason why high-frequency ITDs are less useful. At some frequencies, this sound waveform is going to go back and forth so quickly that it might hit the right ear, and then by the time it leaks around to the left ear, it goes through a complete cycle. Then, what are we left with? We're left with right and left looking exactly the same. There's a big time difference, an interaural phase difference of 360 degrees. But we'll never be able to perceive that because the sound is, again, the same at the two ears. And it turns out, of course, that depends on how widely spaced your ears are. For the size of a human head, the phase goes completely through one cycle at a frequency of 1.6 kilohertz. And that's pretty centered in your hearing range. 1 kilohertz is about the middle of your hearing range. 1.6 is pretty close. So above 1.6 kilohertz, ITDs become ambiguous. And so we think of that as the point where ITDs fall out and ILDs become important. So ITDs are ambiguous for humans above about 1.6 kilohertz. Now, that argument, of course, is for ongoing interaural time differences. If you start the sound, no matter what frequency, you're going to have an onset difference. You can start a 2 kilohertz sound. The left ear will get it first, and then the right ear a little bit later, depending on where it's located and the degree of separation of your two ears. But that's just one cue. And if these things are repeated back and forth thousands of times, depending on how long they're on for, for low-frequency sounds you'll get those thousands of cues. But for high frequencies, you'll just get the onset. So for high frequencies, we think of these frequencies as not being useful for ITDs for these ongoing cues, like interaural phase differences. OK, is that clear? Any questions? All right. Let's see how good we are in terms of performance. Sound localization is accurate and humans have excellent performance. How do we measure such performance? Well, we take an observer. We sit them down in a room or in free field where there's only the direct sound coming from the sound source. And no reflections off the walls or echoes off the ceiling or floor. 
And we say to this observer, OK, here's the sound directly ahead. And then we move the sound source. And this can be a physical speaker. We move the speaker over a little bit and so we change azimuth. And we ask the person, was that azimuth detectable? We say, is that the same or different from straight ahead? So if straight ahead is here and we move the speaker over to here and we say to the person, same or different? Same position or different position? And the person says, yeah, sure. That's different. And then we instead say, OK, straight ahead is here and we're going to move it less. The person says, sure, I can detect that. No problem. It moved off to the right. And this is, again, without any visual cues. So the speaker has to be behind a screen. So it's only auditory cues that allow you detect this change in movement. So we do it a third time, straight ahead, and we move it such a small degree that the person says, I don't hear that change at all. And we titrate the level of movement until we get what's called the minimum audible angle in degrees. And we can do that for a whole bunch of different frequencies. And when we do that with sound sources, such that the initial position is 0 degrees straight ahead, we get this curve linked by the black dots. Such that at low frequencies, the minimum audible angles approach, in the best performance, 1 degree of azimuth. An observer can tell the difference between 0 degrees straight ahead and 1 degree to the right or 1 degree to the left. It's very impressive performance. In the middle frequencies, where the ITDs are starting to break down and become ambiguous, our performance is not so good. Minimum audible angles go up to perhaps 3 degrees. And then they get a little bit better at the very high frequencies. And remember here, we said that the interaural level differences were very salient. So at 5 kilohertz, there was a 20 dB difference. And we come back down, maybe not to 1 degree minimum audible angle, but maybe to 2. So almost as good. And then at very high frequencies, we get worse again. Now, we can take these same subjects and we can go back to the physical cues. Let's say at a 1 degree minimum audible angle at low frequencies where ITDs are important cues, what is the interaural time difference that a subject can just barely detect? Well, if you had a magnifying glass, you could read it off this curve. Because 1 degree is a very small angle on this scale. But you can calculate it from the sphere, right? And it turns out that the prediction is that for the minimum audible angle of 1 degree, the interaural time difference is 10. Now, this is microseconds. It's calculated to be 10 microseconds. Where we perform the best, where we're using ITDs, this minimum audible angle, change in sound source of 1 degree corresponds to a change of ITD of 10 microseconds. That seems like an unbelievably small instant in time. But once again, if you take the subject and now instead of an attenuator in one channel, you have a circuit that delays one ear relative to the other. And you say to the subject, OK, with no delay, what did it sound like? Straight ahead. OK? Then you put your delay in and you say, what does it sound like now? And the subject says, it just barely sounds like it moved a little bit off to one side. And in fact, 10 microseconds is the just noticeable difference for ITD in headphones, which is evidence then that we are using these ITDs at the low frequencies for sound localization. Now, you can, as we said, play the same trick here. 
Minimum audible angle at 2 degrees. Go to the ILD curves. And I'll try to get your magnifying glass out here at 5 kilohertz. Look at where 2 degrees is. And it turns out that the interaural level difference there is 1 dB. Play it in headphones. The subject says, yeah, it sounds like it's just a little bit off center. So it all fits-- these presentations in free field and these interaural level cues measured in free field fit with what you get in headphones. Now, we have some demos of these things. And I have several kinds of demos. So let's do the real fun one first. The problem with fun demos is sometimes they don't work. So this has worked in some classes and has not worked in other classes. I have a sound source. It has a lot of high frequencies. You can probably appreciate that, jangling keys. And I would like to be able to just say, OK, I'm going to just give you interaural level differences. Probably, that's what you'll get here with high frequency sounds. But I can't assume that it's also not going to have some ITDs because in this room there's going to be differences in time from the sound source to your two ears. So I'm really going to give you both cues, but let's start with this demo before we go on to headphones. I'm going to jangle these keys. And I'm going to ask you, because you are such visual people, to cover your eyes while I jangle the keys. Don't cover yet, because what I want you to do is to tell me where the keys are coming from with your other hand. I want you to point where you think the keys are coming from. And at the end, after you hear the sound, keep pointing, and look and see how good you were. OK? So everybody, cover your eyes. [KEYS JANGLING] OK? Now, uncover your eyes and keep pointing. OK, so we're doing pretty well here. Maybe a 20 degree angle, but most of you guys are on. All of you guys are on. All right, let's do it once more. So blindfold yourself. Going to move the position of the keys. [KEYS JANGLING] OK, open your eyes and keep pointing. OK, you guys are good. Maybe like 20 degrees. I changed the elevation a little bit too, so you guys are making mistakes in elevation. You guys are good. OK. Now, since I claimed to you that we're using two ears, I would like you to plug one ear, blindfold yourself. Now, it's going to be a little bit of a trick to point, right? So I don't know how you're going to point. You can point with your elbow or you can do whatever you want to point. AUDIENCE: Close your eyes. PROFESSOR: You can close your eyes. OK, if you're truthful. You can close your eyes, OK. [KEYS JANGLING] OK, open up and we'll see how you did. You guys are good. There's an error. That's the biggest error I've seen so far. This is an error, like at least 45 degrees. This is an error here. So we're doing more poorly. Let's plug the other ear and I'll repeat the sound. OK, close your eyes. [KEYS JANGLING] OK, open your eyes and keep pointing. These are some errors. Oh, this is a big error. This is an error. OK, so we're clearly, anecdotally, getting worse. OK. Now, the ultimate test, plug one ear. And in your remaining ear, distort your pinna, because I claimed that your pinna cues were helping you. And I really don't know how you're going to point here. [KEYS JANGLING] OK. Now, open. Just continue to point if you can. OK, so that's a 90 degree error. You guys are pretty good. You have an error here. You guys are good. This is an error. And where were you pointing? You were pointing correctly, yeah. So good. OK. 
Let me try it one more time with distorting pinna and plugging one ear. [KEYS JANGLING] OK, show me where you were pointing. You guys are good. AUDIENCE: Nice. PROFESSOR: This is a little-- was it harder-- AUDIENCE: It's impossible. PROFESSOR: It's impossible. So you were guessing? That's why it doesn't work. AUDIENCE: It all sounds the same. It all literally sounds the same. PROFESSOR: What's that? AUDIENCE: It all sounds the same. PROFESSOR: It all sounds the same with the pinna distorted. Right. OK, this is a huge error. AUDIENCE: Yeah. PROFESSOR: OK. I think it worked. The pinna is very important in distinguishing front versus back. Interaural time and level differences are equivalent for front and back. So how do we know what's front and back? Well, the pinna cues are very important for that. Otherwise, you have subjects reporting confusion between front and back. A lot of the time, to eliminate front-back confusions, experimenters will require subjects to point in the frontal hemisphere. And say, it's not in the back. I give you that it's in the front. OK, so we have more control demonstrations that can be presented through headphones. And we have demonstrations of these two cues. And there are three demonstrations. So the first two demonstrations are ITDs. And the first one uses tones. So a tone is a sine wave. And it's going to give you tones of 500 Hertz and 2,000 Hertz. So the low frequency and higher frequency are heard with alternating interaural phases of plus and minus 45 degrees. So let me just remind you, we saw here an interaural phase difference of about 90 degrees. So this is going to be half of that. Interaural phase differences of 45 degrees for these 2 pure tones. The second one for ITDs is going to use clicks. And they're going to be repeated just like my hand clapping. Next, the interaural arrival time of a click is varied. And it doesn't tell you how big the ITD is. The apparent location of the click appears to move. And to me, it sounds like the clicks are going around. Somebody clapping their hands moving around in a circle. Now, when you listen to these demos with earphones or ear buds, sometimes you get the impression that wherever it's coming from, it's sort of like on the surface of your head. It's not way out there. That may be because you have lost some of the room acoustics. Certainly, you're not listening to reflections. But you've also lost the effects of the pinna, which filter things in such a way that they sound more roomy or out there. These things sort of sound internal, but they still appear to change in azimuth. And I think this click demonstration is the most vivid of the three. The final demonstration is interaural level differences. The overall interaural level difference of a 100 Hertz and a 4,000 Hertz tone is varied. Now, heads up here. This low-frequency tone is never going to have an ILD in normal free-field acoustics. It's just never going to happen because of the size of your head. You would have to have a huge head to cause a sound shadow for that low of a frequency. But you can certainly, just as easily as for high frequencies, present that in headphones. And it is perceptually obvious, OK? So there's these three demos. So how many people have them on your machines and have earphones? OK, so you guys can just go right ahead. How does it appear? Are they in order? Do they have names? AUDIENCE: You can just download them from Stellar. PROFESSOR: Yeah. You can download them from Stellar if you haven't already. They should be on the course website. 
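If you want to synthesize comparable stereo stimuli yourself, here is a minimal sketch assuming numpy and a 44.1 kHz sample rate (both my choices, not specifications from the course demos). It builds a 500 Hz tone with a 45 degree interaural phase difference, a tone with a 20 dB ILD, and a click pair with a known ITD that it then recovers by cross-correlation.

import numpy as np

fs = 44100                                  # assumed sample rate
t = np.arange(int(0.5 * fs)) / fs

# IPD stimulus: 500 Hz tone, right channel lagging by 45 degrees of phase.
f = 500.0
left = np.sin(2 * np.pi * f * t)
right = np.sin(2 * np.pi * f * t - np.deg2rad(45.0))

# ILD stimulus: same tone in both ears, right channel attenuated by 20 dB.
right_ild = left * 10 ** (-20.0 / 20.0)     # 20 dB down = 0.1 in amplitude

# Sanity check: embed a click with a known 600-microsecond ITD, recover it.
itd_samples = int(round(600e-6 * fs))       # about 26 samples at 44.1 kHz
click_l = np.zeros(2000); click_l[100] = 1.0
click_r = np.zeros(2000); click_r[100 + itd_samples] = 1.0
lag = np.argmax(np.correlate(click_r, click_l, mode="full")) - (len(click_l) - 1)
print(lag / fs * 1e6, "microseconds")       # ~590, quantized to the sample grid

Written to a stereo file and played over headphones, the first two stimuli should give lateralized impressions like the ones described above; the cross-correlation at the end is the same measurement the oscilloscope makes, just in code.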
So just do them in sequence. People who don't have them on their laptops can listen up here. I have these listening stations. So whoever wants to come up and listen can come up. Could you hear them going around your head? That's, to me, the most vivid. Comments about the others? This last one is kind of piercing for the high frequency, right? And it still sounds like it's moving around. What about for this lower frequency? Did it sound like it was moving around? OK, even though that stimulus is not present in free-field sound. You will never hear that without headphones. It's perceptually obvious. Now, how about this ITD? To me, this 500 Hertz interaural phase difference is obvious, but the 2,000 Hertz wasn't. AUDIENCE: [INAUDIBLE]. PROFESSOR: Clear for you. Any speculation on why that would be true? AUDIENCE: [INAUDIBLE]. PROFESSOR: OK, what we're saying is then that this is the left ear for the low frequency and the right ear is, what? 45 degrees? So it's going to be sort of like this. OK, that's for maybe the lower frequency. And the higher frequency is quite a bit higher, right? It's more than twice the frequency. So it's more than twice as much. I'm a terrible artist, but it's going to go back and forth faster. And this is going to be delayed 45 degrees. So because this is going faster, it's a smaller time difference for the high frequency. So you might say, OK, it's just a smaller time difference. And that's why it's harder for us to distinguish it. But remember the concept that we had of phase locking of the auditory nerve. In the auditory nerve, the left auditory nerve is going to fire some spikes at some point in the stimulus waveform. The right auditory nerve is going to fire some spikes at a corresponding point in the stimulus waveform for the first cycle. OK, let's repeat it for the second cycle. Going to fire in there somewhere. This one's going to fire in there somewhere. And remember, cycles are going by pretty quickly. There's lots of time. So you can build up a response pattern where you have thousands of spikes there, thousands of spikes here. And these two auditory nerves are coming into the brain. They're synapsing in the left and right cochlear nuclei. The cochlear nuclei are then projecting centrally. And at some places, the left and the right-side pathways converge. There's a comparator then that's comparing the arrival time and it's saying the left ear signal is getting to me first and the right ear is a little bit delayed. What happens with the higher frequency? So this is an interesting high frequency. This is the 2,000 Hertz. What happens to the phase locking for frequencies above 1,000 Hertz? It declines, right? So instead of a nice synchronized pattern, this left auditory nerve is going to respond. The right is going to respond. But from successive cycle to cycle, the pattern is not going to be synced to a particular point in the stimulus waveform. So instead of getting a nice synchronized pattern, it's going to be unsynchronized. And maybe for some stimulus presentations, the right-side spike is going to come in earlier than the left side. And the comparator is going to say, WTF. I don't know where the sound is coming from, right? So the fact that phase locking breaks down means that the timing at wherever the central comparator is, it's not synchronized. Timing here is synchronized. The timing here is not synchronized. 
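To put numbers on that, a 45 degree interaural phase difference corresponds to a different time lag at each frequency, and the reliability of the timing code can be summarized by vector strength, a standard 0-to-1 synchrony index. The sketch below is mine, for illustration only; in particular, the 0.15 ms spike-time jitter is an arbitrary assumption chosen to show the trend, not a measured value from the lecture.

import numpy as np

rng = np.random.default_rng(0)

# A 45 degree phase lag is a much smaller time lag at a higher frequency:
for f in [500.0, 2000.0]:
    print(f, "Hz:", 45.0 / 360.0 / f * 1e6, "microsecond lag for 45 degrees")
# -> 250 microseconds at 500 Hz, but only 62.5 microseconds at 2,000 Hz

def vector_strength(spike_times, freq_hz):
    # 1.0 = perfectly phase locked, near 0 = unsynchronized.
    phases = 2 * np.pi * freq_hz * spike_times
    return np.abs(np.mean(np.exp(1j * phases)))

jitter_s = 0.15e-3   # assumed spike-time jitter, the same at every frequency
for f in [250, 500, 1000, 2000, 4000]:
    # one spike near a fixed phase of each of 5000 cycles, plus Gaussian jitter
    spikes = np.arange(5000) / f + rng.normal(0.0, jitter_s, 5000)
    print(f, "Hz: vector strength", round(vector_strength(spikes, f), 2))

With the same absolute jitter at every frequency, synchrony is high at 250 and 500 Hertz, largely gone by 2,000 Hertz, and essentially zero at 4,000 Hertz, which is the sense in which the timing code available to the comparator degrades at high frequencies.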
And we're claiming with these psychophysical metrics that we can detect a difference between the left and the right ear minimally at 10 microseconds. With synchronized patterns, at least. Now, let me just draw-- it's not too surprising. Everybody understands why, when you break down phase locking, you don't have a temporal code anymore? Let me show you the spike waveforms for one spike coming in on the left side and one spike coming in on the right side that are delayed by this minimal time of 10 microseconds. So here is a spike, let's say on the left side. So to draw a spike coming in from the right side delayed 10 microseconds, I need to know the time base here. How long does it take a spike to fire in the central nervous system? What is the duration here? What's the time scale here? AUDIENCE: 1 millisecond. PROFESSOR: 1 millisecond, very good. OK, so now I'm going to draw a right-side spike coming in that's delayed by 10 microseconds. And it's pretty easy to do. I'm going to do it in a different color, but it overlays here very clearly. I didn't draw anything because on this time base, they're almost perceptually indistinguishable. OK, so the right side here is delayed 10 microseconds. That's the wonderful property of this system: we have relatively large, long inputs coming into the CNS, and we can, at the limits of our perception, distinguish delays on a very short time scale, on the order of 10 microseconds. That's the impressive nature of this central comparator, wherever it is. And we're about to talk about where it is now. So let's look at where it is in the brain. We had our block diagram of the central auditory pathway before. And here's kind of a simplified diagram. This is the left side and the right side of the brainstem with the two auditory nerves coming into the two cochlear nuclei. And in the cochlear nuclei, we discussed that the auditory nerve synapses and cochlear nucleus neurons then pick up the message. Now, in one sense the cochlear nucleus neurons know only about what's going on on that side of the brain, or that auditory nerve. They're not getting inputs from the other side. So they're not binaural, if you will. The first places where you have binaural input in the auditory pathway are centers like the superior olivary complex. And we talked about that before, Superior Olivary Complex, or SOC, having not only binaural inputs but a bunch of different sub-nuclei. That's why it's called a complex. One of the most important of those sub-nuclei is the Medial Superior Olive, indicated here by MSO. And the MSO gets input from the left cochlear nucleus if it's the left MSO. And it also gets input from the right cochlear nucleus. And here is a very good guess, at least, for where the central comparator of interaural time differences is. And this was appreciated from a very early time point. If you draw MSO neurons here, they have two dendrites. One's going to the left side, one's going to the right side. And they get numerous synaptic inputs onto each dendrite. If you make a lesion, so you interrupt the inputs coming from, for example, the right side, all the inputs onto the right dendrites drop off. So it looks like all these inputs are coming from the right side. They've dropped off when their pathway has been cut. And the left ones remain because their pathway is intact. So clearly, the MSO neurons get input from the two sides. Now, way back in the 1940s, a psychologist whose name was Lloyd Jeffress proposed a model for detection of ITDs. 
And he guessed at several locations in the brain where this model could actually be present. And one of his guesses was the MSO. And it turns out the MSO was the correct one of his several guesses. And this is a very interesting model for neural processing of ITDs. And it has several important assumptions. First, the MSO receives input from the left and the right side, as we have just gone over. So these are axons coming in from the left side. And these are axons coming in from the right side. And the dots there are the MSO neurons themselves. The dots are a very fussy kind of neuron. They don't just respond to any input. They're very discerning. They say, OK, I got some input from the left side. Not a big deal. We get some input from the right side, I'm not going to get excited about that. But I'm going to get very excited if I get input from the left and the right side at the same time. That is, coincidentally. And so the MSO neurons are sometimes called coincidence detectors. That is, they detect and they respond only when they get coincident input from the left and the right side. Well, how's that going to help us if we're delaying one side versus the other? Well, the second major component of the Jeffress model is that the axons providing input, which we now know are coming from the cochlear nucleus, run down the length of the MSO. And as they run down the length, they give off branches to each MSO neuron in this long chain going from left to right. And if you know anything about spikes that are traveling down axons or nerve fibers, you'll know that the spikes don't get instantly to the very tip of the axon. It takes them time to travel down the axon. And so for example in this axon, the impulse is coming down here and it gets to this leftmost branch first. And then a little bit later in time, it gets to the next branch. And so on and so forth until it gets to the rightmost branch at the longest delay. So Jeffress said, the inputs to the MSO are, if you will, delay lines. That is, axonal impulse propagation takes time. You can set these lines up so that they are delay lines. The inputs on the right have corresponding delays. Now, how big are the delays? And the flip side of that question is, how long does it take impulses to travel down axons? So another name for that is the conduction velocity in axons. Well, these are, let's say, myelinated axons of pretty big size-- a large diameter, 5 micrometers, say. It turns out that the conduction velocity for those kinds of axons is about 10 meters per second. And Jeffress was sharp enough to know that in the dimensions of the brain, those conduction velocities work out to predict about the right delay for the kinds of interaural time differences that we're talking about for sounds that differ in azimuth. When Jeffress was postulating his model in the 1940s, there were good measurements of axonal conduction velocity. And he realized that these delay lines were pretty good for predicting or compensating for the interaural time differences. Now, how does this model work? Well, I have a little demo of the model, which is a movie, which I have in a different PowerPoint. And I'm going to show this coincidence model. And I should credit the person who made this movie, who is Tom Yin. He works on the auditory brainstem and he's based at the University of Wisconsin in Madison. So what this demo will show you is you'll be looking down onto the brainstem. 
And the MSO on the left side and the right side will be present. The model is set up to demo the MSO on the right side. There will be a cochlea on the left and a cochlea on the right. The auditory nerve coming into the cochlear nucleus on the left and the auditory nerve coming into the cochlear nucleus on the right. And those two nuclei will be providing inputs to the MSO. And action potentials, or impulses, along these nerve fibers will be indicated by little highlights-- little yellow lights. And the demonstration will show you what happens to these incoming impulses that converge at the MSO for a sound that's straight ahead. So a sound that's straight ahead will strike the two sides-- will strike the two pathways-- at the same time. And you'll see what happens to the MSO and which neuron in the coincidence detector array lights up. Then, I think it'll play that same demo in slow motion. Then, the second part of the demo I think has the sound source displaced off to the left side. So the sound wave front will come and strike the left side first and the right side after a delay-- the interaural time difference. And you'll see what happens to the impulses and the MSO neurons with that second sound source position. So here are the two cochleas, left and right. Here is the MSO. This is the right cochlear nucleus. This is the left cochlear nucleus. This is the MSO we're not talking about. This is the right MSO. There was a sound wavefront that hit the two sides equally. And it was a little hard to appreciate, but I think this neuron up here in this part of the MSO lit up. This is going to be the same wavefront in slow motion activating the two cochleas at the same time, the two cochlear nuclei at the same time, and coming in to the MSO. This one gets there first because it's on the right side. And the two impulses arrive coincidentally at MSO neuron number 2. And that's the one, because it gets coincident input-- that's the one that fires off. Here's a second part of the demo where the sound is now located off to the left side. First, it's going to show you in fast motion and then in slow motion. Left cochlea activates first, right second. And now, the MSO neuron that got coincident input is located down here, neuron number 6 I believe. Now, it's going to show you that offset sound source in slow motion. Left cochlea gets activated first, right second. Left cochlear nucleus first, right cochlear nucleus second. Now, the delay lines are set up so that neuron number 6 in the MSO is the one that responds, because it now is the one that gets coincident input. Is that clear? So what you've set up then in the MSO is an array of neurons where 0 ITD is mapped up here and left-leading ITDs are mapped down here. You've mapped interaural time difference to position along the MSO in the brain. And that's the Jeffress model. So the Jeffress model has been tested experimentally by going in and recording from single MSO neurons. So easy to say and extremely difficult to do in practice. It's not absolutely clear why. It may be that the MSO neurons are small and there are thousands of big inputs coming to them, so that you get what are called big field potentials in your recordings and very small spikes. So the number of studies of actual MSO recordings can probably be counted on the fingers of one hand. So we don't have very much data. What data we have from MSO neurons show clearly that the firing rate is dependent on the interaural time difference. And that's what this graph shows here. 
I'm sorry it's not very clear, but 0 interaural time difference is right here with the dashed line. The firing rate is plotted on the y-axis. These dots over here indicate the firing rate for just left ear sound. Or in the other one, just right ear sound. So there's not a very big firing for presentation of sound in just one ear or the other. That's consistent with the Jeffress model. Also consistent with the Jeffress model is that if you give a particular ITD to the particular neuron that you're recording from, that neuron fires a great deal. And other ITDs elicit much less firing. That is, the delay lines didn't allow coincident input to come and excite that neuron. So a firing rate that changes a great deal as a function of ITD is consistent with the Jeffress model. However, probably because there are so few data, the idea of this mapping along the MSO is not borne out by the scanty experimental data. So we don't really know that there's a map as a function of a particular brain distance. This is the anterior-posterior distance. And they put this line here. Really, the data are all over the map. So it's not clear that there's an organized mapping. It is clear that the firing changes when you change the ITD. So there are some updates to the Jeffress model. And I'm not going to go through these in detail, but I want to point them out to you because this is part of the answer to your assignment. The assignment says, sort of, here's the Jeffress model. Give me a quick outline of it. So that's just what I said. I've given you that part. The second part says, well, the Jeffress model is maybe currently under discussion. What is new experimental evidence-- new, I mean, from the last 15 years-- that's not perfectly consistent with the Jeffress model? And you should go to this paper, which is the assigned paper for today's lecture, in which they discuss point number 1, point number 2, and several other points. At least one other point is demonstrated in that paper to show some experimental evidence from MSO recordings which is not perfectly consistent with the Jeffress model. Or makes you think, well, maybe the Jeffress model is not complete, or is outright wrong. And these are recordings from Brandt et al. The earlier slide I showed you was from recordings in cat. The cat has become less and less the experimental model. And these are now recordings from smaller animals which are more in vogue to use experimentally. In this case, it's from the gerbil, which is a popular animal. It has a big MSO and good low-frequency hearing, where you use prominent ITD cues. And this paper clearly is a challenge to the Jeffress model. I don't think it completely rules it out, but clearly there are some data from this paper to suggest that it might not be everything. Now, the second part of the assignment. Actually, that's the second part. The third part of the assignment comes from some other experimental data that I'm not going to give you because you don't need to know about them. But some labeling studies have asked the question, OK, I'm going to inject a neural tracer into my cochlear nucleus neuron. I'm going to trace its axon and I'm going to find this nice, ladder-like delay line in the MSO. Those labeling studies haven't been particularly gratifying in that they don't fit this model so well. I say here, at first it was thought that there were delay lines from both sides. Labeling studies suggest that there is only a delay line for contralateral input. 
OK, so that's not exactly consistent with the Jeffress model. And even more recent studies, since I wrote this, suggest maybe there aren't even delay lines at all. Now experimentally, if someone doesn't see a delay line, that doesn't mean it's not there. It just means maybe it's not so obvious. But people have started thinking about other ways to provide delays in the inputs to the MSO that might make the Jeffress model work, if you have coincidence detectors that respond only when the internal delay matches the ITD. So the third part of the assignment asks you for other ways that you can think of to create delays to MSO neurons. And let me give you a couple hints to the answers that I'm looking for. I think you should think about the synapse between the input from the cochlear nucleus and the MSO neuron. How could you create different types of delays using synaptic properties? So this is kind of a thought question, because these haven't been measured yet. Secondly, there is another way to create delays. And it comes from properties of the cochlea. And that brings us to our reading for today. We always have a reading. The reading is from this obscure document, also called our textbook. OK, and the reading is on page 61. Page 61 is the early part of the book. It's talking about the ear, the cochlea. It says, "But you may also note that the vibration of parts of the basilar membrane tuned to frequencies below 1 kilohertz-- very low-- appear time shifted or delayed relative to those tuned above 1 kilohertz. This comes about because of the mechanical filters that make up the basilar membrane. They're not all in phase with each other. If you look at Figure 2.4--" So everybody should look at Figure 2.4 in the text. So this is my devious way of getting you to actually open the textbook. You will see that the impulse responses of the lower frequency filters rise to their first peak later than those of the higher frequency ones. OK, so don't just quote the textbook in your answer. Tell me how you could get a delay that would make up for the interaural time difference using this cochlear property that's mentioned on page 61 of the textbook. And it's illustrated in Figure 2.4 of the textbook. That's another way people are thinking of as an alternative to Jeffress delay lines. And then finally, because last time I thought that it was too easy-- you guys are too smart-- I added something to the assignment. This fits with my son's view-- my son is in high school and he says, teachers love to load on homework. The more, the better. So I added something to make this assignment more challenging. And here's what I added. And I think this has now been posted on the website. It doesn't add a great deal of difficulty, but I think it makes it more relevant to our course. I haven't updated this on my computer yet, so look on the course website. In the very last part of the assignment, there's one sentence that says, how would a cochlear implant user have trouble using the Jeffress model to localize sounds? Even if the cochlear implant user had a cochlear implant in the left ear and a cochlear implant in the right ear? And I think this is a fair question. We spent a lot of time in our course talking about cochlear implants. And cochlear implant processing is clearly very different from the processing that we, as normal hearers, have on our auditory nerve. So think about that. 
How would the Jeffress model not be able to be used very well by a cochlear implant user who had implants in the left and right side? So this is a written assignment. I think before we talked about how long it should be. And I can't remember how long-- maybe five pages is plenty. In the very beginning, it talks about the Jeffress model. So give me a quick sketch. It's due on December 4, which is the day of the lab tour. And you can send them to me by email or you can bring a printed copy to the lab tour. Or, you can bring a printed copy to my office, but that's at Mass Eye and Ear, where the lab tour is. And the idea behind having it due then is I can look them over and grade them, and then we can talk about what I think is the correct answer at the review session, which is the class after December 4.
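Since the assignment asks for a quick outline of the Jeffress model, it may help to see its logic in code. Below is a minimal, deliberately idealized sketch of my own construction, not the model from the assigned paper: opposing delay lines feed an array of coincidence detectors, and the detector whose internal delays cancel the external ITD responds best. The 10 meters per second conduction velocity is the figure quoted in the lecture; the 0.5 mm branch spacing and the seven-detector array are invented for illustration.

import numpy as np

N = 7                                    # detectors along the idealized MSO
step_mm = 0.5                            # invented spacing between axon branches
velocity = 10.0                          # m/s, conduction velocity from the lecture
step_delay = step_mm * 1e-3 / velocity   # 0.5 mm at 10 m/s = 50 microseconds

def best_detector(itd_s):
    # Positive itd_s means the sound reached the left ear first, so the
    # right-ear signal carries an extra external delay of itd_s.
    left_arrival = np.arange(N) * step_delay                 # internal delays
    right_arrival = np.arange(N)[::-1] * step_delay + itd_s  # internal + external
    mismatch = np.abs(left_arrival - right_arrival)
    return int(np.argmin(mismatch))      # most nearly coincident detector

for itd_us in [0, 100, 300, -300]:
    print(itd_us, "microseconds -> detector", best_detector(itd_us * 1e-6))

Each 0.5 mm step at 10 m/s contributes 50 microseconds, so a few millimeters of axon are enough to span the roughly 600-microsecond range of human ITDs, which is the same arithmetic that made the delay lines plausible to Jeffress. Note that this sketch bakes in exactly the place map that the gerbil recordings call into question.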
MIT_904_Sensory_Systems_Fall_2013
12_Motion_perception_and_pursuit_eye_movements.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right. Good afternoon, everyone, again. Today, our session is going to be on motion perception and pursuit eye movements. Now, I want to remind you that our next class next Monday is going to be a whirlwind review. And then the next Wednesday we are going to have the midterm exam on what we have covered so far in the course. And as I've mentioned already to you a couple of times, that is going to consist of a series of multiple choice questions in which you have to circle the appropriate choice for each question. Then the following week, which is going to be November 28, is when Chris-- AUDIENCE: October. PROFESSOR: Huh? AUDIENCE: October 28. PROFESSOR: Sorry, October what? AUDIENCE: 28th. PROFESSOR: October 28. Yeah, October 28-- that's when Chris Brown is going to start the second half of the course on audition. So that is the general plan at this point. And again, I'm going to send around an attendance sheet. So just put your name on it and send it around, please, for me. Thank you. All right, so I think you're going to find talking about motion perception a lot of fun. I have several demonstrations. And it is certainly a very interesting topic. And I'm going to highlight that by first of all telling you what we are going to cover. And then we are going to talk about the basic mechanisms of direction selectivity in the visual cortex. So first of all, as I've said, we're going to talk about the neurons in the brain that code motion. I'm going to talk about it mostly for the cortex. But I will also say a few words about the direction selective cells that you find in the retina that we had mentioned before. All right, so we are going to then discuss briefly the possible mechanisms that make it feasible to process motion. And then we are going to look at what happens when you make various brain lesions in determining how well you can process motion. Then we're going to talk about a very interesting topic, which is structure from motion. And then further pursuing that, we're going to talk about apparent motion. And then we are going to talk about an interesting temporal effect which is called metacontrast. And then we are going to talk about optokinetic nystagmus, where we link the visual input to the generation of pursuit eye movements. That is accomplished by the so-called accessory optic system that I will describe to you. And that is also the topic that, if you remember, you will be writing a paper on. Lastly then, we are going to have a summary. OK, so let's look first at the neural responses to motion in the cortex. Now, one of the most remarkable features of the visual cortex and extrastriate cortical areas, as I had already mentioned, is that the majority of cells in these areas are direction selective, meaning that they respond to one particular direction of motion, but not to the opposite direction. Now, this is extremely evident in the visual cortex, where the so-called simple cells-- virtually all of them-- are direction selective. So that's one of the major transforms, if you remember, of the input that you have from the lateral geniculate nucleus that goes to the cortex. 
That input comes from single cells-- ON and OFF cells, midget and parasol cells-- that have circular receptive fields with center-surround antagonism. And when they get up to the cortex, a transformation takes place, by virtue of intracortical circuits, that turns these cells into direction-specific ones and cells that are orientation specific. So let's look at this first by starting with a method, so that you'll understand exactly how this is done. In this case, we are going to have a photo cell. Let's just look at the top one here, A. You're going to have a photo cell that responds both to a light edge and a dark edge. And so when this bar moves across this photo cell, it will activate it. And we are going to move it across in one direction, and then we are going to move it back across in the other direction. And this is shown here. This is a time scale. And it shows that when this edge goes across, it activates the photo cell and gives a response. And then when the trailing edge goes across, it also does the same thing. So this is the light edge response. This is the dark edge response. Now, when you're going back the other way, you're going to have, again, the same arrangement. In the opposite direction, the light edge responds first, and the dark edge responds second. Now we are going to make a change so that it mimics the organization of simple cells in the visual cortex. We're going to put in two photo cells, one which responds to the light edge and a dark one which responds to the dark edge. So now when this bar goes across and activates these two photo cells, what happens is, because they are not both centered exactly in the middle, it takes a longer time to activate the two successive [INAUDIBLE] when the bar goes across this way and a shorter time when it goes across the other way. Now, the opposite happens when you have a dark bar, because then the dark bar doesn't activate the light-sensitive photo cell. It activates the dark one. So it takes a little longer to start that. So that then shows you the temporal arrangement of these responses when the photo cells are arranged in this fashion. Everybody understand that? OK, so now we are going to do this for real. We are going to look at a cortical cell. And there are several different subclasses of simple cells. And the first subclass we can refer to as an S1, S for simple, OK? So there is an example of a simple cell, a very similar arrangement. Here's the receptive field of the cell. And you either put a light edge across, back and forth, or a dark edge, back and forth. This is the light edge response. This is the dark edge response. Now, you can readily see what happens here, which is quite remarkable. The cell responds only when the bar moves upward across the receptive field and does not respond when it moves downward. So this cell is over 100% direction selective. Got it? All right. Now, the same thing happens to the dark edge, all right? And so this cell is one that responds only to the dark edge, OK? In this case, it's the trailing edge. In this case, it's the leading edge. Now, at the bottom here, you draw out what the receptive field looks like. This is in tenths of degrees. And so this is pretty much just the spatial response of the cell. And it only responds in this direction. There's nothing drawn in the other way. Now I'm going to show you another cell, which is going to be a so-called S2 cell.
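The two-photocell demonstration is easy to reproduce numerically. Here is a toy version in Python-- detector positions, bar width, and sweep speed are made-up numbers, chosen only to show that the interval between the two detectors' responses differs with direction:

```python
# A "light-edge" detector at x = 0 and a "dark-edge" detector at x = 1 deg.
# A bright bar 2 deg wide sweeps past at 10 deg/s, once in each direction;
# we log the moment each detector's preferred edge crosses it.

def sweep(direction):                        # +1 rightward, -1 leftward
    speed, width = 10.0, 2.0                 # deg/s, deg
    x0 = -5.0 if direction > 0 else 5.0      # starting position of leading edge
    events = []
    for step in range(2001):
        t = step * 1e-3                      # 1 ms time steps
        lead = x0 + direction * speed * t    # light (leading) edge position
        trail = lead - direction * width     # dark (trailing) edge position
        if abs(lead - 0.0) < 5e-3:
            events.append((round(t, 3), "light-edge cell"))
        if abs(trail - 1.0) < 5e-3:
            events.append((round(t, 3), "dark-edge cell"))
    return events

print("rightward:", sweep(+1))   # the two responses are 0.3 s apart
print("leftward: ", sweep(-1))   # the two responses are only 0.1 s apart
```

A downstream unit tuned to one particular interval between its two inputs will therefore fire for one direction of sweep and not the other, which is the essence of the arrangement.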
Now, this cell is different in that it responds both to the light and the dark edges. And the fact that the responses-- in this case to a light bar, in this case to a dark bar-- are displaced temporally means that the light and dark responses are side by side rather than overlapping. And that, of course, is the definition of a simple cell. And so if you draw out the receptive field of this cell, once again, the cell is almost 100% direction selective. And this is what its response looks like in space. Now we are going to look at yet another one. And this is going to be a surprise. It was certainly a surprise when this was discovered, because it's still not understood why we have these kinds of cells. Here is one-- you can look at that-- that responds to a light edge in this direction and a dark edge in this direction, OK? So both subfields are direction selective. But they're direction selective in opposite directions, as depicted here. Now why on earth would you have cells in the visual cortex that respond selectively to light and dark edges in opposite directions? That's something you can contemplate. There is no definitive answer to that, and so I'm not going to belabor it any further. Now what I can do is provide you with a summary. I showed you this one, this one, and this one. Now, you don't have to worry about those. But I would like you to look at this last one, which is a complex cell. In a complex cell, the light and dark edge responses are in the same location, space-wise, as shown here. And this particular complex cell-- a real cell-- is 100% direction selective. Now, it's true not of all complex cells, but of the majority of them, that they too are direction selective. So I can safely tell you that one of the major transforms that happens in the brain in vision is to convert the signals from the retina into direction selective cells. And therefore, we can only surmise that having direction selective cells is extremely important for us to be able to process motion information in the world. So now we are going to move on and look at extrastriate cortex. And we already talked briefly about MT and MST. In these two areas, we have an even stronger disposition for cells to be direction selective. And here's an example of a cell in MT-- in this direction of motion, a vigorous response. This is a cumulative response. In the opposite direction, it actually shows some inhibition, as you can see here and here. Then if you recall MST, which is an area yet further removed from area V1-- it's the medial superior temporal area-- what you find is, again, very strongly direction selective cells. Most of them-- maybe not all of them, but the overwhelming majority of them-- are direction selective. But they have much, much larger receptive fields than cells do in MT. And as I've mentioned to you before, one of the basic rules is that as you progress from V1 to higher cortical areas, the receptive fields in these progressively higher areas become bigger and bigger and cover more and more of the visual space. So here you can see this shown in terms of the visual field. This is way out. This is 20 degrees just from here to here. The receptive field is gigantic, actually. And there's a beautiful, very lawful arrangement: even though many, many cells from earlier portions of the visual cortex feed into this particular cell, it gets a specific input from cells that share this direction selectivity.
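When the lecture calls a cell "100% direction selective," that corresponds to a standard quantity. One common convention-- there are several, and the lecture doesn't say which one it uses, so take this as an assumption-- is the direction selectivity index:

```python
def direction_selectivity_index(pref, null):
    """DSI = (pref - null) / (pref + null), with responses in spikes/s.

    1.0 means the cell fires only in its preferred direction;
    0.0 means the two directions drive it equally.
    """
    return (pref - null) / (pref + null)

print(direction_selectivity_index(80.0, 0.0))   # 1.0  -> fully selective
print(direction_selectivity_index(80.0, 40.0))  # 0.33 -> weakly selective
```

Under conventions that subtract spontaneous activity first, the null-direction suppression that the MT cell shows can even push the index past 1-- one way to read the "over 100%" phrasing.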
Now, whether this direction selectivity is a product of the input from cells which share the same direction preference-- many, many of these cells, maybe 20 or 30 or 40 or 50 of them, OK?-- which have the same orientation and direction, or whether orientation and direction selectivity are created anew in MST, is something that can be debated. It is still not 100% certain. So the only thing we have to worry about this time-- maybe not even worry, that's too strong a term-- is to be aware that there is a prevalence of direction selectivity in both MT and MST. It's very beautiful and very, very specific, as you can see here, even when you have a huge receptive field. So that is the basic story then. And now we can ask the question, how on earth is direction selectivity created? Now, there are several models that have been proposed. And most of these models-- I'll show you a simple one just so that you have a sense for it-- most of these models assume, I believe correctly, that you need to have inhibitory circuits that are selective for direction to produce this specific response in these cells. Now, the amazing thing about all this is that already in the retina you have some cells that are direction selective. And these cells I'll talk about a bit more when we talk about the accessory optic system. These cells in the retina are not very numerous. The rabbit has approximately 350,000 ganglion cells in each retina-- can you imagine, a little rabbit? And it has 350,000 cells. We have a little over a million, OK? And among those 350,000 cells, about 7,000 are the so-called cells of Dogiel. And actually, I'll come back to that. And those are direction selective cells. So a very clever experimentalist, a fellow called Nigel Daw, examined these direction selective cells in the retina. And he asked the question, what can I do to determine what the mechanism of direction selectivity is? So he did an experiment that sounds simple. But let me tell you, it's a very difficult experiment. When you put your electrode into the retina, you can't see direction selective cells. So you're on a hunting expedition. So you keep recording from cell after cell after cell. And if indeed only 7,000 among the 350,000 are direction selective, it will take you quite a while before you manage to record from a direction selective cell. So it's a lot of very hard work. So anyway, if your hypothesis is that some sort of inhibitory mechanism is involved-- probably some sort of GABAergic system-- then you can do an experiment in which you put some agent into the retina that blocks inhibition. We talked about that before, that that can happen. And so in this case, what they did-- let me come to that first. Let me see. Here we are. I'll come back to the others in just a minute. What we have here is one of these cells. And we move in this direction here and in this direction for this. And you can see that this cell is very strongly direction selective-- maybe 100%, but maybe 80% or 90%. Then he injected picrotoxin, which blocks GABAergic inhibition, into the eye. And lo and behold, direction selectivity is virtually eliminated. And then when the picrotoxin got washed out, there was recovery, all right? So now, on the basis of this finding-- it's a beautiful finding-- it was established that indeed, inhibitory circuits play a central role in giving rise to direction selectivity.
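The classic way to cash out "delayed inhibition in one direction" is the Barlow-Levick-style scheme, which is essentially what the model described next shows. Here is a minimal sketch-- two receptors, one veto synapse, made-up delays, not the actual circuit parameters:

```python
# Receptor A sits one position before receptor B along the null direction.
# B excites the ganglion cell directly; A drives an inhibitory interneuron
# whose signal arrives after a delay matched to the stimulus travel time.

def ganglion_response(order, travel_time=10, inhib_delay=10, inhibition=True):
    """order: the sequence in which a moving edge hits the receptors."""
    t_hit = {cell: i * travel_time for i, cell in enumerate(order)}
    excitation_t = t_hit["B"]                 # direct excitatory input
    inhibition_t = t_hit["A"] + inhib_delay   # delayed inhibitory input
    if inhibition and inhibition_t <= excitation_t:
        return 0     # inhibition arrives with (or before) excitation: vetoed
    return 1         # excitation gets through: the cell fires

print(ganglion_response(["A", "B"]))                    # null direction  -> 0
print(ganglion_response(["B", "A"]))                    # preferred       -> 1
print(ganglion_response(["A", "B"], inhibition=False))  # "picrotoxin"    -> 1
```

Note how the last line reproduces Daw's result in miniature: remove the inhibition, and the null direction drives the cell just as well as the preferred one.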
And you can imagine that in the course of evolution, the pressures had to be tremendous to create these direction selective cells, because the circuitry is very complicated. You have to create these inhibitory neurons which inhibit in a particular direction rather than just randomly. So that's what was done. And so I'll show you one model that describes this. This is for the retina. You have a bunch of receptors here. And they activate a retinal ganglion cell here. But they also connect with an inhibitory interneuron in one direction-- in that direction. So now if you move a bar in this direction, as shown here, you get no response. But if you move a bar across the other way, you get a vigorous response, as shown by the action potentials here. So this is a fairly simple model of creating direction selectivity by virtue of inhibitory interneurons. That was then also examined in the cortex. It was found that there, too, inhibitory interneurons play a central role when direction specificity is created in a cortical cell. So that now brings me back to a further analysis of the basic nature of motion. Can we logically divide motion into several types, or what? Well, a wonderful scientist called Wurtz at the National Institutes of Health proposed that you can distinguish between planar motion, circular motion, and radial motion. Planar motion is when something moves straight. That's the only kind we've talked about so far. In this case, he shows four different directions. But of course, there would be all kinds of directions that are not vertical or horizontal but diagonal. And there are many, many cortical cells-- some of the cells I showed you actually had orientation preferences at an angle. So you have a huge number of different angles. And these cells hypothetically would respond to planar motion. Then you have another type of motion, which is called circular. And let me tell you, when you see circular motion-- obviously, if you were to be spun around or something, you'd see it. But it's not uncommon to see something rotate. So if you have a spinning wheel, or if you watch a tire rotating as a car moves, that's circular motion. And so the hypothesis is that there should be cells in the brain that selectively respond to circular motion, either clockwise or counterclockwise. And lastly, of course, there is the so-called radial motion-- and you encounter that just about every day. Whenever you drive, say out in the country, you see trees going past you, so to speak, as you drive forward. And that would be motion which is outward. Or if you were to back up, then you would get the kind of motion where things come together, OK? So that kind of motion is certainly one that you need to somehow code. And once he came up with this classification, another question arose: are there cells in various parts of the brain, especially in higher areas like MST, that are selective for these three basic types of motion? And so they began to do an experiment. And I'll show you some summary data here. Here is a cell studied with planar motion. And these are the directions of motion that were studied-- four directions of motion on the top-- then the circular ones on the next two, and the in and out motion at the bottom. This particular cell seems to be specific for planar motion from right to left.
Now here is a cell that is selective to circular motion, OK? And lastly, here is a cell that's selective to radial motion. So if you had only recorded in your life from three cells in MST, and it happened to be these three cells, you would say, ah, my hypothesis is right. There are three kinds of cells in MST that respond selectively to these three classes of motion. So everything is incredibly beautiful and logical. But of course, that's not how you do your experiment, is it? The way you do your experiment is you record from MST in this case, and you record from many, many, many cells. And you test them just like this, OK? And if you do that, you realize that these were just an example of three cells that were specific. It was found that 40% of the cells responded to all three kinds of motion. 30% responded to two types of motion. And only 20% of the cells responded to one type of motion. So therefore, these cells, as is so often the case in the visual cortex, are not unique and specific to the analysis of one thing. These cells can tell you about all three of those. And what happens in reality, then, is that when something moves, you activate thousands of cells in MT, MST, and visual cortex, maybe tens of thousands. And it's some sort of complex relative activity that eventually gives rise to the fact that you say, oh yeah, I see planar motion; oh yeah, I see circular motion; or, I see in and outward motion. So it's not a specific single cell that gives you that impression, but the concerted activity of thousands of cells, even for the simplest of cases. So that's the way it works. It's a complex business. And the idea that was popular at one time, namely that you have feature detectors in the brain, has been downgraded progressively with increasing research. As I mentioned to you before, when we talked about extrastriate cortex, the initial idea, when the various extrastriate areas had been discovered, was that each area specializes in one particular thing. And then it turned out that that is not the case. There may be shifts or trends, but in each area, many different things are analyzed, and thousands and thousands of cells are active whose relative activity is somehow what gives rise to the percept that you have. All right, so that then is the essence of the layout of the responses and the models-- this model that [INAUDIBLE] told you about, or something similar to it. It may be a bit more complicated. And some people have hypothesized that temporal delays are necessary to produce this. Yes? AUDIENCE: Sorry. There was a 10% missing in the breakdown. PROFESSOR: Sorry. Are you talking about this? AUDIENCE: Yeah. PROFESSOR: Well, that's-- 10% of the cells were unclassified or were not direction selective. For these they could not say with certainty what the situation was. And maybe some of those cells they lost while they were recording from them, or the cells didn't have any particular direction selectivity. And because of that, they couldn't come up with 100%, of course, yeah? Yes, you had a question. AUDIENCE: It was the same question. PROFESSOR: What? AUDIENCE: It was the same question. PROFESSOR: Same question, OK. Very observant-- good for you. All right, so now we are going to move forward and we're going to look at the effects of lesions on motion perception.
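Before moving on: the three motion classes are easy to write down as velocity fields, and a toy "template cell"-- just the mean dot product between a stimulus field and the cell's preferred field-- shows why the idealized three-cells picture was so tempting. This is purely illustrative, not a model of real MST cells, which, as just noted, mostly mix the classes:

```python
import numpy as np

xs, ys = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))

planar   = np.stack([np.ones_like(xs), np.zeros_like(ys)])  # rightward flow
circular = np.stack([-ys, xs])                              # counterclockwise
radial   = np.stack([xs, ys])                               # expansion

def template_response(stimulus, preferred):
    # Mean dot product of the two velocity fields over the receptive field.
    return float(np.mean(np.sum(stimulus * preferred, axis=0)))

fields = {"planar": planar, "circular": circular, "radial": radial}
for name, pref in fields.items():
    scores = {n: round(template_response(f, pref), 2) for n, f in fields.items()}
    print(f"cell preferring {name:8s} ->", scores)
```

Each ideal template responds to its own class and gives exactly zero to the other two, since the three fields are mutually orthogonal over a symmetric patch. The recorded 40/30/20/10 breakdown says real cells are nothing like this clean.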
One of the ready ways you can assess to what extent various higher cortical areas play a role in various neural mechanisms is to either reversibly inactivate them or remove them by making lesions, and then determine what happens. And if you remember, I talked about this a bit. I told you, for example, that the hypothesis that was made initially about area V4 was that it's a color area. That was a single-function hypothesis. And then it turned out that when area V4 was removed, there was only a moderate to mild deficit in color vision. But there were all kinds of other deficits that we had discussed in previous sessions. So now we come to the question, what about motion perception? When we talk about motion perception, we of course are dealing with temporal things, because motion involves the successive activation of cells. And because of that, it's also very important to study how well we can process rapid successive presentations-- as, for example, in flicker. And that brings me, of course, to the story-- and we will talk about that in a minute-- that in the olden days, when movies were first invented, the rate at which successive frames were presented was 16 hertz, which is low enough to give a sensation of flicker, all right? And because of that, what happened when a guy asked a girl to go to the movies? How did the guy ask the person to go to the movies? Can anybody remember that? That's an olden thing. The guy would say, hey, how would you like to go to the flicks tonight? OK, you've heard that expression, go to the flicks, right? Well, nowadays we are not aware of flicker anymore. And there are two reasons for that. Anybody know the two reasons why no one can see flicker in the movies, and why you no longer go to the flicks but go to the movies instead? Yeah? What are the two reasons? Anybody know? OK, well, let me tell you. The first is that the number of frames per second was increased to about 24, OK? But that's still slightly below fusion. OK? You can still see some flicker. So how do you get rid of it? Well, let me tell you, some incredibly clever guy-- some things are really, really clever. I'll tell you what this person came up with. He said, all right, what we'll do is we're going to create a shutter which is round. And it's going to have one opening here. And then we're going to have a second opening here. And then we're going to have a third opening here. So this goes around, and each frame you're going to show three times. And then as this goes around, you go to the next frame, and you show it three times. So actually, instead of seeing frames at 24 hertz, you see flashes at 3 times 24-- 72 per second, OK? But each frame is shown three times. Now, to be even more clever, the way they did this, they took two of these circular shutters. And one of them moved in this direction and the other in that direction, so they sort of equalized from the center out as the light was exposed from the projector. So that then got rid of the visible flicker. So therefore, we can next ask the question, what is the flicker rate at which you can no longer see it? I mean, do you want to see flicker, or do you not want to see flicker, and so on? And so experiments were done, hundreds of experiments, in which psychologists studied flicker rate-- varying contrast, varying color, doing this, doing that, and then varying the spatial frequency-- and generated curves to see what they look like. So that's the process. And so what we can do next is to examine this.
And one way to do this, for example, just to give you a sense: you can have a bunch of random dots, just like we had for the [INAUDIBLE] stereogram. And this would be done with a monkey, for example. The monkey fixates. And then you start something in motion-- ready?-- like that. And the monkey sees this and has to make a saccade to it. And when he does so, he gets a drop of apple juice for reward. And then what you can vary are two things. You can vary the frequency. I'm showing another one, much higher frequency. Bang-- the monkey goes there and gets rewarded. Or-- and/or, actually-- you can vary the contrast of the display, OK? So once you have a sense of when you get fairly close to the edge of things, then you can also vary the contrast. So now let's ask the question, what happens when you remove area MT and when you remove V4, OK? So think about that for a second. And then I will show you the actual data, all right? OK, here we go. Here is an example. This is motion detection. And here we vary percent luminance contrast at a fixed frequency. You get the same effect the other way; I'll just show you this one. And you can see very readily that the V4 lesion caused no deficit at all, but the MT lesion produced a huge deficit, OK? So clearly this establishes the fact that area MT plays a central role in processing motion-- in being able to analyze rapid temporal events. And now we can ask the next question. What happens when you actually do just stationary flicker, all right? So when you do stationary flicker, you can do that in two ways. You can do an on-off flicker, or you can do a flicker between two colors, for example, all right? And you can see why that would be interesting. Because if you do flicker between two colors, like red and green, and if area V4 were important for color processing, then a lesion in that area should mess up that kind of flicker. So that's the way to do it. So let me show you first of all how you do this experiment. Here we have it. You have a bunch of LEDs. Why does one use LEDs instead of just doing this on a monitor? Any thoughts? Yes. AUDIENCE: The frequency at which those [INAUDIBLE]. PROFESSOR: Very good. The reason you do it is because a typical monitor's refresh rate is tied to the alternating-current rate, meaning 60 hertz. Now, nowadays you can of course buy devices that can double it or even quadruple it. But back then it was easier to do it this way. And it still is, because you can vary these flicker rates in small steps. Now, each of these LEDs, actually, for this particular set of experiments, is one that has both red and green in it, OK? So you can flicker between on and off, let's say for green. Or you can flicker between red and green, all right? So that's what we are going to find out: what happens when you do these kinds of flickers, red-green as well as on-off, and you make, again, a V4 lesion or an MT lesion, OK? Now think about it for a minute-- what would you expect in terms of the differences between on-off and red-green flicker, and possible differences in which lesion causes a deficit, OK? All right, ready? Here we go. There is the flicker. You can see that. Now, the way that really works is that the flicker-- I'll show it to you again-- is done in such a fashion that the mean illumination level stays the same, so that you cannot tell the location of the flickering LED on the basis of a luminance change.
You can only tell on the basis of the fact that it flickers or it doesn't flicker, OK? All right, so we are now ready for the experiment. And here it is. On the right, I'm showing you a green on-off flicker. And you can see that there's no deficit at the V4 lesion site, but a gigantic, absolutely gigantic deficit at the MT lesion site. And then, very surprisingly-- to some people at least, who were convinced that V4 is a color area-- when you do that for a red-green flicker, first of all, notice that the flicker rate that you can detect is much lower-- OK, this is the rate of the flicker, much lower, OK?-- than it is for the on-off flicker. And secondly, again, there's no significant deficit with the V4 lesion. But there's again a major deficit with the MT lesion. So area MT is essential for being able to process both red-green flicker and on-off flicker, all right? So bear that in mind as we now progress to the next stage of the experiments-- set of experiments, I should say. And the next step I'm going to talk about is structure from motion. And that is a very important attribute that we have. We can reconstruct structure by seeing nothing but motion. And I'm going to show you an example of that. OK, let me tell you, this here actually is a made-up scene that's outside. And somewhere in here we have a ladybug hiding. But the ladybug is not in motion. And you know, ladybugs have these spots on their back. And there's a ladybug here. And I'm going to tell you that there's a monkey watching this, hoping that he could catch the ladybug, because he loves to eat ladybugs. Are you ready to catch her? OK, here we go. Ready? The ladybug after a while got tired of just sitting there and began to move. And that's a dead giveaway, because you have all these cells in your brain that selectively see motion and flicker. Are you ready? Here we go. There's the ladybug. See her? OK, so the monkey is happily eating away at the ladybug, all right? So here is a good example of how incredibly useful the motion information is that you have in these visual cortical areas. Now, I would bet that if this monkey had had his MT removed, he wouldn't have been able to catch that ladybug. Now, another set of beautiful experiments was carried out by a fellow called Johansson-- I think that was in Sweden-- who wanted to analyze how well we can process motion information and derive structure from it. And what he did was put teeny little light bulbs on about 13 points on a person's body. He put some here on the elbows, put some at the hands, put some at the head, and so on down-- 13 of them. And then he turned off all the lights, so the only light came from those 13 little spots on the person. And then he took a movie to see what happened when this person started to walk. And he discovered something amazing: that we are incredibly capable of reconstructing shape from minimal motion information. So are you ready? OK, here we go. So there it is. There are your 13 little spots of light. And now I'm going to set it in motion. Ready? So what do you see? You see a person walking, on the basis of this very minimal information. And you can even tell-- who would say it's a male and who would say it's a female? Those of you who say it's a male, raise your hands. OK, it's virtually everybody. Right, very good.
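The ladybug demo works because a camouflaged patch is invisible in any single frame but trivially visible to anything that compares frames over time. A crude stand-in for those motion-sensitive cells is plain frame differencing; here's a toy version in Python, with made-up sizes and displacements:

```python
import numpy as np

rng = np.random.default_rng(0)

# One frame of random dots; the "ladybug" is an 8 x 8 patch of the same
# random texture, so it is statistically invisible in a single frame.
frame1 = rng.integers(0, 2, size=(40, 40))
patch = frame1[10:18, 10:18].copy()

# Second frame: the patch shifts by (1, 2); its old location is refilled.
frame2 = frame1.copy()
frame2[10:18, 10:18] = rng.integers(0, 2, size=(8, 8))
frame2[11:19, 12:20] = patch

# "Motion detection," crudely: anything that changed between the frames.
rows, cols = np.nonzero(frame1 != frame2)
print("changed pixels fall in rows", rows.min(), "-", rows.max(),
      "and cols", cols.min(), "-", cols.max())   # the target pops out
```

Either frame alone is featureless noise; the difference between them hands you the target's location and displacement for free.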
Now, what he did next-- which I don't have a movie of, and which is quite remarkable-- is that he took a couple, a male and a female, had them arranged in exactly the same way-- each had 13 of those bulbs-- and had them dance. And when you looked at that, you had no trouble telling which was the male and which was the female. Amazing. So we are able to reconstruct a tremendous amount of form detail on the basis of extremely limited information about motion. All right, so that then ends that section. And we're going to go next to what is called apparent motion, all right? There are various forms of apparent motion I'm going to tell you about. And the first one is that if you show these two dots and I flick them back and forth, you have a very strong sense that the thing is going back and forth, right? You don't see one going on and the other going off; given the right temporal succession, you see them as just jumping back and forth. So now we can enlarge on that and carry out a series of experiments, on the basis of which we can make some inferences as to whether areas V4 or MT are contributing to this. So this is going to be fun for you. And I want you to pay close attention. Now, in this kind of experiment, when it was done, first, this stayed on all the time. And these two dots came on first. And then these two dots came on second. Now, if you do that, you can see either of two types of motion. You can either see what we call zigzag or seesaw. Seesaw is going to look like that, obviously, and zigzag like this, OK? So that's what I'm going to show you. And once I set this in motion, you will see it one way or the other. Then if you see it, say, seesawing, you can take your finger and move it horizontally back and forth on top. And then in short order you're going to be seeing the other one. You can reverse the sense of motion, all right? So are you ready for that? OK, so here we go. If you see it one way or the other, try to move your finger back and forth to change it. If you see it moving horizontally, move your finger vertically. If you see it vertically, move it horizontally. And then you can see it shift. Everybody able to reverse it? Well, keep doing it. And you should have no trouble reversing it if you move your finger back and forth. This is a bistable situation. And people can readily see the switch. OK, most of you seeing it switching? All right. So now we can ask a whole bunch of questions. And the way to put this question in general is to ask, first of all, is this something that happens locally, or does it happen everywhere you're looking? So how can you test that? So let me show you. What we do here instead is put up a bunch of them, OK? And they're quite far apart from each other. And depending on how close you are, the activations may be as much as a centimeter apart in the visual cortex. So now what we're going to do is move them back and forth just like I did before. And the question that you want to ask yourself: am I going to see these all move the same way-- all vertically, say, or all horizontally, OK? So here we go. Are you ready? Basically, the rule is that they all move the same way. And then they shift. Again, you can use your thumb or your forefinger. And if you manage to make it switch, they all switch at the same time. OK, everybody manage that?
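One compact way to think about which of the two motions you see is as a correspondence problem: the visual system has to decide which dot in the second frame is "the same" dot as each one in the first. A toy nearest-neighbor matcher-- just an illustration of the proximity principle that the demonstrations coming up will test, not anyone's actual model-- already behaves like the demo:

```python
import numpy as np

def nearest_matches(frame1_pts, frame2_pts):
    """For each point in frame 1, index of its nearest point in frame 2."""
    f1 = np.asarray(frame1_pts, float)
    f2 = np.asarray(frame2_pts, float)
    d = np.linalg.norm(f1[:, None, :] - f2[None, :, :], axis=2)
    return d.argmin(axis=1)

# Two flashed diagonal pairs on a tall rectangle (x, y positions):
pair1, pair2 = [(0, 0), (1, 3)], [(1, 0), (0, 3)]
print(nearest_matches(pair1, pair2))   # [0 1]: horizontal matches win

# The same display on a wide rectangle:
pair1, pair2 = [(0, 0), (3, 1)], [(3, 0), (0, 1)]
print(nearest_matches(pair1, pair2))   # [1 0]: vertical matches win
```

When the rectangle is nearly square, the two solutions tie-- which is exactly when the percept is bistable and a moving finger can push it either way.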
All right, so now the rule is that even though the activations are very far apart in V1, from here to here, there's some organizing principle that makes you see everything a particular way. And they shift-- they all shift. Now, that's a very interesting arrangement. And you can wonder, what is there that can tell the visual cortex how to see it, and insist that you've got to see it all the same way? Now we can move on, and you can ask the question-- so far, what I've shown you is they're all the same-- sorry, that's my fault. They are all the same size. And they are geometrically arranged, OK? So now what happens if instead we do it differently, and we make them different sizes and put them in random locations? What happens under those conditions? Now think about it for a minute. And then I'm going to show it to you. And once again, even though they're different sizes and in different locations, they all go the same way. And when they shift-- again, use your forefinger to try to make them shift-- they all shift at the same time, OK? All right, so now we can move on and ask some interesting additional questions. We can ask, how important is color information for you to see it go one way or the other? All right? OK, so to do that, here the top two are colored red and the bottom two are colored green. Now we're going to move these back and forth. And the question is, will you see them move horizontally because of the color information, or what? So the question is, again, are you going to see zigzag or seesaw? OK, are you ready? And you can readily see-- again, you can readily shift it with your forefinger-- that color doesn't seem to be an important determinant of the direction in which the apparent motion takes place. Everybody follow that? Well, now think about that. How can we do this experiment so as to be more convinced that color, for some reason, doesn't seem to be important in how we see apparent motion? Well, let me tell you how you do it. Here is an example. Here we have again four rows and four columns, all right? In the first and third rows, A and C, the colors are paired horizontally. And in the second and fourth, B and D, they are paired vertically. So if color plays a role, we should see the first and the third rows go in one direction and the second and the fourth rows go in the other direction. So let's examine that. Ready? And you can see that they're all going the same direction anyway. When they switch, they switch in the same direction, meaning that indeed you pay no attention to the color itself, OK? Some mechanism is involved in which color does not play a significant role in the apparent motion you see. Now we can do another experiment and see what happens when, instead of colors, we use small shapes-- X's and O's-- which are paired horizontally in the first and third rows and vertically in the second and fourth rows. Will this succeed in breaking up the unity of apparent motion? So here we go. Ready? And you still see them all move the same way, meaning that you don't pay attention to the small differences in shape. OK, so now, what I have shown you so far, remember, has something to do with the midget and the parasol systems. The midget system can process fine detail. And the midget system can process color. The parasol system does not process fine detail and does not separately process color. So we'll come to that in a minute. Now let's ask the next question. What happens if you vary the size? OK-- again, in the first and third rows, the size pairing is horizontal.
And in the second and fourth rows, it's vertical. OK, so let's see what happens. Here it's a bit unclear, OK? As you see, some of them go one way, some go the other way. And some of you may see them all go the same way, depending on how far away you are. So let's change the size ratio so it's much, much bigger. And if you do it now, what you see is dramatic. You see horizontal motion in the first and third rows and vertical motion in the second and fourth rows. Everybody see that? OK, so we can enhance this maybe a little bit by adding these together-- size and color and shape, quite large differences in all three. If you do this, you will indeed have fully succeeded in breaking up the unity of motion, with the first and third rows going horizontally and the second and fourth rows going vertically. Now, there's one other important factor that we need to consider here. And that has to do with proximity. So in this case, what we're doing now: in the first and the third rows, we have the horizontal dots closer to each other. In the second and fourth rows, we have the vertical dots closer to each other. So if you do that, you're going to ask the question, how important is proximity? And if you do that, you don't see much so far. But now what I'm going to do is make the difference even greater, OK? The first and third are very close horizontally. And the second and fourth are very close vertically. If you do that, you can readily see that indeed the first and third rows move horizontally and the second and fourth rows move vertically. So proximity is a very important factor. And so that then brings me to an interesting question that probably none of you have given much thought. But I will make you more aware of it. The fact is that when you see movies, you very often see a car or a chariot or something move forward, but the wheels are going backwards. And you certainly see that when you watch those endless ads on television about cars. You see the wheels going backwards most of the time while the car is going forward. And some of you say, well, why would I buy a car whose wheels are going backwards? So at any rate, let's examine that question. And what I'm going to do first is show you a movie. Ready? This is a famous movie, Ben-Hur. You've probably seen it. Look at the wheels going backwards. Can you see that? Now, of course, since I have given this a lot of thought, it drives me nuts when I see wheels going backwards while the chariot or the car goes forward. All right, so now the question is, why do we see wheels going backwards in the movies or in the ads on television, whereas when you see it in real life, you never see them going backwards? Well, remember that when you look at a movie or television, you always see apparent motion. You never see real motion on television, never see real motion in the movies. You're not aware of it, but you don't. You only see apparent motion, because you have successive frames, in each of which there is a specific displacement-- say, if it's wheels, a rotation, OK? So let's analyze this. I'm going to show you a wheel that is rotating slowly. Are you ready? It's very brief. OK, you can see that wheel is going forward, right? Now I'm going to show you the same arrangement, but I'm going to have it move fast. Ready? Ready, go. That time you saw it go backwards, right? OK, so let's analyze this. What happens when you have slow motion? Let's imagine that the red line is designating one of the spokes.
Then if it moves slowly, the next frame is going to show that spoke displaced only a little bit from the red one, and so on. And because of that, you're going to see forward motion. But now, if instead it moves fast, what happens is that this spoke here is going to be closer to this one in the next frame. And because of that-- of course, that holds for all of them, all the way around-- you will see them go backwards, because of the principle of proximity that I showed you with the apparent motion of the moving dots. OK, so that then explains why you see wheels rotating backwards in the movies and on television. It's due to the proximity of successive elements-- in this case, spokes that rotate between the successive pictures taken by the camera. All right. So now I'm going to tell you briefly about another very interesting phenomenon that is closely related to motion. When two successive stimuli are presented, there can be an interaction between them, all right? In a famous series of experiments back in the 1930s, they did what is called a metacontrast experiment. First, a disc was presented, like this. And then it was followed by a ring. And they shared the same contour, OK? This is it, OK? So now what you could do is vary the temporal interval between them. If you present these two simultaneously, since they share the same contour, that's all you would see. But then if you present them sequentially-- ready? Like that-- then the question is, to what degree did you see that first one? So let's look at that. And I want you to pay close attention when I present this to you, OK? Are you ready? We're going to do it now so that they're only 60 milliseconds apart, OK? Here we go. You barely see the first one, right? You could just barely make it out. But it's not gone. Some people thought it would be gone altogether, but that's not the case. So what you do then is the following. If you fixate on the red cross, I'm going to present the stimuli. And I want you to tell me which of the two stimuli I present is preceded by the disc. Ready? Could you tell? Which one, left or right? Left, good. Again, repeat. AUDIENCE: Right. PROFESSOR: Oh. Ready? AUDIENCE: Left. Left. PROFESSOR: OK, very good. So the fact is that when you present two side by side, where one of them is preceded by the disc, you can tell which one it was. So this didn't obliterate your ability to see it. Now, the other interesting fact is that if you now flicker back and forth between them at equal rates, you see both equally. So therefore, what happened is you did not truly obliterate the percept. You just somehow weren't too aware of it, because of the temporal offset between the successive presentations. Now, you can do this slightly differently. Instead of having two stimuli that do not overlap but have adjacent contours, we can overlap them. So here I'm going to show you something and ask you what you see. Ready? What do you see? A big disc? OK? There it is. Now I'll show it to you. What I really showed you was first a small, dim disc, followed by a brighter, higher-contrast, bigger disc. And that obliterated the perception of the first one. And that one is truly obliterated. And the interesting experiment you can do in this regard-- you can do this interocularly. In other words, you can present one of the stimuli to the left eye and the other to the right eye. If you do that, you don't get a masking effect.
But if you go back and do the metacontrast interocularly, you still get the same pseudo-blockage-- without actually eliminating the first stimulus, but sort of making you less aware of it. OK, so now, if one examines this, the way to do the experiment right is to vary the temporal offset between the two stimuli. And when this was done systematically with the brightness masking that I showed you second, what happens is that the effect rapidly declines; it's biggest when the two are presented right next to each other in time. By contrast, when you do the metacontrast, the biggest effect is when they're separated by about 60 to 100 milliseconds, because that's when apparent motion is most effective. And then, if you do an experiment to see where this happens for the brightness masking, here's what happens in the retina, recording from a cell. A dim stimulus gives a smaller response with a longer latency. And the mask alone gives a much shorter-latency and much more vigorous response. And then, when you systematically vary the temporal interval between the two of them, you can see that when the interval becomes short, the second stimulus obliterates the first one, because it's faster and it has a bigger response. So this explains what happens: it happens in the retina by virtue of transmission, and by the fact that the rate at which information is transmitted from the photoreceptors to the retinal ganglion cells is contrast dependent. The higher the contrast, the faster the response. All right, so now does anybody have any questions about the masking or the metacontrast that we talked about so far? All right, in that case, we are now going to move on and talk about another important motion perception effect, one which links vision to eye movement generation. And that falls into the category of so-called optokinetic nystagmus. So how do you study optokinetic nystagmus at the very basic level? Well, in the olden days, you could take a large lamp shade like that, put vertical bars on the inside, and have a person put his head inside. And then you rotate the lamp shade so that the bars go across like that. Or you can do it more simply nowadays: you just put a bunch of vertical bars on a monitor and have them drift, and the person is asked to look at them. So when the person does that, what you get is the so-called optokinetic nystagmus. It has two phases, a fast and a slow phase. The fast phase is a saccade-- a resetting saccade, if you will. So you watch an edge, you follow it, and then once it gets to the periphery, you make a saccade back and you watch the next one. And you keep doing it. And this shows you three different rates. Now, this is quite an interesting mechanism, and a very important one, because when you track anything-- if you track a bird or something-- you keep your eye on it. And then once it gets past you, you make a saccade back, take the next bird, and track it. So that's the mechanism. And it's been shown that several neural mechanisms play a role in it. And one of those belongs to the so-called accessory optic system, which has those cells of Dogiel in the retina connecting to it. So we're going to look at that. Some beautiful work has been done on that. But before I do that, let me show you a very interesting factor.
Here is a typical optokinetic nystagmus induced by a drifting set of vertical bars at various rates, as you can see here. And this is a rabbit. In the rabbit, what you have is a left eye and a right eye, and you can stimulate them separately, as shown here. Now, a remarkable discovery that was made is that the accessory optic system plays a primary role in generating this optokinetic nystagmus, and that it originates with the cells of Dogiel-- of which in the rabbit, as I've said, there are about 7,000 out of the 350,000 ganglion cells in the eye. And it was found-- which was incredibly surprising-- that the cells of Dogiel themselves get input only from ON bipolar cells, OK? So what can we do to prove that this system is indeed one that is activated by the cells of Dogiel, given that the cells of Dogiel in the rabbit get input only from the ON bipolar cells? Can anybody think of an experiment? What did I tell you about the ON and OFF systems and how they can be manipulated? Anybody remember? Now, see, you've got to remember facts. And one of the facts I told you about was what happens when you use 2-amino-4-phosphonobutyric acid, APB. Anybody remember that? If not, you'd better study it, because there are going to be questions on that on the exam, of course. OK, so if you put APB into the eye, you block the ON bipolar cells. And therefore, if only ON bipolar cells feed into the cells of Dogiel, you would expect that if you block those bipolar cells, you should eliminate the optokinetic nystagmus in the eye that had been injected-- if it's indeed true that the cells of Dogiel are involved, OK? So that's a beautiful experiment, because it's a definitive one if indeed the data come out the way one had hoped. And guess what happens? Here are the data. The red ones are from the eye that had been injected with APB. You know, you blocked the ON cells, and hence you blocked the responses of the cells of Dogiel, all right? So this is a rather nice, luckily definitive, experiment. Now, the curious thing is, if you did the same experiment in a monkey, you would get nothing. Here you do the same experiment in a monkey. Here is your OKN. And here is when you put in APB. And still, after putting in APB, the monkey does optokinetic nystagmus. That's because the rabbit is a very unusual species. And god only knows why it happened that, there, only ON bipolar cells fed into the cells of Dogiel. In the monkey, you get both ON and OFF cells feeding into the cells of Dogiel. All right, so now another interesting factor that I want to mention just briefly: it is indeed true that a feedback mechanism is essential for you to do optokinetic nystagmus right. Here is a monkey in which one eye has been immobilized and the other eye is normal. If you stimulate the normal eye and you move a set of vertical bars very slowly, this is what your optokinetic nystagmus looks like-- just a very slow tracking. But then if you do the same thing with the immobilized eye-- and you, of course, record the eye movement of the other eye, which is covered-- what you get is a tremendously rapid increase in the pursuit movement. This can be called open loop, and this can be called closed loop. Now, the mechanism involved here is that your intent is to track. But when you realize-- or not you realize, but the brain realizes.
You're not even aware of it-- that you are not following fast enough to keep up with the motion of the vertical bar, you order the eyes to move faster and faster and faster, and the movement practically runs away like crazy. And still-- of course, because the viewing eye is immobilized-- it hasn't been able to catch up to the moving bar. So this further highlights the importance of this mechanism. And it has to be a closed-loop system, obviously, for it to work properly. There's a feedback signal telling you whether you have correctly tracked something. OK, so now I come to the last point today, which is the mechanism that is involved in the motion analysis of the accessory optic system. I'll talk about this only briefly, because this is going to be part of your paper. All right. Now, the remarkable discovery that was made-- an amazing discovery-- was that when people recorded from the cells of Dogiel, they found that these cells come in three different types, with three different axes of orientation-- these three axes. And when that was first found, people said, what on earth is going on? Why would we have three like that? I would have thought we'd have four-- vertical and horizontal at least, or something. So this was really puzzling. But then they began to do some more experiments. And they made a truly remarkable discovery. And that discovery is shown here in a summary diagram of the accessory optic system. OK, here are these cells of Dogiel with the three axes of direction selectivity. They respond to rather slow motion velocities. So what happens here-- I'm not going to go into detail, because, as I say, you're going to report on this. But let me tell you the essence of it. The cells of Dogiel project into the so-called terminal nuclei. And those then project up to the cerebellum. And from the cerebellum, there's a projection down to the vestibular nuclei. And then it was found-- as was known already-- that the semicircular canals come in three orientations, which are the same as the orientations of the axes of the cells of Dogiel. And so then they feed into the oculomotor system to move the eyes, so as to keep the eyes in accurate contact with whatever you're intending to look at. So that is the very essence of the accessory optic system. And I just thought this would help you in putting your paper together. All right, so now I am ready to summarize for you. First of all, motion has been classified into several different types, which include planar, circular, and radial. And one thing we talked about earlier that I mention here, although I didn't cover it in this presentation: when we talked about depth perception, we talked about motion parallax. That's a very important aspect of motion analysis. Secondly, the majority of cells in V1 and most cells in MT are directionally selective, OK? In addition, as I mentioned earlier, they also show velocity selectivity. Some V1 cells respond to different directions of movement for light and dark edges, like the S2-type cell I showed you. And some cells are sensitive to differential velocities of movement, which I didn't show. Then, the accessory optic system, which begins with the retinal ganglion cells called the cells of Dogiel, forms three axes of direction selectivity that correspond to the three axes of the semicircular canals. And it is involved in generating pursuit eye movements for image stabilization. Then, one of the most important tasks of motion analysis is motion parallax, as it provides information about depth.
Motion cues can provide important information for object recognition, often referred to as structure from motion. And stationary stimuli that flicker with various temporal offsets induce apparent motion, of which I showed you a lot of examples. And then we talked about metacontrast masking, which is a phenomenon that occurs when two stimuli share contours. When they're presented with an interval that corresponds to apparent motion, you tend not to see the first one. Lastly, brightness masking arises with overlapping stimuli appearing in rapid succession. And this does not occur with interocular presentation, proving that it's a retinal phenomenon. It has to do with the fact that the speed with which a stimulus activates the retinal ganglion cells-- and hence, of course, the cortex-- is much, much slower for dim stimuli than for bright ones. And so a bright stimulus, in a sense, overtakes the response to a dim, weak stimulus when the two successive presentations are very rapid. OK, so that's the end of it here, OK? And I'll show you one more motion effect. Ready? All right, so that is the story for today. And next time, you're going to have, as I said, a rapid summary of what we have covered in the first half of this course. And then the following Wednesday, we are going to have the multiple-choice midterm exam. OK, thank you kindly.
MIT_904_Sensory_Systems_Fall_2013
1_Introduction_the_visual_system.txt
PROFESSOR: This is our first introductory meeting of the course, which is 9.04. And we are going to cover vision and audition in this course, and there are going to be two of us lecturing. My name is Peter Schiller, and this is Chris Brown. And I will be talking about the vision portion, and Chris will be lecturing about the auditory portion. Now, what I'm going to do is hand out the syllabi that we have-- in this case, for the first half of the course. We are going to discuss that in some detail today, for the first half of the lecture, and Chris is going to discuss his for the second half. So that is the basic plan for today. And I will go through some of the basic procedures and issues that we may want to deal with at this very introductory point. So first of all, let me talk about the reading assignments. If you have the handout, they are ready for you. If you look at the second page, that's where we have the assigned readings for the vision half of the course. Now, for that half of the course, the top eight assignments are all articles in various journals. We don't have a textbook for this portion of the course. And then, in addition to the assigned readings, we have recommended readings that are listed there. And then another important factor that is listed there-- let me first say that the lectures will be put on Stellar, in most cases after each lecture. And in addition, the videos that we are now recording will also become available, but they will not be available until well after each lecture. So I would advise each of you to come to the lectures rather than hoping to only read the assigned material or to eventually look at the videos. The reason I'm telling you this is that our analysis has shown that those students who attend the lectures regularly get much better grades on the exams than the students who do not. So I will strongly urge all of you to come to as many lectures as you possibly can. Now, the additional requirement that you're going to have for this course is to write two research reports, one for vision and one for audition. And the assigned written report that you need to put together is based on a paper listed at the bottom of the second page. In this case, it's going to be a paper that was written quite some years ago-- a very important and remarkable paper published by Oster and Barlow, as you can see. And the task for you will be not just to report what they reported, because that's repetitious, but to do a bit of research and write about what has been discovered since the remarkable findings that these two people made at the time. All right. So that's the research report. And then I want to specify the exams. We are going to have a midterm exam, and the exact date for this has already been set: October 23. All right? And I will specify that in more detail in just a minute. And then we are going to have a final exam at the end of the term. The exact date for this, as always at MIT, will not be set until probably sometime in November. So now let me also specify the grade breakdown. I think that's important for all of us.
The written report for each half of the course-- there's going to be one report, as I've already said, for vision and one for audition-- will constitute 10% of the grade for each. The midterm exam constitutes 25%. The final exam constitutes 55% of the overall grade, and of that, 15% will be on vision and 40% on audition. So if you add that up, you can see that vision and audition are set up to be exactly equally weighted for the exams. MICHELLE: Hi. I'm Michelle. I'll be helping the professors, especially with [INAUDIBLE]. PROFESSOR: So I'm Chris Brown, and I'm one of the instructors. I'll be teaching the second half. And my research is in two areas: brain stem auditory reflexes, like the startle reflex and the middle ear muscle reflex, and animal models of the auditory brain stem implant, which is a neural prosthesis that's used in deaf individuals. PROFESSOR: All right. And I'm Peter Schiller, and I work on the visual system. And I'm a professor here in our department. So that's very nice. Thank you for the introductions. And I hope, you guys, we all get to know each other. I'm very impressed that there are so many seniors here. That's actually unusual. I don't remember having this high a percentage of seniors in the class. That's really very nice, very nice. OK. So now we are going to talk, for the first part of today's lecture, about what aspects of visual processing we are trying to understand and, therefore, what we are going to try to cover in this course in terms of topics. OK? So first of all, what we are going to do for several lectures is to talk about the layout and organization of the visual system itself. Most of it we will discuss as it applies to higher mammals, in particular monkeys and other primates, including humans. Then we are going to talk about specific aspects of visual processing. We're going to try to understand how we adapt in vision and, very interestingly, how we are able to perceive colors and process them accurately. Another fascinating topic is how we are capable of analyzing motion. That's a complex, very interesting topic, as is depth perception. And the reason depth perception is particularly interesting is because, as you know, the retinal surface is essentially a two-dimensional arrangement. And yet from whatever falls on these two dimensions in the left and right eyes, somehow the brain needs to perform a conversion to be able to see the third dimension. And as a result, several mechanisms have evolved to accomplish that, and we are going to discuss them. Then, again, another very complex topic is how we can recognize objects. Perhaps the most complex of those is our incredible ability to recognize faces. And that is highlighted, of course, by the fact that if you look at more simple organisms-- like, I don't know, monkeys-- they all look the same to you. But human beings, who are actually more similar to each other than perhaps monkeys are, we are really capable of telling apart and readily recognizing over long periods of time. So it's a very interesting topic. And yet another topic that we will discuss is how we make eye movements. As you are probably aware, we are constantly moving our eyes. You make saccadic eye movements about three times a second-- hundreds of thousands of times a day-- to be able to see things clearly in the world. So we are going to try to understand how that incredible ability has evolved and how it is realized by the brain. OK.
So now, to look at exactly how we are going to cover this, let me go through the schedule. During the next lecture, which is September 9, we are going to look at the basic layout of the retina and the lateral geniculate system, as well as how the visual system in general is wired. Then on September 11, we're going to look at the visual cortex, then at the so-called ON and OFF channels-- you'll see what they are once we talk about it. And then there's another set of channels that originates in the retina, which are the midget and parasol channels. We'll discuss those and try to figure out why they evolved and what their role is in being able to see the world in a realistic fashion. Then we're going to talk about adaptation and color, depth perception, form perception. And then we're going to have a lot of fun on October 2, when we're going to look at illusions and also visual prostheses, because one of you, in particular, is interested in that topic. Then we are going to talk about the neural control of visually guided eye movements. That's going to consist of two sessions. And then we're going to talk about motion perception and another aspect of eye movements, when we pursue something with smooth eye movements. And then we're going to have an overview. And then, lastly, on October 23, we are going to have the midterm exam. That's going to cover questions from all of these lectures. I should tell you right now that the midterm exam is going to consist of multiple-choice questions. So you're not going to be asked to write anything. You're going to have to just pick the correct answer for each of the questions. All right. So now what I would like to talk about next, in a summary fashion, are what we call the tools of the trade: over the many years that scientists have tried to understand how the visual system-- and, for that matter, the brain-- works, what kinds of methods have been employed? I'm going to talk about each of these just very briefly this time, and then they will come up repeatedly during all of the lectures. Now, the first method I'm going to talk about is called psychophysics. I'm sure most of you know what that is. It's a scientific way to study the behavior of humans and animals to determine how well they can see. Now, there are several procedures for this. I'm going to describe one that's used both in humans and monkeys. Nowadays you can use a color monitor, and I will describe that in just a second. After that, I will talk about anatomy. I will talk about electrophysiology, pharmacology, brain lesions, imaging, and optogenetics. So now let's start with psychophysics in more detail. So here is a color monitor, and either monkeys or humans can be trained to first look at a fixation spot. And that's important because we want to always be able to present stimuli in selected locations of the visual field, or selected locations along the retina. This is particularly important because, when you study the brain, different regions of the visual field representation are located in different areas, for example, in the visual cortex. So what you do then is you can present a single stimulus like this, and the task of the human or the monkey is to either make a saccadic eye movement to it-- say that's where it is-- or to press a lever that's in front of them. And then on each trial, it appears someplace else. You can present it in many different locations, and maybe one of those locations will be relevant to what part of the brain you are studying.
And then what you do, you can systematically vary all kinds of aspects of the stimulus. You can vary the color. You can vary the contrast. You can vary the size. You can vary the shape. And by systematically varying these, you can create curves to describe exactly how well you can see any particular thing-- for example, just how much contrast you need to perceive something. All right. So that's called the detection task. Now, a related task, which has also been used extensively, is called the discrimination task. In this case, you present a fixation spot again. The person or the monkey fixates, and then you present a whole bunch of stimuli, one of which is different from the others. And you have to select where that one appeared, the one that's different, by making an eye movement or pressing the appropriate lever. Now, when you do this, you can systematically vary the difference between the so-called distractor stimuli, which are all identical, and the target until the person is no longer able to tell the difference. And that way you can, again, generate a curve. And you can specify just what amount of difference is needed-- say, how good you are at perceiving slightly different colors. All right? And by doing that systematically, you can generate these functions using these psychophysical procedures to determine pretty well how you're able to see. Now, this particular approach is also very useful when it comes to studying individuals, humans in this case, who have some problems with vision. So if they have a problem in seeing colors, you can readily determine, well, what's the magnitude of that problem? And thereby it can tell you what procedures might be used to try to ameliorate their shortcoming. Now, another method that has been used extensively, not only in vision but in many, many different areas of studying the brain, including audition of course, is anatomy. Numerous methods have evolved in the course of anatomists working on these problems, and the first is a very simple one. You just look at the whole brain. And I'm showing this because this is a monkey brain, and you will encounter the monkey brain repeatedly. And it so happens that after people had studied this extensively, they were able to give names to just about every gyrus and every brain area and also relate them to what the function is of those areas. And so, for example, just to name a few of these, this is called the central sulcus. I need to do one more thing here. OK. So this here is the central sulcus. All right? Just for you to remember, humans also have a central sulcus, of course. And this is the lunate sulcus. And this region back here-- let's see, did I label this? Oh, here are a couple more, the arcuate and the principalis. You will encounter these repeatedly. And this region back here is the primary visual cortex in monkeys, which has been extensively studied and has yielded remarkable discoveries about the way it works. It is called area V1, or primary visual cortex. All right. So now, just to show you a few more examples of anatomy, here's another example showing what the eye looks like. And the remarkable thing about the human eye-- and the eye of monkeys and other primates as well-- is that it has become highly specialized. There's a region here which is called the fovea. And in that region, you have a very, very dense distribution of photoreceptors and other cells in the retina. And because of that, you have very high acuity there.
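To make the idea of generating such detection and discrimination curves concrete, here is a minimal sketch of fitting a psychometric function and reading off a threshold. This is an illustration only, not course code: the contrast values, detection rates, and the logistic form are invented assumptions for the example.

```python
# Minimal sketch: fit a logistic psychometric function to hypothetical
# detection-task data and estimate the contrast detection threshold.
# All numbers are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

contrast = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])  # stimulus contrasts tested
p_detect = np.array([0.05, 0.10, 0.35, 0.80, 0.95, 1.00])  # fraction of trials detected

def logistic(c, threshold, slope):
    """Detection probability as a function of contrast."""
    return 1.0 / (1.0 + np.exp(-slope * (np.log(c) - np.log(threshold))))

(threshold, slope), _ = curve_fit(logistic, contrast, p_detect, p0=[0.05, 2.0])
print(f"contrast at 50% detection (threshold): {threshold:.3f}")
```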
Now, because of that, effective eye movements are needed for you to be able to see fine detail. For example, what do you do when you read? You make saccadic eye movements across a line-- three or four saccadic eye movements-- then you go down to the next line, and so on down the page. And you do that because you cannot make out the details of letters in the periphery, because there the distribution of the photoreceptors, and of the cells in the retina in general, becomes less and less dense. So that is a high degree of specialization that we will discuss in much more detail next time. Now then, all the fibers from the retinal ganglion cells course across the inner surface of the retina and go to the so-called optic nerve, through which over a million fibers from the retina project into the nervous system. Exactly how they project I will discuss in considerable detail next time. Now, this area here is also called the blind spot, and normally you are not aware of it, even if you close one eye. But if you do a careful experiment-- I'll explain that next time-- you can actually map out this little region where you don't see anything. The blind spots are in different locations in the two eyes, so the two do not overlap. And so, consequently, when you look with both eyes, you don't have a, quote, blind spot. So that's an example of what the human retina looks like, and this has been studied extensively using a whole array of anatomical procedures. Now, the third anatomical procedure I want to tell you about is labeling individual cells. The way this was done, or still is being done, is that you slice the brain into very, very thin sections. You put them on glass, and then you can look at them under a microscope. Now, here's an example of a coronal section of the monkey lateral geniculate nucleus. That's one region in the brain to which the retinal ganglion cells project. And it's a beautifully layered structure, which I'll describe in detail next time. And these little spots that you see here are actual cells, which are labeled using a so-called Nissl stain. Now, another method used in staining cells is the famous Golgi stain, which was discovered-- invented, perhaps, you could say-- by Golgi, for which he received the Nobel Prize in 1906. The remarkable quality of those preparations is that this label-- it's a silver label-- stains not only the cell bodies, as the Nissl stain you have just seen does, but also all the processes, the dendrites as well as the axons. So you see a whole cell as a result of that staining procedure. Yet another way to do this, which is more sophisticated nowadays, is to record intracellularly from a cell and then inject a label. This happens to be the so-called Procion yellow labeling substance. You inject it-- in this case in the eye-- and then you process the tissue, again in thin layers. And this is an example of what that looks like. So this also stains all the processes of the cell. And the advantage here is that you can study this cell electrophysiologically and determine what it is like, and then stain it, so you can establish the relationship between what the cell does and what the cell looks like. All right. So now let's turn to the electrophysiological method, which is a logical consequence of this. Once again, here we have a monkey brain. And what you do here is put microelectrodes into the brain. Now, this was a development that was made around the turn of the century, or a little bit after.
Initially, microelectrodes were made from very thin tubes of glass, which were heated and then pulled so that the tip became smaller than a micron. So it was very, very small. Subsequently, other methods were developed. People etched fine pieces of wire until the tip was very, very small, and then they coated them. And then they were able to put these electrodes into the brain and record single cells, just as with glass pipettes. So the procedure, then, is that you take a microelectrode, put it into the brain, and connect it to an amplifier system and a computer. And when you do that, you can record from a single cell. Now, as you well know, single cells generate action potentials. And that is shown here on an oscilloscope in a schematic fashion. And what some clever people did is that they decided that an easy way to process information about the manner in which single cells generate action potentials is to put this signal onto a loudspeaker system. And so every time a cell fired, what you would literally hear is beep, beep, like that. OK? And so if you shine a light on it several times, it will go like brrrp, like that. And the big advantage of this was that many cells see only a tiny portion of the world. And if you don't know where that is, you have to take some projector or something and move it around. And if the receptive field is here, you go, brrrp, brrrp, brrrp, brrrp, like that. OK? And so then you can map it out very accurately. Instead of having to watch the oscilloscope, you hear it, and that enables you to perform all sorts of experiments while listening to the responses of the cell. Now, another method that is used extensively in electrophysiology is called electrical stimulation. It's a similar process. You take a microelectrode, for example, you put it in the brain, and then you pass electric current. Typically, that electric current is passed in such a way as to mimic action potentials. So if you were to listen when it's activated, again, brrrp, you would hear that. But this time, instead of the cell firing, you are firing the area there, and that can then elicit all kinds of responses. If you stimulate here, for example-- remember now, this is the visual system in the monkey-- as has been shown in humans, electrical stimulation elicits a percept, a small spot, a star-like image. In the auditory system, when you stimulate, you can hear something. And if you stimulate in the areas that are related to moving your eyes, then the stimulation causes a saccadic eye movement. So all those methods are very, very good for trying to better understand the organization of the brain-- in this case, for vision and for eye movements. Now, yet another set of methods that is used is pharmacology. When you do pharmacological experiments-- there are many different procedures; I'll just describe one here-- once again, you can stick a glass pipette into the brain, and then you can inject through it, either with a syringe or using several other methods, and you can inject all kinds of agents. For example, you can inject a neurotransmitter analog or a neurotransmitter antagonist to determine what effects it has in various parts of the brain-- in our case, in the visual system or the ocular motor system. Now, yet another method that has been used is brain inactivation.
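The loudspeaker trick just described-- hearing each action potential as a click-- is easy to mimic in software. Here is a toy sketch, with invented spike times rather than a real recording, that writes a listenable click train to a WAV file:

```python
# Toy sketch of the "audio monitor" idea: turn spike times into a click
# train you could listen to. Spike times here are invented for illustration.
import numpy as np
from scipy.io import wavfile

fs = 44100                      # audio sample rate (Hz)
duration = 1.0                  # seconds of audio
spike_times = [0.10, 0.12, 0.13, 0.40, 0.41, 0.80]  # seconds (hypothetical)

audio = np.zeros(int(fs * duration), dtype=np.float32)
click = np.ones(int(0.002 * fs), dtype=np.float32)   # 2 ms click per spike

for t in spike_times:
    start = int(t * fs)
    audio[start:start + click.size] += click

wavfile.write("spike_monitor.wav", fs, audio)  # each spike becomes an audible click
```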
Several procedures are available to inactivate the brain. Once again, here's a monkey brain. And this region here, which I already told you a little bit about when naming the gyri, is called the frontal eye fields. This region has something to do with moving your eyes. So if you want to study and find out just what this area does-- or, again, here is V1-- one procedure is to make a lesion. Now, sometimes you actually do that in a monkey. But sometimes humans have had an accident-- some event when they served in Vietnam or something-- and a region of the brain has been destroyed by a bullet or something like that. And that way you can find out what the consequence is of having a lesion like that. You can use a psychophysical procedure, as I described to you, to determine just what the consequence is of having lost this area. This is a huge region of research. A great many types of experiments have been done. One of the famous individuals who did this kind of work is Hans-Lukas Teuber, who used to be the chairman of our department, starting way, way, way back in 1962. And his specialty was to study Second World War veterans who had sustained various kinds of brain injuries. And on the basis of that-- studying them using psychophysical procedures-- he could assess what kinds of functions various areas in the brain have and what kinds of tasks they perform. Now, yet another method, which has some major advantages, is to make reversible inactivations rather than permanent ones. And this you can do by, for example, using a method of cooling. There's a device, the so-called Peltier device, that you can put on top of the brain. And then, electronically, you can cool the surface that is in touch with the brain, and then you can see what happens when you cool it. And then when you warm it up again, you can see what happens during recovery, which, in almost all cases like this, leads to full recovery and the same performance as prior to starting the cooling. Yes? AUDIENCE: Can you only use this method for surface structures, or? PROFESSOR: No. They have now developed methods where you can actually lower them into the brain. And they're usually much finer, and very often they're sort of a loop-type device. And you can lower that into the gyri or wherever you like and, again, do the reversible cooling. Yes. That's a fairly recently developed device, and it works extremely well. Now, yet another approach is to inject substances into the brain that anesthetize, if you will, a particular region after the injection, but only for a limited time. And then you can see how the behavior is affected. Now, a variant of that, which I referred to before, of course, is that you can use agents that are selective-- that don't inactivate the whole area but, for example, only affect excitatory neurotransmitters or only affect inhibitory ones. So you can do all kinds of selective manipulations, and you can establish the role that various neurotransmitters play in various brain areas. Now, yet another method, very recently developed, which has been incredibly successful, is imaging. As you probably all know, we do have an MRI facility here in our department on the ground floor, and some of you may even have been subjects in it. The variant of it that we use is called fMRI, functional Magnetic Resonance Imaging.
And so if you put a person in the magnet and you present a certain set of stimuli repeatedly, OK, or differential ones, whatever, the brain areas that are active performing the analysis you ask for light up. So that's the method-- here's a complex picture of it. This is a setup in Freiburg, Germany. This is done with monkeys again. You have this device, and you put the monkey in there. You lower the device onto it, and then you can have the subject perform trained tasks. And then you can analyze the brain and look at the nature of the activation. And here's an example of that-- whatever this task is, it doesn't matter. You can see that a particular brain region has been very heavily activated as a result of whatever manipulation they did. Now, there are lots and lots of experiments of this sort, and we will talk about several of those as we look at various aspects of the visual system. Now, the last method I want to mention very, very briefly is optogenetics. That's a new method, and we actually have several people in our department here who are using it. Let me explain just very quickly what this is all about. Does everybody know what an opsin is? OK. How about if I say rhodopsin? OK. Most of you will know that. Rhodopsin is a molecule found in the rods among the photoreceptors, and it is sensitive to light. Now, there are all kinds of variants. That's why I mentioned opsins rather than rhodopsins. And what can be done is that you can selectively place these various substances into selected types of cells in the brain. And then, because these cells become light-sensitive just like the photoreceptors are, when you shine light onto the area where you had placed these opsins in cells, you can drive them. You can make them respond by turning the light on. Now, the amazing thing about this-- and what makes it a more powerful technique than electrical stimulation-- is that you can set it up in such a way that, for example, if you use an opsin that is sensitive to red light, the cells will be excited by the red light. But if you add a slightly different substance whose cells would be inhibited by blue light, then you can see what happens when you excite those cells and when you inhibit them. So this gives you two sides of the coin, whereas electrical stimulation provides only one side. So that's a wonderful technique. And here's a good example of it. Here we have injected-- I shouldn't say injected-- genetically labeled cells with so-called channelrhodopsin. When that's done and you shine in a blue light, you excite the cell. If instead you put in so-called halorhodopsin, then if you use yellow light, you inhibit the cells. So that's a remarkable technique. It's just at the very beginning of things, so we won't talk too much about this technique in studying the visual system yet. But I bet you that in another 10 years this is going to be a central topic. All right. So to summarize these techniques, other than the psychophysics, let me just remind you again: number one, we have electrical recording using microelectrodes. OK? Then secondly, we have electrical stimulation. Thirdly, we have injection of pharmacological agents. Then we have methods to inactivate regions, either permanently by lesions or reversibly by cooling or by injecting various substances. And lastly, we have optogenetics, which enables you to activate cells or inhibit cells by shining light onto the brain.
So these are quite a remarkable set of techniques. And individuals who want to become neuroscientists are going to have to learn to master maybe not all of these techniques, but certainly some of them, so that they can carry out new and original experiments to determine how the brain works and, in our case, of course, how the visual system works. So that is, in essence, what I wanted to cover. And now we are going to move on and have Chris tell you about his portion of the course, which will be taught during the second half of this semester. So as I said, next time we are going to start talking, first of all, about the wiring of the visual system, the basic wiring. And then we're going to talk about the retina in detail and a little bit about the lateral geniculate nucleus. That's the next lecture. Please. AUDIENCE: So for the eight assigned readings that you have on here, how will we know when to read what? PROFESSOR: Well, that's a good question. The readings that have to do with eye movements, as you can see, you don't have to read until we get to the eye movements in the latter part. Initially, when we talk about the retina, for example, you definitely want to read the [INAUDIBLE] paper and the Schiller paper on parallel information processing channels. Then the ones that have to do with the Hermann grid illusion and visual prostheses you don't have to cover until we come to the section on illusions and visual prostheses. PROFESSOR: OK. Welcome, everybody. I'm Chris Brown. And I just, in the remaining time, wanted to give you a synopsis of what's going to happen during the second half of the term. So I'll be giving the lectures during the second half on the topic of audition, or hearing. And there's my email, [email protected]. So as you can see by my email, I'm associated with Harvard-- in fact, Harvard Medical School. And I'm in the so-called ENT department at Harvard Med School, and that stands for Ear, Nose, and Throat. So some of you who are going to be going to medical school will certainly do an ENT rotation, where you learn about the various aspects of ENT. And much of it, of course, is the subject of otology: what happens when people have disorders of hearing, problems with their hearing. And in addition, many ENT doctors also operate on people who have head and neck cancers. So surgeries of those two types go on at my hospital, which is the Massachusetts Eye and Ear Infirmary. That's across the river in Boston, and it is, of course, one of the main teaching hospitals for ENT as well as ophthalmology. There's a big ophthalmology department where the ophthalmologists deal with disorders of sight and vision. So I have an introductory reading, which is a book chapter that I wrote together with Joe Santos-Sacchi, which is actually now in the fourth edition of a textbook called Fundamental Neuroscience. And I believe this book chapter is on the course website now. And it summarizes pretty much what I'll cover during the semester in a reading that you could probably do in an hour or less, and it has many of the figures that I'll use. So if you're shopping around for a course and want to know what's going to happen in the second half here, you can look at that book chapter. There's a nice textbook that I'll also be assigning a number of readings from throughout the term, and it's called Auditory Neuroscience: Making Sense of Sound, by Schnupp, Nelken, and King.
And these fellows are, in the case of the first and the last, at Oxford University, and they work on psychophysics-- that is, how we perceive sounds. They test humans, and they also do a fair amount of animal psychophysics. And in the case of Israel Nelken, he's at Hebrew University in Israel. And he does a lot of electrophysiological recordings-- you heard about electrophysiology from Peter's talk just now-- especially recordings from the auditory cortex. So this is a very nice book, especially for coverage of the central auditory pathway and psychophysics. And it's pretty cheap. I think it's $30. And I believe I was told earlier that you can get it, as an MIT student, online for free. And what's good about the online edition is there are lots of demonstrations, each indicated by a little icon in the margin of the text. And when you click on that demonstration, if you have your earbuds in, you can hear what the sound demonstration is. And I'll be doing quite a lot of sound demonstrations through the course of the semester, because I think it livens up the class a little bit, and this book is especially good for sound demos. So I encourage you at least to get the online edition of that textbook. So also on the course website is the syllabus for what the audition lectures will cover, and I just put a couple here to give you a flavor. In the first lecture, on October 28, we'll talk about the physical characteristics of sound and what happens to that sound when it strikes your external, and then your middle, and inner ears. And associated with each lecture is an original research article that, in this case, is Hofman, et al., and the title is "Relearning Sound Localization with New Ears." And so, in a nutshell, what they did in that research report was they took little pieces of clay, and they inserted them into their external ears, or pinnae, and thereby distorted their pinnae quite a lot. They couldn't do what van Gogh did, which was cut off the pinna, but they certainly distorted their pinnae quite a lot on both sides. And then they tested, using themselves as subjects and other volunteers, how good their ability to localize sound was. And especially when the sound varied in elevation, they found that there were huge differences: with these distortions, they couldn't distinguish sounds that were straight ahead from sounds that were coming from above them. But what was funny, and what harks back to the title of the article, is that when they had the volunteers go out and live their normal lives with these pinna distortions in for a few weeks and then come back into the lab to be tested again, they found that they could now localize sounds in elevation with these new ears. They relearned how to localize sound. So this is a nice demonstration of learning, or plasticity, at least in psychophysical responses. And it also emphasizes the function of your external ear, which helps you localize sounds in space. In the second lecture, we'll be talking about the receptor cells for hearing, which are called hair cells because they have these little appendages at their top that looked to the early neuroanatomists like hairs. And of course, sound is a mechanical energy that moves the fluid in which these hair cells are immersed and moves these little hairs, or appendages, at the top of the cell, and that's how the cell can then respond to sound.
And so hair cells are the very important receptor cells for hearing, which are, of course, the analogs of the rods and cones in the visual system. And the research report associated with our talk about hair cells will be how a special protein called prestin is required for electromotility, which is a function of outer hair cells, one of the two types of hair cells. Electromotility allows them to actually move and flex and change their length when they sense these mechanical disturbances with their stereocilia. So it's a pretty interesting paper in what we call a knockout animal. The prestin is genetically knocked out in this particular animal, and the sense of hearing is then tested in these knockout animals. So we have a whole bunch of lectures. I haven't indicated all of them. They're on the course website. Toward the end of the semester, we'll have, as Peter indicated, a written assignment for audition. I haven't actually thought it up yet, but let me just give you an example. Last year and this year, we'll be talking a lot about neural prostheses for audition. And the most famous neural prosthesis for audition, of course, is the cochlear implant, which I'll talk a little bit about later, and it works quite well. There's also a neural prosthesis that goes into the auditory brainstem. It's called the auditory brainstem implant, and it's used in some other types of individuals who are deaf. And it doesn't work anywhere near as well as the cochlear implant. It's sometimes called a lip-reading assist device, because people who have the brainstem implant usually can't understand speech over the telephone. They need to be facing you and looking at your lips as you're speaking. They need additional cues. And so there'll be a lot of discussion this term about the differences between these two types of implant, where they're put in the auditory pathway, and why one works much better than the other. And that was the written assignment for last year: a discussion of why the cochlear implant works a lot better than the auditory brainstem implant. So I'll have something along those lines that uses the material from our course that you can take to answer a question. OK. Let me just go through, in half a dozen slides or so, what I consider to be the high points of the auditory part of the course. About the first third of the auditory part of the course will be a discussion of the auditory periphery. And the periphery is usually divided into three basic divisions: first, the external ear, which most of us think about as the ear-- the pinna and the ear canal, which leads to the tympanic membrane, or eardrum, here in yellow. And at that point begins the middle ear. The middle ear is an air-filled space. If you've been on a recent plane flight and your ears are a little bit stuffed up, it can be very uncomfortable, especially when the plane is coming down in altitude. Your eardrum bulges, because the eardrum is just a very thin layer of skin, and it can bulge very easily. And it's painful when the eardrum bulges as the pressure changes while you're descending in a plane. In the air-filled space of the middle ear are three small bones: the malleus, the incus, and the stapes-- if you remember from high school biology, the hammer, the anvil, and the stirrup. The stirrup looks like what a cowboy has on his saddle that he puts his boots through. It's a very, very small bone. In fact, it's the smallest bone in the body.
These bones are very small because they have to vibrate in response to sound, and so they can't be big and massive. Massive things don't vibrate very well. The stapes is sometimes encompassed by bony growths around it and prevented from vibrating, in a disease called otosclerosis. And so at my hospital, the Massachusetts Eye and Ear Infirmary, they do an operation to cure that type of deafness, called the stapedectomy. So what's an -ectomy? You medical types, what does that mean? AUDIENCE: Removal. PROFESSOR: It means removal, right. So they take the stapes out. And that's because if they just loosen it up, the bone regrows and again keeps the stapes from vibrating. So they replace the natural stapes with a stapes prosthesis, either a little piston or a tube, which they hook with a wire onto the incus and put into the so-called footplate area, or oval window, of the cochlea, which is the next structure I'll talk about. And that very nicely restores the sense of hearing. In fact, when I was a postdoctoral fellow at Mass Eye and Ear, I could go watch the surgeries, and I watched a stapedectomy. And the patient was anesthetized, but not so much that she was really out. She was more sedated. And at the end of the operation, the surgeon was adjusting the artificial prosthesis, and the surgeon said, well, can you hear me? And the patient didn't respond. So he moved it around a little bit, or adjusted the wire, I don't know which. He says, can you hear me now? And there was no response from the patient. And he did a little more manipulation and adjustment. And he finally said, can you hear me? And the patient said, why are you yelling at me? I can hear you just fine. So usually at the end of the operation, the anesthesia has become lighter-- the anesthesiologist turns off the anesthesia-- and they adjust the stapes prosthesis so it works well. So that type of operation is fairly common and very successful at restoring hearing in this so-called conductive type of hearing loss. These bones conduct the acoustic vibrations into the cochlea. Now, the cochlea-- this is the structure here. The cochlea is the inner ear. The word comes from the Greek "kokhlias," which means snail, and it is certainly a snail-shell-shaped capsule: the cochlea looks like a coiled snail shell. Inside it are the receptor cells for hearing, the hair cells, and the dendrites of the auditory nerve fibers. OK? And the cochlea is a bony capsule filled with fluid and membranes and cells. And this anatomy is a little bit complex, so I brought in a model here of the auditory periphery. So we have the external ear, the long ear canal here, and the eardrum-- it's kind of slanted here. And this part here that I'll lift out is the cochlea, or the inner ear, the snail-shell-shaped area. And leading from it is the, in this case, yellow-colored auditory nerve, which sends messages from the cochlea into the brain. And these funny, loop-shaped structures that I'm grasping are the semicircular canals, which mediate the sense of balance, or angular acceleration. When you rotate your head, the hair cells in there are sensitive to those rotations. OK. So I'll pass this model around. You can take a closer look at it. And in our hospital, the surgeons practice on real specimens like that, from postmortem material, because in otologic surgery there's a lot of drilling to access, for example, the middle ear or the inner ear.
And there are a lot of important structures that you don't want to run into with your drill bit-- the jugular bulb is that red thing there, and the facial nerve goes right through the middle ear. So the surgeons need to know their way around the middle ear so that they can avoid important structures and go to the right structure that they intend to operate on. OK. So you heard Dr. Schiller talk about electrophysiology and recordings from individual neurons. And a lot of what we know about how the inner ear works comes from those types of experiments. And this is an experiment at the top here that shows the responses in the form of action potentials. Each one of these blips is a little action potential, or impulse, or response from one single auditory nerve fiber, recorded in the auditory nerve of a cat, which is a very common model for auditory neuroscience, or at least it was in years past. So this response area is a mapping of sound frequency. This axis is tone frequency. And the frequency of a sound waveform is simply how many times it repeats back and forth per second. Frequencies that are common in human hearing are, for example, 1,000 hertz. This is a graph of tone frequency in kilohertz. And the upper limit of human hearing is approximately 20 kilohertz. The lower limit of human hearing is down around 50 or 100 hertz. Smaller animals like the cat are shifted up in frequency by perhaps an octave-- a doubling of the frequencies that they're most sensitive to. This auditory nerve fiber responded to a variety of frequencies, except at the very lowest sound level. The y-axis is a graph of sound level. This is a very low-level or soft sound, this would be a medium sound, and this would be a very high-level sound. At the lowest levels of sound, the auditory nerve fiber only gave a response to frequencies around 10 kilohertz, or 10,000 cycles per second. There are some spontaneous firings from the nerve fiber, and those can happen even if there's no sound presentation. These neurons can be spontaneously active. If you outline this response area, you can see that it's very sharply tuned to sound frequency. It only responds around 10 kilohertz. And this exquisitely sharp tuning of the auditory nerve is perhaps the way the auditory nerve sends messages to the brain that the ears are hearing only 10 kilohertz-- not 9 kilohertz and not 11 kilohertz. OK? If you increase the sound to higher levels, this auditory nerve fiber, like others, responds to a wide variety of sound frequencies, but it has a very sharp cutoff at the high-frequency edge. Maybe at 11 kilohertz it responds, but at 11.1 kilohertz there's no response. So the tuning becomes broader, but there's still a really nice, sharp high-frequency cutoff. So what is this good for? Well, the ear is very good at resolving frequency-- saying there's 10 kilohertz but not 9 kilohertz. And that's very important for identification of sounds. For example, how do we know, if we're talking on the telephone and not seeing the person who's talking to us, that it's a female speaker or a male speaker or an infant speaker? Well, male speakers have lower frequencies in their speech sounds. And so right away, if we hear a lot of low frequencies in the speech sounds, we assume we're talking to a male speaker. And that's, of course, a very important identification. How do we know we're hearing the vowel A and not the vowel E, ah or eh?
Because of the different frequencies in the speech sounds for those two vowels. So frequency coding is a very important subject in the auditory pathway for identifying and distinguishing different types of sounds. And one way we know what frequencies we're listening to is if the auditory nerve fiber tuned to a particular frequency [INAUDIBLE] responding and not the others. Now, some very elegant studies have been done to look at the mapping of frequency along the spiral of the cochlea. And what those show is that way down at the base of the cochlea, the very highest frequencies are sensed by the hair cells and the auditory nerve fibers there. And as you go more and more apically, you arrive first at middle, and then at lower, frequencies. So there's a very nice, orderly mapping of frequency along the receptor epithelium, or along the cochlea, in the sense of hearing. And so, obviously, the hearing organ is set up to distinguish frequencies and identify sounds. OK. But that's not the only code for sound frequency that we have. We'll talk extensively about another code that uses a time-coding sense. And this comes from the way that auditory nerve fibers so-called phase lock to the sound's waveform. Here's a sound stimulus, and here's auditory nerve firing, first with no stimulus-- it's a fairly random pattern. And here it is with a stimulus turned on, and you can see that the spikes line up during a certain phase, or part, of the stimulus waveform. And that's not that impressive until you look at the time scale here. As I said before, sounds that are important in human hearing are as high in frequency as 1 kilohertz. So if this is going back and forth 1,000 times per second, then the scale bar here for one period would be 1 millisecond. And so the auditory nerve is keeping track and firing only at a certain, or preferential, phase of the stimulus waveform, with a precision of a millisecond or better. OK? And this is a much better phase-locking pattern than you get in other senses. For example, in the visual system, when you flash a light on and off at even just 100 flashes per second, everything sort of blurs out, and you don't have any phase locking the way you do in the auditory nerve's firing. So this is a very nice code for sound frequency that is sort of a secondary way to code. This is a very important coding for musical sounds. Musical sounds-- for example, an octave, 1 kilohertz and 2 kilohertz, a doubling of sound frequency-- have very similar patterns in their temporal responses to the two frequencies, and that probably makes an octave a very beautiful musical interval to listen to. And it appears in the music of many different cultures. So one of the demonstrations that I'll play for you is an A at 440 hertz and an octave above that, 880 hertz, and you'll hear how beautiful the two sound together. And then I'll mistune them, which is easy for me to do because I'm an amateur violinist, and I'll be doing this on a violin. It's pretty easy to have a mistuned octave, and it sounds awful and very dissonant when you listen to it. And one of the reasons for that is the difference in phase locking for the two dissonant sounds versus the two consonant sounds. OK. So we will talk about what happens when you have problems with your hearing.
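A standard way to quantify the phase locking just described is a measure called vector strength: treat each spike as a unit vector at its phase within the stimulus cycle and take the length of the mean vector, which is 1.0 for perfect locking and near 0 for random firing. The sketch below uses invented spike times purely for illustration:

```python
# Minimal sketch: vector strength, a standard measure of phase locking.
# Spike times below are invented; real data would come from a recording.
import numpy as np

def vector_strength(spike_times, freq_hz):
    """Mean resultant length of spike phases relative to a tone of freq_hz.
    1.0 = spikes all at one phase (perfect locking); ~0 = random phases."""
    phases = 2.0 * np.pi * freq_hz * np.asarray(spike_times)  # phase of each spike
    return np.abs(np.mean(np.exp(1j * phases)))

# Hypothetical spikes locked near the same phase of a 1 kHz tone (period 1 ms):
locked = [0.00102, 0.00201, 0.00302, 0.00501, 0.00702, 0.00901]
random = np.random.uniform(0, 0.01, size=6)

print(vector_strength(locked, 1000.0))   # close to 1
print(vector_strength(random, 1000.0))   # typically much lower
```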
One of the main problems with hearing, one that causes complete loss of hearing, is when the receptor cells are attacked by various types of insults: diseases, drugs, the aging process, or overstimulation from listening to very high-level sounds. These can all kill the hair cells. And in the mammalian cochlea, once the hair cells are lost, they never grow back. And there's very active interest in trying to get hair cells to regenerate by using stem cells or various growth factors, but so far that can't be achieved. Luckily, in the auditory periphery, even if you lose the hair cells, which is the major cause of deafness, you retain your auditory nerve fibers. So these blue structures here are the auditory nerve fibers coming from the hair cells. And even if the hair cells are killed by these various insults, the auditory nerve fibers, or at least many of them, remain. So you heard Dr. Schiller talk about electrical stimulation. You can put an electrical stimulating electrode in the inner ear and stimulate those remaining auditory nerve fibers to fire impulses that are sent to the brain. And if you hook that system up right, you have a so-called cochlear implant. The cochlear implant has a microphone that detects sound. It has a processor that converts that sound into various pulses of electrical stimulating current, which can be applied to a system of electrodes that is inserted into the cochlea. The cochlea is a beautiful bony capsule. You can snake this electrode in. It doesn't move away. You can glue it in place. You can lead the electrode out to the processor, which activates it when a sound is detected by the microphone. And this cochlear implant is the most successful neural prosthesis that's been developed. It's implanted all the time at the Mass Eye and Ear Infirmary, and it's paid for by insurance. These days, insurance pays for a cochlear implant in your left cochlea if you're deaf, and it will also pay for another cochlear implant in your right cochlea. So the metric for how successful this is, is whether the subject who's using the cochlear implant can understand speech. And so you can have various tests of speech. A speaker can give a sentence, and the person can respond. The speaker can give various simple words, and the cochlear implant user can respond. You can test these individuals in a very straightforward manner. And we will have a demonstration by a cochlear implant user. I have an undergraduate demonstrator who's here at MIT. And she'll come in, and she'll describe and show you her cochlear implant, and you can ask her questions. And I guarantee you, with this particular demonstration that I have in mind, that she won't always understand your questions. I have had really great cochlear implant users who are superstars, who understand every word. But more the norm is that they understand much of what you say, but not everything. And this particular room is a little bit challenging. There's some noise in the background. It's not one-on-one, so the implant user won't know exactly who's speaking at any given moment. And in this case, the person has just one ear implanted, so her ability to localize sounds is compromised. And she won't know who's asking the question until she sort of zeroes in on it a little bit. So you'll see the cochlear implant is not perfect, but it's pretty good by its metric of speech comprehension. OK. Now, I said this particular cochlear implant user only has one implant, in one ear.
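To give a feel for what the implant's processor does with the microphone signal, here is a toy sketch of the basic filter-bank idea: split the sound into frequency bands and extract each band's envelope, which would then modulate the pulses delivered to the electrode at that band's place along the cochlea. The band edges, test signal, and filter settings are invented for illustration; real processing strategies are far more sophisticated.

```python
# Toy sketch of cochlear-implant-style processing: filter bank + envelopes.
# Band edges, test signal, etc. are invented; real devices differ greatly.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

fs = 16000                                   # sample rate (Hz)
t = np.arange(0, 0.5, 1.0 / fs)
mic = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)  # toy "speech"

# A small bank of bandpass filters, low to high frequency. Each band's
# envelope would drive one electrode, ordered along the cochlea's
# frequency map (low frequencies apical, high frequencies basal).
band_edges = [(100, 500), (500, 1500), (1500, 4000)]
envelopes = []
for lo, hi in band_edges:
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, mic)
    envelopes.append(np.abs(hilbert(band)))  # envelope via the analytic signal

for (lo, hi), env in zip(band_edges, envelopes):
    print(f"{lo}-{hi} Hz band: mean envelope {env.mean():.3f}")
```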
To really be good at localizing sound, we need to have two ears, and there are the so-called binaural cues for sound localization. Here's a subject listening to a sound source. The sound source is emitting sound here, and it gets to the subject's left ear first. A short time later, it gets to the subject's right ear. And that's one of the cues for binaural localization of sound: the interaural time difference. The velocity of sound in air is 342 meters per second. And depending on how big your head is, you can calculate the Interaural Time Difference. And of course, that ITD is maximal if the sound source is located off to one side, and it's exactly zero if the sound source is located straight ahead. OK? So that is one cue for localization of sounds. We'll listen to sounds that differ in interaural time difference. We can play these through headphones for you, and we'll have some demonstrations of that. The other binaural cue for sound localization is the interaural level difference. Here is the same sound source, same subject. This ear will experience a slightly higher sound level than the other ear, because sound can't perfectly bend around the head. The head creates a sound shadow. And this is especially important for high frequencies of sound, like above 3 kilohertz. So that is the second binaural cue for sound localization. We'll listen to those cues, and we'll talk a lot about the brainstem processing of those binaural cues for sound localization. If you compare the visual and the auditory pathways-- this is a block diagram of the auditory pathway-- there are a lot of brainstem nuclei. We have the cochlear nucleus, the superior olivary complex, and the inferior colliculus. And these latter two get input from the two ears on the two sides, and they probably do the bulk of the neural processing for sound localization. You don't have to do that so much in the visual system, because the visual system, in a sense, has a mapping of external space already on the retina. But remember, the inner ears map sound frequency. And so the inner ears themselves don't have a good cue for where the sound is localized in space. Instead, you need input from the two sides, and you need a lot of neural processing to help you determine where that sound source is coming from in space. OK. So we'll talk about the neural processing of those binaural cues. We'll talk toward the end of the course about the various auditory cortical fields. These are the cortical fields on the side of the cat's brain. So this is the front of the brain; we are looking at the left side of the brain. This is the rear of the brain, where the occipital lobes are, where V1 is. And on the side, or temporal cortex, you have the auditory fields, including A1, or primary auditory cortex. And toward the end of the course we'll at least touch upon the human auditory cortex-- the primary auditory cortical field is right here on the superior surface of the superior temporal gyrus. And just near it, in what could be called an auditory association area, is an area called Wernicke's area, which is very important in the processing of language. And of course, connected with that is Broca's area, which is another important language center, in the dominant hemisphere at least, of humans. So we'll touch upon that at the very end of the course. And we'll also have a special topic called bat echolocation: how bats use their auditory sense to navigate around the world at night, even without vision.
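For the interaural time difference, a back-of-the-envelope calculation is easy to sketch. Below, the 342 m/s speed of sound is the figure from the lecture; the 9 cm head radius and the spherical-head (Woodworth) formula are added assumptions for illustration. Note that the result comes out zero straight ahead and maximal off to the side, just as described.

```python
# Back-of-the-envelope ITD calculation using the spherical-head
# (Woodworth) approximation. Head radius is an assumed value.
import numpy as np

c = 342.0        # speed of sound in air (m/s), figure from the lecture
r = 0.09         # assumed head radius (m); not a number from the lecture

def itd(azimuth_deg):
    """Interaural time difference (s) for a distant source at the
    given azimuth (0 = straight ahead, 90 = directly to one side)."""
    theta = np.radians(azimuth_deg)
    return (r / c) * (theta + np.sin(theta))   # Woodworth's formula

for az in [0, 30, 60, 90]:
    print(f"azimuth {az:2d} deg: ITD = {itd(az) * 1e6:6.1f} microseconds")
```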
And finally, at the very last class period before the review sessions, we'll all go over for a tour of the research lab where I work at the Massachusetts Eye and Ear Infirmary. It's across the Longfellow Bridge, right at the end there where the arrow is. And we'll have some demonstrations there. I think last year we had a demonstration of single-unit recording from an awake animal that's listening to sound. We had measurements from humans of the so-called otoacoustic emissions. These are sounds that can be detected in the ear canal with a very sensitive microphone. They're used in tests of hearing. And I also think we had a discussion of imaging of the auditory system. And of course, if you've ever listened to an MRI machine, it's sometimes described as being as loud as the inside of a washing machine. And it's very challenging for people who image subjects listening to sounds-- especially very low-level sounds-- when there's all this background noise coming from the imaging machine. So there are some special things that are done to minimize the noise coming from the imager in auditory studies.
MIT_904_Sensory_Systems_Fall_2013
10_The_neural_control_of_visually_guided_eye_movements_1.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right. So today, then, we are going to talk about eye movement control. This is a fact-laden set of topics. We're going to talk about the basics of eye movements, and then we're going to talk about the various neural structures that are involved in eye movement control. The interesting thing about studying eye movement control is that it is obviously very closely related to vision, and it's also very closely related to the ocular motor system, because it involves moving the eyes. Now, the basic fact is that we move our eyes practically endlessly every day. We make about three saccades a second, and we make more than 150,000 saccades every day. And it's not something you ever think about. Yet every time you look at something, you have to decide where you're going to look next. You're not even aware of it. It just happens automatically. It's a remarkable system; it works with incredible rapidity, and it involves recognizing objects out there, recognizing the visual scene, making a selection, moving your eye there-- and you keep doing it, as I said, three times a second. So these are the eye movements that you make like that, jumping from one location to another. And with each of those eye movements, as I've said, you need to decide where you're going to look next. So that's quite an amazing feat. And it's not just something that we do as humans; it's also done regularly by animals. And most of the work I will tell you about has been conducted on monkeys, whose eye movements are remarkably similar to our own. Now, let's go over what I'm going to cover in the next two sessions. Before I do that, I just thought I'd show you an amusing cartoon from The New Yorker, made in 2001, which sort of puts the nature of all this in a nutshell. Here's a cat. He's going to try to jump up there to get the thing. And so he has to make a decision about where that thing is in space, and he has to make the calculations for how to generate the motor activity to jump up there. Now, when it comes to eye movements, you don't have to jump up there, but you have to decide how to jump your eye from one location to the next. So what we are going to try to understand is: what are these calculations that we perform to be able to make accurate eye movements from one location to another? What neural structures are involved in this, and what are the various roles these neural structures play in generating those eye movements? Now, here are the topics. First of all, we're going to look at just the basics of eye movements. Then we are going to look at the so-called eye plant and the brain stem nuclei, which are involved in moving your eye muscles. Then we are going to look at an important structure in eye movement generation called the superior colliculus, which we haven't talked about yet. And then we are going to look at the visual input for saccade generation. We're going to examine what the various types of retinal ganglion cells do to be involved in eye movement generation. In particular, we are going to look at the midget and the parasol cells.
Then, perhaps a little bit today, but mostly next time, we will look at the cortical structures involved in eye movement control. Then we're going to look at the effects of paired electrical and visual stimulation, because that will give us additional insights about the nature of eye movements. And then we're going to look at what happens when you have various deficits in eye movement control as a result of lesions in various parts of the visual and oculomotor systems. And then we're going to look at some pharmacological studies to try to understand the pharmacology of eye movement control-- and I should say, of visually guided eye movement control only. All right. So let's first start then with the basics of eye movements. We can ask the question, why do we move our eyes? Now you already have the answer to the first reason we do so, which is that in the retina of higher mammals and humans we have a highly specialized system in which there's only a very small area with a very high density of photoreceptors. And so to see fine detail, you need to move your eye to the relevant location to be able to analyze it at high acuity. So that involves the objective of acquiring objects for central viewing, because it's central viewing that allows you to have high acuity. Now that is accomplished predominantly by saccadic eye movements, just like the ones I mimicked at the very beginning. You make these very rapid eye movements from one location to another. Another important part of eye movement control is that when either you are in motion, or whatever you're looking at is in motion-- like a bird flying in the sky-- to be able to analyze it, you track that object. You make what are called smooth pursuit eye movements. So that is another important mechanism that enables you to analyze things accurately in the world. Now yet another factor is that when we move about, it's important for our eyes to remain stable with respect to the world out there. And one of the mechanisms involved in that, as we shall see, is the accessory optic system-- and that's what you're going to be writing a paper about-- which is involved in controlling the eyes using both visual stimuli and the vestibular system. So that is what we're going to talk about. Actually, that part we're probably going to talk about in the next to last lecture-- not next time, but the time after. We're going to talk about the accessory optic system a bit. All right. So now then, just to reiterate, we're talking about the vestibulo-ocular reflex and the accessory optic system. So now then, people have classified eye movements into some very basic types. We have so-called conjugate eye movements, when the two eyes move in unison, and then vergence eye movements: if an object comes close to you, or moves away from you, the two eyes converge or diverge. And that has to be done to make sure that the images you see in the world fall upon corresponding regions of the two retinae. Now the conjugate eye movements fall into two basic categories. They're called saccadic eye movements, the ones we have talked about already a bit, and smooth pursuit eye movements, which allow us to track an object so that we can keep that object on the fovea. Now for vergence eye movements-- go back to that picture I'd shown you before in which you have the horopter, or the Vieth-Muller circle, if you remember.
If any object is along the Vieth-Muller circle-- say an object jumps from 1 to 2, from here to here-- then when the eyes track it, it falls on corresponding points in the two retinae. However, if there is an object that falls outside or inside the Vieth-Muller circle, as shown here, then that second object falls on non-corresponding points. And so therefore you have to bring about eye movements that involve vergence-- either divergence or convergence. For the most part, today especially and next time as well, we're going to be talking about saccadic eye movements. And then, as I said, after we have covered those, we're going to look at the movements that involve pursuit and vergence. Let's now take a look at a human subject and see the eye movements the subject makes. I mean, you're perfectly aware of it, but I thought it would be fun for you to look at it. So here's a subject, and he's looking at something, right? And for all these eye movements, obviously there's nothing moving out there, so these are all saccadic eye movements. The head is fixed. And so now, once you've seen this, there are a number of questions that come to mind. The first one, on the not too scientific end, is: was that a male or a female? Secondly, what was the person looking at? And then thirdly, way down the list, you say: but how does the brain do this? So let's see if you can answer these three questions. The third one will take the two lectures, but the first and second ones can be answered fairly quickly. Was it a male or female? Anybody want to commit themselves? AUDIENCE: Male, male, male. PROFESSOR: Male, male, male. Anybody think it was a female? You guys are good. It was a male. Very good. All right. Now the second thing, of course, is what on earth was this person looking at? Now there's no way you could glean that until I tell you. So let me tell you what this person is looking at. What this person is looking at is a picture created by Rene Magritte, which actually resides in the National Gallery of Art in Washington, DC. It's quite an interesting painting, and that's why I'm showing it to you. It relates closely to perception-- the curious aspects of it. And you can see it doesn't quite make sense. But it's because of that, because of the puzzling nature of this picture, that it is in that fabulous museum. It is so unusual and different that it makes you think. So anyway, suppose you started here when the picture came on; then you're going to have to make a decision about where you're going to look next. So what would you look at next? Most commonly, you would look at the eyes of the rider. Then you would look at the eyes of the horse. And so you look at the various outstanding features. Now the way this looks in real time, more or less, is like this. So that's what this person was looking at. And that makes you aware of the fact that whenever you look at pictures-- whenever you look at anything-- you make all these eye movements in a very short time to comprehend the scene and understand it in some detail. When this kind of eye movement series is made, you wonder whether this is something that animals also do. And the answer is yes. Monkeys certainly do. Most mammals do, and even birds do this kind of stuff. So what I want to show you next is, first of all, what the pattern of eye movements is in humans. The person who first did beautiful work on this is named Yarbus. Now this is a famous sculpture.
This sculpture is the so-called Bust of Nefertiti. And what Yarbus did is he looked at the kinds of eye movements a person makes when they look at this figure to comprehend it. And what is interesting is that there are a lot of saccades along the edges, where there's a lot of detail, and not too many saccades to regions that are smooth. So Yarbus actually published a book on this, analyzing the nature of our eye movements from viewing behavior like this. Now if you do the same thing with monkeys, you find that monkeys also move their eyes just like we do. They also have foveas, of course, as we discussed. And so as they move around and look around in the world, they make even more eye movements than we do. They're a little bit quicker about it, and so they make many, many eye movements-- maybe almost four saccades a second. Now to see whether they also actually select objects in a sensible way, you can present a bunch of displays. Here is a bunch of different objects in round locations, and here we have a bunch of them arranged in a rectangular fashion. Then on the right, with the [INAUDIBLE] fixed again, you see the kinds of eye movements the monkey makes. And what you can readily do, even if you hadn't seen the displays themselves, is tell that there must be a whole bunch of objects at these locations, including one in the center, and here, that the objects must be aligned the way they are there. So clearly the monkeys do the same thing we do. They tend to look at particular objects in the scene and make accurate saccades to them to be able to analyze them. So because of that, it is natural to take monkeys, to study their eye movements, and to study the underlying neural mechanisms in these animals, which then makes us more capable of understanding just how the brain moves your eyes about. So now, let us become more concerned with the neural mechanisms and the machinery involved in eye movement generation. And so we are going to refer to this as the eye plant and the brain stem nuclei, which are in the mid and lower portions of the brain, not in the cortex. We'll talk about the cortical factors later on. So let's start, first of all, with the muscles. Each eye has six extraocular muscles, as delineated here. They're called the superior and inferior recti, the medial and lateral recti, and the obliques-- the superior and inferior oblique muscles. Now the remarkable feature of these muscles that renders them different from the muscles in the rest of your body is that the fibers of each muscle run the entire length of the muscle itself. It's not segmented. Most muscles in the body are segmented, and because of that, it is often difficult to understand the nature of their operation. But here, since all the fibers run the entire length, it is relatively easy to comprehend the basic mechanisms that generate the eye movements at this low level. The next thing that's important to know is that the eye is a balanced structure. It doesn't have any weights anywhere. And so when you move the eyes, it's not like when you have to pick up an object. If you were to pick up an object that you thought was heavy and it turned out to be as light as a feather, you would practically hit yourself in the face.
But if you know what the object is, you can correct for the way you're going to lift it, which adds another huge dimension to how you move your body about and how you operate your muscles. Luckily, when it comes to the eyes, that is not a factor. And because of that, it's easier to understand the way the system operates. So now that you have this, the next question one can ask is how these muscles are innervated. Obviously, you have to have some nerves that connect to them and cause the muscles to contract or to let go. The way that works is that the nerves that connect to the muscles of the eye release acetylcholine, which causes the muscle to contract. OK, so that's basic-- that happens everywhere in the body. You already know all that. But what you probably don't know yet is where the neurons that innervate these muscles come from. Well, it turns out that there are three cranial nerves-- how many cranial nerves are there in the brain? AUDIENCE: 12. 12? PROFESSOR: 12. Three of those, believe it or not, are involved in the control of eye movement. And so we can designate them. The lateral rectus is innervated by the abducens nerve; the abducens nerve comes from the abducens nucleus. The superior oblique muscle is innervated by the trochlear nerve, which is the fourth cranial nerve. And the rest of them are innervated by sub-nuclei of the third nerve, which is called the oculomotor nerve. So we have these three: oculomotor is the third, trochlear is the fourth, and abducens is the sixth. So it's these three nuclei-- note again that the oculomotor has sub-nuclei-- that innervate these muscles. So now that we know that, the next question is how the neurons from these three nuclei act to do this. But before I tell you that, I want to make sure that you learn what the 12 cranial nerves are. What you can do here-- and I'm using a so-called clean mnemonic device-- is: on old Olympus' towering tops, a fat-armed girl vends snowy hops. That's easy enough to remember. The first letter of each word designates one of the 12 cranial nerves. So if we then look at that, here they are. The first one is the olfactory, the second is the optic. Then we have the ones in blue here, which innervate the eye muscles-- oculomotor, trochlear, abducens. And then one important one that you're going to hear a lot about is the auditory one-- that's number eight. So those are the 12 cranial nerves. And then-- you don't have to remember this, but it might help to know-- you have a whole bunch of spinal nerves as well. You have 31 of them. And those 31, of course, come in pairs-- one on the left side, one on the right side. You have 8 cervical, 12 thoracic, 5 lumbar, 5 sacral, and 1 coccygeal. Now, as I say, you don't have to remember this, and I'm not even telling you that you have to remember these. But it's good to know. It's something you can commit to memory, so that when you go to a party and talk to somebody, you can say, hey, you know about the 12 cranial nerves? And they go, [YAWNS], see you later. All right. So anyway, we can move on now and see how these neurons that innervate the muscles-- which are often referred to as the final common path-- operate. Actually, that underwent quite a bit of debate for a long, long time. We're not going to go into the debate itself; I'm just going to tell you how it works.
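(Since the muscle-to-nerve assignments above come up repeatedly, here they are in compact form as a small Python lookup table; the pairings follow the lecture, and the variable name is just illustrative.)

```python
# Extraocular muscles and their innervation, as stated in the lecture:
# lateral rectus <- abducens (VI), superior oblique <- trochlear (IV),
# all the rest <- sub-nuclei of the oculomotor nerve (III).
INNERVATION = {
    "lateral rectus":   ("abducens", 6),
    "superior oblique": ("trochlear", 4),
    "medial rectus":    ("oculomotor", 3),
    "superior rectus":  ("oculomotor", 3),
    "inferior rectus":  ("oculomotor", 3),
    "inferior oblique": ("oculomotor", 3),
}

for muscle, (nerve, number) in INNERVATION.items():
    print(f"{muscle:16} <- {nerve} nerve (cranial nerve {number})")
```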
But the fact is that these neurons are involved in generating all the eye movements we talk about-- meaning smooth pursuit, maintained eye position, and saccades. And so here's an example. This shows vertical eye movements-- just vertical eye movements. And what you can see here is that the more the eye looks down, like here, the more sustained the activity that keeps the muscle contracted, to keep the eye down. But you also have to remember that while this is happening here, the opposite muscle on top, the superior rectus, has to let go. So there would be no activity to contract the muscle fibers in the superior rectus. Secondly, the important thing to notice here is that whenever there's a saccade-- saccade, saccade, saccade, saccade, and so on-- there's a high frequency burst. When there's a little saccade, the high frequency burst is short. When there's a big saccade, there's a much longer high frequency burst. And when there's a saccade in the opposite direction, there is a pause. So what do you think happens when there's a pause? In that case, the superior rectus would get a burst and would contract rapidly to produce an upward eye movement. So this muscle causes your downward eye movement, and the superior rectus causes an upward movement, of course. So that is the basic layout. Now this is schematic, but to show you that this is real, I'm going to show you some data of a monkey making actual eye movements, recorded from the oculomotor nucleus-- in this case, again, from a neuron innervating the inferior rectus. And what you can see here are a bunch of saccades. And here the monkey is tracking an apple as it moves down. Let's look first at the bottom. What you can see is that somewhere along the line here, as the monkey is tracking, this particular neuron begins to cut in, and then it gradually increases its rate of activity, as a result of which you get smooth pursuit eye movements. Now if you look at the upper part, once again as I've shown you, the frequency of activity is proportional to the angular deviation of the eye-- in this case, because we're talking about the inferior rectus, in the vertical dimension. And whenever there's a saccade downward, there's a high frequency burst. And whenever there's a saccade upward, there's a pause. So that is the basic nature of this. And what is so lovely about it is that it's like a machine. It's so lovely that if you then collect data from several cells-- and here's an example of it-- and you look at the number of spikes per second against the angular deviation of the eye, it shows you that each of these four neurons acts in a linear fashion. As the eye deviates more and more, the frequency of activity gradually increases. This is not saccades, obviously. This is maintained activity-- let me go back again-- like here, like here, like here, and like here. So you get this beautiful linear function, which again, as I've said, makes it relatively easy to understand quantitatively the process of moving the eyes. So that is the basic mechanism that you see in the oculomotor nuclei. And the important thing to understand then is that the activity of these neurons is involved both in making saccades and in maintaining various eye positions. The reason I emphasize this is because maybe about 30 years ago, there was a big debate.
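(To make the two parts of this relationship concrete before turning to that debate, here is a minimal sketch of a final-common-path neuron; the linear form follows the lecture, but every constant below is invented for illustration, not a measured value.)

```python
# Toy model of a final-common-path motoneuron, per the lecture:
# (1) maintained firing rate grows linearly with angular eye deviation;
# (2) saccade amplitude grows with the duration of the high-frequency burst.
# All constants are illustrative placeholders.

BASELINE_RATE = 50.0   # spikes/s at straight-ahead gaze (assumed)
RATE_GAIN = 4.0        # extra spikes/s per degree of deviation (assumed)
DEG_PER_MS = 0.5       # saccade degrees per ms of burst duration (assumed)

def maintained_rate(eye_deg):
    """Sustained rate for a given angular deviation; rates cannot go negative."""
    return max(0.0, BASELINE_RATE + RATE_GAIN * eye_deg)

def saccade_size(burst_ms):
    """Saccade amplitude produced by a high-frequency burst of a given duration."""
    return DEG_PER_MS * burst_ms

for angle in (0, 10, 20, 30):
    print(f"eye at {angle:2d} deg -> {maintained_rate(angle):5.1f} spikes/s")
for dur in (20, 40, 80):
    print(f"{dur:3d} ms burst -> {saccade_size(dur):4.1f} deg saccade")
```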
Some people, who initially began to record from these oculomotor nuclei, argued that we have two different sets of neurons in the final common path, one of which controls saccades, and one of which controls maintained position, or smooth pursuit. But that's not the case. The case is that these neurons do both. Now why did this confusion arise? Well, it arose because it was found that if you looked at the so-called supranuclear complex-- neurons which sit, if you will, above the oculomotor, trochlear, and abducens nuclei-- there are a whole bunch of subnuclei there. We're not going to go into details about them; you don't have to know that. But at any rate, in those nuclei there are neurons that are indeed separately coding different attributes. I'm going to show this, but before I do, I want to make one more important point. If you take a microelectrode and put it in one of these nuclei of the final common path, and you stimulate there at a high frequency-- in this case 500 Hertz-- then as you increase the duration of the stimulation, you get progressively bigger saccades. This proves, with stimulation, that it's indeed the duration of the high frequency burst that defines the size of the saccade that you elicit. So now we can move on. And in this case, as I say, you don't have to know this, but I want to just mention it briefly. We have the [INAUDIBLE]-- the eye muscle. And then, in addition to the final common path, we have a supranuclear complex, which has various neurons: some which pause-- in other words, which stop firing during saccades-- and some which fire only for saccades. I think those are the only ones I need to point out. At that level, the signals are separate. But once they come together in the oculomotor complex, they impinge on these neurons of the final common path, which carry both of these signals. So that then is the essence of the oculomotor complex that I wanted to cover. And so we can make a quick summary diagram here. We're going to make a diagram that's going to grow over this session and the next one, and that's going to be very complex in the end, unfortunately. But here we have the brain stem oculomotor complex. And the way to put this-- because we're talking about what kinds of coding operations are involved-- is that this carries what we're going to call a rate code. The higher the rate, the greater the angular deviation of the eye. And of course, the longer the high frequency burst, the bigger the saccade. So that is the so-called rate code. So this is the basic layout then. And of course, if there's a lot of activity here, this muscle is going to contract, and the eye is going to deviate downward. At the same time, you're going to get a signal to the upper part that's going to let go. In other words, the signal is going to be terminated so that you have no activity in the upper part, so that that muscle can expand. So that's basically it at this point. And as I say, this diagram will grow, and grow, and grow, until by the end of next time, it's going to be really complicated. But that's how the brain works. It's complicated. So with that then, we are next going to move to the so-called superior colliculus. Now the first thing I want to say about this structure is that this region of the brain-- I'll show you a picture of it in just a minute-- is one that has undergone tremendous changes in the course of evolution.
In more primitive animals, like fish and amphibia, what you find is that this structure is the prime structure for analyzing vision. It does vision and then generates commands of various sorts, including commands to make eye movements, or to generate other kinds of movements in different animals. And this is the case because in these animals there is very little cortex. But with the process of encephalization, more and more of the analytical undertakings the brain is involved in have been relegated to the cortex. And that also meant that more and more of the way we analyze vision became cortical, as we have discussed already. Now to make this clear, let me show you three drawings-- of the brains of a toad, a rabbit, and a monkey. And what you see here in the toad-- here is the superior colliculus-- is that this structure is quite large relative to the rest of the brain. Then when you come to the rabbit, there's some degree of encephalization, so this structure is much smaller relative to the rest of the brain. And then when it comes to the monkey, the superior colliculus looks actually quite tiny. And in monkeys and humans, the superior colliculus is very small-- maybe about five or six millimeters in diameter. And yet, it's an extremely important structure, and we're going to examine just what the structure does. And later on, we're going to examine what happens if you lose that structure. So now here is a sagittal midline section of the monkey brain, so you can see this is real. And here we have the lunate sulcus again. What part of the brain is this here, in the occipital area? AUDIENCE: [INAUDIBLE]. PROFESSOR: OK, good. This is area V1 here. Then if you look down here, you see this little bubble-looking thing? That's the superior colliculus. So that is the structure that we are going to look at in some detail. Now if one enlarges this-- this is in a cat, actually-- and takes a coronal cross section of the superior colliculus, one can see it's a fairly complicated structure. It has several layers; people have distinguished at least seven clear layers. And you can see the upper part here is often referred to as the superficial gray. There are two layers of that. The very top layer gets an input directly from the retina. The one a little further down-- I'll elaborate on that in a minute-- gets input from the visual cortex. And then we have a whole bunch of other layers that we are going to examine in some detail in just a minute. So that's the nature of the structure-- the superior colliculus. And the fact is that in more primitive animals, it is a structure with many more layers. Some of these more primitive animals have as many as 14 layers in this structure, because it is heavily involved in eye movement control as well as in the analysis of vision. Now the interesting fact about the way this is laid out in the cat, in higher animals, and in humans is that most of the cells in the various layers reside within those layers. They don't talk to each other that much. But they get a lot of input from many other structures that control those layers. And I'll go into that in some detail in a minute. Now the next important fact about the superior colliculus, which makes it similar to the lateral geniculate nucleus and the cortex, is that the visual field is laid out in a topographic manner in the superior colliculus. This is a top view of the colliculus. This is the visual field.
And once again, the rule, if you remember, is that the contralateral half of the visual field projects to the superior colliculus. And the opposite, of course, is the case for the other side. And then if you look at the nature of the connections, you find that everything that's close to the fovea-- the foveal representation-- projects to the anterior part of the superior colliculus. So this is, sort of, the foveal region. Then the upper visual field projects to the medial portion of the colliculus. The horizontal meridian projects along the horizontal meridian of the colliculus, back toward the posterior end. And the lower part of the visual field projects to the lateral portion of the colliculus. Now why do I emphasize this? You will see in just a minute why it is important. It's very important for us to understand that there's a lovely topographic layout of the visual field onto the superior colliculus. So now we are going to move on and examine the question of what the nature of the responses of neurons in the superior colliculus is. Now I can tell you right off that in the monkey colliculus and in the cat colliculus, the nature of the responses of these neurons is not particularly interesting. They have small receptive fields, and they tend to respond both to the onset and the termination of a visual stimulus, meaning that they seem to get input from both the on and off systems. And then if you make the stimulus progressively larger, you get an increase in response, but once you make it quite large-- in this case 10 degrees and 20 degrees-- the response greatly declines, which says that, just like in the retina, these neurons have center/surround antagonism. They respond vigorously to small spots and very little to large spots. Now that's the basic layout. But then if you ask, well, are these cells in the colliculus orientation and direction specific? Are they color selective? For the most part, the answer is no-- not entirely, but mostly no-- meaning that these cells are not really that interesting. They are very, very basic. So that is the nature of the responses of these neurons in the superficial layers of the superior colliculus. I've told you that the superficial gray, which consists of two layers-- the one that gets a direct input from the retina, and the other that gets an input from the cortex-- gives you vigorous visual responses. But now, when you get into the deeper layers of the colliculus, you find something very exciting and extremely interesting. And that is, sort of, diagrammed here. What you have is a bunch of eye movements. And then look at this cell. This cell-- it's the same cell throughout-- responds vigorously to a small saccade, and it responds well before the saccade is made. That's what happens here. But then if the monkey makes bigger saccades, some even in the same direction, like these here, you don't get a response. So what does that mean? That's curious. What it means is that some of these cells have something to do with eye movements. The cell fires vigorously, and then an eye movement ensues-- fires vigorously, and an eye movement ensues. Now there's an eye movement here as well, but there's no response. So how do we explain this? Well, what you do in these kinds of experiments is collect extensive data to see when a cell like this fires and when it doesn't fire. And you can generate a response curve, if you will, or a response diagram, to see when it fires and when it doesn't fire.
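(As a quick aid for the topographic rules described at the start of this passage-- foveal targets anterior, peripheral targets posterior, upper field medial, lower field lateral-- here is a purely qualitative sketch. The 5-degree eccentricity cutoff is an arbitrary illustration; a quantitative model would use something like a log-polar mapping.)

```python
# Qualitative visual-field-to-colliculus rules from the lecture.
# Each colliculus represents the contralateral hemifield; within it,
# near-foveal locations map anteriorly, peripheral ones posteriorly,
# upper field medially, lower field laterally.

def collicular_zone(eccentricity_deg, elevation_deg):
    ap = "anterior (foveal)" if eccentricity_deg < 5 else "posterior (peripheral)"
    if elevation_deg > 0:
        ml = "medial (upper field)"
    elif elevation_deg < 0:
        ml = "lateral (lower field)"
    else:
        ml = "on the horizontal-meridian strip"
    return f"{ap}, {ml}"

print(collicular_zone(2, 10))    # near-foveal target in the upper field
print(collicular_zone(20, -15))  # peripheral target in the lower field
```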
So now, to look at that, here's an example. In this case, all the white spots are saccadic eye movements generated from a central point that were not preceded by neural responses. And the red ones show the cases when the neuron you're recording from-- the single cell-- responded vigorously. So that means that whenever the monkey made a saccade in this direction, there was vigorous activity in this neuron, meaning the neuron had an important role in generating that eye movement, whereas all the other saccades did not generate a response in this particular neuron. But to make this clear: there are many different neurons in many different parts of the colliculus, and so when you generate other eye movements, other neurons fire. So how can we analyze this in more detail? Well-- let me just add one more point here. This green circle here is what we're going to call the motor field of this particular neuron. So that's a new concept for you-- motor field. These neurons in the intermediate and deep layers of the colliculus have motor fields that define the size of the saccade. Now let me say one more thing in anticipation: if you then study where the cell's visual receptive field is, it's right there, relative to the central fixation point. So it means that this particular cell fires when a saccade is made into the receptive field of that neuron. So how can we verify this interesting, clever arrangement in the superior colliculus? Well, the way we can do this-- not instead of recording, really-- is that we can do two things. We can record and see what happens in different parts of the colliculus, or we can electrically stimulate there. So here is a schematic. Here is the colliculus-- this is anterior, this is posterior. So this, obviously, if you remember, is close to the fovea. And here is medial, and here is lateral. And what we can do then is see where the receptive fields of these neurons are, and we can plot them out. Here are the receptive fields: number 1 is close to the fovea, number 2 is medial, so up in the visual field, and number 3 is lateral, so down. Now that one has done this, you can switch over and electrically stimulate. And look what happens. If you electrically stimulate at 1, you get a saccade that moves the eye into the receptive field of that neuron-- that set of neurons, I should say. Then if you stimulate at 2, you get saccades that move the eye to 2. And when you stimulate at 3, it moves to location 3. Now what more do we need to verify this hypothesis, if you will, at this stage? What did I tell you about when you electrically stimulate the neurons in the final common path, back in the abducens or in the oculomotor nucleus, for example? What did I tell you? That the duration of the stimulation defines the size of the saccade elicited, right? Now if that were the case here, then this would not mean a damn thing, would it? Because you could take that same location, stimulate longer, and you would get a bigger saccade. So what do you need to do? You need to systematically vary the duration of the high frequency stimulation in the colliculus, to compare it with what happens when you stimulate in the abducens or oculomotor nuclei. OK, so let's look at that. Here is a picture I've shown you before. As we increase the duration of the high frequency burst, you get progressively bigger saccades. So now we do the same thing in the colliculus.
And lo and behold, you get something totally different, because you have a totally different code in the colliculus. What you get is this: below a very short duration-- just 10 milliseconds-- you don't get a saccade. From 25 to about 120 milliseconds, you get saccades, but they're all the same size-- meaning it's where you stimulate in the colliculus that matters, not how long you stimulate. And then, to prove this even further, when you make the stimulation much longer than that, you get a staircase of saccades, each of the same size. Now yet another proof here is that if you stimulate at the anterior tip of the colliculus-- remember, where the receptive fields are very close to the foveola-- then you get a whole array of, I think, something like 14 saccadic eye movements-- bang, bang, bang, bang, bang, bang-- because the stimulation goes on for a long time. Each little saccade is the same size; you just get a staircase of them. So that proves that the coding operation in the colliculus is one that defines the location of a receptive field, and when you stimulate it, it generates a saccade that moves the eye into that receptive field. So that, in a nutshell, tells you what the basic principle is of the functioning of the colliculus when it comes to eye movement. Namely, a saccade is generated by computing the size and direction of the saccadic vector needed to null the retinal error between the present and intended eye positions. That's just putting it in a slightly different way, saying that, OK, I decide I'm going to make a saccade to here-- bang, like that. Just a saccade, like that. But when you did that, you had an intended location, and the eye movement nulls the error. Now the colliculus is really quite accurate. You can make saccades with very small error. Usually the error is roughly, at most, around 10%, meaning that when you make very small saccades, the error is almost negligible. When you make a large saccade, then maybe there is an error. And when that happens, sometimes what you have to do is make what is called a corrective saccade. So suppose you're looking here and an object appears here. And so you say, I intend to go here, and that region of the colliculus fires to generate an eye movement. And say the eye movement goes to here. Then what you have to do is make a second saccade, like that, to correct for that little error. So sometimes, when you make large eye movements, you indeed make a secondary saccade for the correction, because your accuracy is only roughly around 90%. Does everybody so far follow the basic principle? It's very important to understand the basic principle of the operational characteristics of the superior colliculus. So again, to repeat: a saccade is generated by computing the size and direction of the saccadic vector needed to null the retinal error between the present and intended eye positions. And what you have in the colliculus is a cell that is being activated because a target falls into its receptive field. That activation then generates a signal to the lower parts of the colliculus-- we'll talk about how that is done-- which then results in a saccadic eye movement that nulls that error signal. Very clever, very basic, very beautiful-- and almost computer-like at this level. So what we can do now is add the superior colliculus-- the very basics of it so far-- to this diagram. And that says that the colliculus then sends a signal to the brain stem.
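(Here is the vector-code idea reduced to a few lines of Python: program the saccade that nulls the retinal error, land with roughly 90% accuracy as the lecture estimates, and clean up with corrective saccades. The 0.9 gain is the lecture's rough figure; the tolerance and coordinates are invented for illustration.)

```python
import math

SACCADE_GAIN = 0.9  # the lecture's rough ~90% accuracy figure

def make_saccade(eye, target):
    """Null most of the retinal error: jump toward the target, scaled by gain."""
    ex, ey = eye
    tx, ty = target
    return (ex + SACCADE_GAIN * (tx - ex), ey + SACCADE_GAIN * (ty - ey))

def acquire(eye, target, tolerance_deg=0.5):
    """Saccade, then make corrective saccades until the residual error is small."""
    count = 0
    while math.dist(eye, target) > tolerance_deg:
        eye = make_saccade(eye, target)
        count += 1
    return eye, count

# A large 20-degree target step typically needs one corrective saccade here,
# while a small target step is acquired in a single saccade.
eye, n = acquire((0.0, 0.0), (20.0, 5.0))
print(f"landed at ({eye[0]:.2f}, {eye[1]:.2f}) after {n} saccade(s)")
```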
Some of it goes initially to the supranuclear complex, and from there a signal is sent to the abducens, oculomotor, or trochlear nuclei to generate the appropriate eye movements. So that then is the very basics of the operational characteristics of the superior colliculus. Now what we're going to do next is examine the nature of the visual input-- predominantly to the superior colliculus; not entirely, but heavily-- that is used to generate a saccadic eye movement. So if you remember, I told you that in the retina we make a major distinction between two classes of retinal ganglion cells-- the midget and the parasol. The midget cells project to the parvocellular layers of the lateral geniculate nucleus, and the parasol cells project to the magnocellular layers. However, it's also been found-- I mentioned this only briefly-- that in the retina there is yet another class of cells. Some people have called them the W cells in the cat; other people have called them the koniocellular cells. They reside in relatively small numbers, compared to the other two, throughout the retina, and project, in part, to the interlaminar layers of the lateral geniculate nucleus. But it's been found that they also project heavily to the superior colliculus. Now how do we know this? Well, let me tell you-- it's always important to understand the experimental procedures. So let me skip ahead here. Here, just to remind you, are the connections. I've shown you this several times before. It says here that the interlaminar layers project to the upper portions of the visual cortex. But I didn't tell you before that these koniocellular cells also project to the superior colliculus. So here's the superior colliculus. And here, initially, we can ask the question: what projects to the colliculus? So how do we find out? Suppose you are in a laboratory where the big question is what kinds of cells project from the retina to the superior colliculus. Think about this for a minute. What kind of experiment would you do? How would you find out? Well, that's quite an interesting question. And so, to understand that, I'm going to tell you about a couple of techniques. One technique-- an anatomical technique-- would be to inject a substance into the colliculus that would then be retrogradely transported to the retina and would light up the cells there. That would be one approach. Another approach is that you could actually record from the retina itself with microelectrodes, and you could electrically stimulate the colliculus. That way, you would be backdriving the cells. And when you find a cell in the retina that is backdriven from the colliculus, you know that particular cell does project to the colliculus. So that's a very strong technique, because it enables you to identify the cell type and also, if you do intracellular recording, to label it. And you can do the same thing-- because I'm going to talk about that as well-- to see what the inputs to the colliculus from the cortex are. So let's talk about each of those, and that'll give us a better sense of what the projections are from the retina and from the cortex to the colliculus. So here's an example of how you do it. We are recording from the retina here, or, in a different experiment, you can record from the cortex. You can then electrically stimulate here. You can backdrive the cells in the retina, or you can backdrive the cells in the visual cortex.
And then, once we do so and record from a single cell, we can study what it is like. So that is the approach. And if you do that, you come up with some very nice answers. First of all, the retinal cells are W-like, which means that they're like the koniocellular cells that we have seen in the retina. These are small cells that are not particularly rapidly conducting and are not color selective. And then, if we look at the cells up here in the cortex, we find that all those cells are complex cells, and they all reside in layer five. It's already known anatomically that many of the cells in layer five are the ones that project to the superior colliculus. So that then is the basic procedure. So now we can expand on this basic procedure. What we can do here is record from the superior colliculus, and then eliminate the inputs from the cortex by cooling it. That's one approach. So if you do that, you record from a cell here and drive it by visual stimulation. We identify the corresponding cell in V1 by backdriving it. And then, once you cool the visual cortex, you see what happens to that cell in the various layers of the superior colliculus. Well, when you do that, you get a very dramatic and, luckily, clear-cut result. The result is the following. If you look at the superficial layers of the colliculus, and look at what happens before you cool, when you cool, and after you cool, the cells keep firing. So in these upper layers-- in the superficial gray-- those cells keep firing. They're not interested in anything that comes down from the cortex. That's because the input from the retina goes to these upper layers of the superior colliculus, predominantly to the superficial gray. However, once you get down to the intermediate layers, and the lower layers as well, every time you cool, the cells stop firing, as you can see here. This means that these cells in the intermediate and lower layers of the colliculus are driven by the cortical downflow. They're not driven by the direct input from the retina. So now that we have seen this, one can say, OK, that's fine. But is this really something that holds over time, or is it just something peculiar to the particular experiment that was run? So what do you do to see if this is really a permanent effect? Well, what you can do is take a monkey and remove the visual cortex on one side, so the monkey can see fine in half the visual field. And then you can record on either side-- the intact side or the affected side-- in the superior colliculus. Everybody follow that? All right. So if you do that, what you find is this: this is the intact side, and this is the side where there's no V1. And this was taken months after area V1 was removed. And you see these red spots here, and the red spots here? That spot is the location below which you could no longer drive any cells visually. So on the intact side, you can drive cells down to very deep layers of the colliculus; they responded very well to visual stimuli. But on this side, they stopped responding here. None of these deeper layers could be activated by visual stimuli, because those deep layers, indeed, are driven by visual inputs from the downflow from the cortex. So that then is the basic rule of what happens in the superior colliculus. All right.
Now let's see-- the next big question we can ask is this: the downflow from the cortex drives these important intermediate layers, which are involved in eye movement generation; to what degree are those corticotectal cells driven by the parvocellular or the magnocellular cells, meaning by the midget cells or the parasol cells? Well, how do you do that? I've already told you about experiments in which you can put a recording electrode, in this case, in the colliculus, and then inactivate either the parvocellular or the magnocellular layers of the lateral geniculate nucleus while you record in the superior colliculus. So what happens is that these geniculate cells, obviously, as shown here, project to V1, and then layer five cells project down. So what we are going to identify is the nature of these cells in layer five that project down there. First of all, this shows recordings from two cells in the colliculus: this is under normal conditions, and this is when the parvocellular layers are blocked. Here's another cell-- normal condition, and then the magnocellular layers are blocked. So this clearly shows-- and this was done with dozens, and dozens, and dozens of cells-- that blocking the parvocellular layers, meaning blocking the midget system, had no effect on the corticotectal driving of the cells, whereas a magnocellular block eliminated it. So that means that these cells in layer five that project to the superior colliculus are driven exclusively by the parasol system. And indeed, that was also shown-- let me go back to that-- in another set of experiments: those layer five cells in the cortex that you could identify by backdriving would stop firing if you blocked the parasol input, but not if you blocked the midget input. So those were indeed cells driven exclusively by the parasol system. So the parasol system plays a very important role in driving the colliculus, but it does so only through the cortex, not directly. So, to put this in an easy-to-remember diagram: when you come to these higher-level animals, what happened is that due to encephalization, the cortex gained control over the superior colliculus, except for the superficial gray. Everything else in the colliculus is controlled by the cortical downflow. And that, of course, became very important, because when you make a decision as to where you're going to look next-- where you're going to identify an object-- you have to have information from the cortex itself, which then has to be transferred, in this case, to the colliculus to generate the desired eye movement, so that you can analyze in detail what you want to analyze. So that then is the basic layout. And so now what we can do is expand some more on this diagram. Once again, let's go through all of this. We have the brain stem here, which carries a rate code. And then we have the superior colliculus, whose code, which I've already described, you can refer to as a vector code. And then what we can see here-- some of this work I've shown you before-- is that we have these three systems from the retina that we have talked about now-- the midget, the parasol, and the W system, or the koniocellular system-- that go up to the cortex.
And then the downflow from V1, we now know, is driven exclusively, it seems, by the parasol system. However, remember also that when we go to higher cortical areas-- V2, MT, V4, and so on-- some of these areas get a mixed input from the two systems, like V4 and the temporal lobe. MT, on the other hand, is dominated by inputs from the parasol system, but not exclusively. So in the end, the colliculus also gets an input from the midget system, but the heaviest input to it is indeed from the parasol system. So that then is what the diagram looks like at this point. And what we are going to do next time is expand on these connections. We are going to look at some other cortical structures-- I've already referred to them a few times-- namely the frontal eye fields and the medial eye fields, which are regions in the cortex that control eye movements. And that's what we're going to look at in much more detail. So now I'm ready to summarize what I've told you so far. First of all, there are several classes of eye movements one can distinguish. Those are vergence eye movements and conjugate eye movements. And the conjugate eye movements come in two major types, which we call the smooth pursuit type and the saccadic type. Then we talked about eye movements being generated by the six extraocular muscles that we have. It is important to note that these extraocular muscles are such that the fibers in each muscle run the entire length of the muscle-- they're not segmented-- and that they are innervated by the third, fourth, and sixth cranial nerves. Then we saw that the discharge rate in neurons of the final common path is proportional to the angular deviation of the eye, and that saccade size is a function of the duration of the high frequency burst in these neurons. And if you look at up and down eye movements, as I've shown you, the more the inferior rectus contracts, the more the eye deviates downward. And because the arrangement is reciprocal, as the eye deviates downward more and more, the superior rectus is activated less and less. So we have this proportional arrangement. As I've shown you, it's a beautiful linear system that defines very accurately the degree of deviation of the eye as a function of the frequency of the neuronal activity. Then, the superior colliculus codes saccadic vectors whose amplitude and direction are laid out in an orderly fashion and are in register with the visual receptive fields. I told you that in the anterior colliculus, the receptive fields are very close to the fovea; as you go to the back of the colliculus, they are progressively further peripheral; and medial is up, lateral is down. It's a nice, orderly arrangement. And you have an arrangement where the visual receptive fields and the motor fields are in register with each other. Every time there's activity at a particular receptive field location, the result is that a saccade will move the eye into the receptive field, thereby nulling the retinal error signal. And then, the retinal input to the superior colliculus comes predominantly from W-like cells, but the cortical downflow from V1, which comes from layer five, is driven by the parasol system. Still, we should not neglect the fact that there are other pathways through which the midget system can also make a more limited contribution to generating eye movements. And in the diagram I showed you, I demonstrated that the superior colliculus is under cortical control in higher primates.
So that then is the essence of what I wanted to cover today. And next time, we are going to talk about-- let me see, do I have a-- we're going to talk about the cortical areas involved in saccadic eye movement control, which will highlight the fact that this incessant activity of moving our eyes is incredibly elaborate and complicated. There are millions of neurons involved in doing it. And yet, it's such an incredible system that it's something we never even think about. Luckily, it's the kind of system-- partly because there's no weight involved in moving the eye, and because the fibers run the entire length of each of the extraocular muscles-- that one has been fortunate enough to understand reasonably well, in terms of how it works in the brain. So that, in essence, is what I wanted to cover today. If any of you have any questions-- I know this is kind of complicated, but I hope that you understand it reasonably well. And if it's not clear to you, I will be very happy to answer any questions that you might have. Well, I was reasonably clear then. So next time-- here's my to-do-- the cortex, heavily. I will also show you some interesting movies about that. And I think you're going to find that it's a really fascinating story of how the cortex has evolved to generate more and more areas that are involved in an incredible number of functions in the generation of eye movements, as well, of course, as in processing visual information. OK. Well, thank you very much.
MIT_904_Sensory_Systems_Fall_2013
22_Auditory_cortex_1_Physiology_and_sound_localization.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. I guess we'll get started. So last time we were talking about the auditory midbrain, where the big structure is the inferior colliculus. And we talked about its inputs coming from the superior olivary nuclei, like the lateral superior olive, which has neurons sensitive to interaural level differences, and another input being the MSO, where the neurons are sensitive to binaural stimuli with interaural time differences. And those inputs converge on the inferior colliculus. And in today's lecture, we have a little slide of the inferior colliculus. And the big part of the inferior colliculus, diagrammed here, is this central nucleus, the ICC. And that structure has been well studied, in part because it's the big part. It's easy to record from, even though it's deep in the brain. It has a strong tonotopic organization. We talked last time about neurons there having time-intensity trading, because they get, in some cases, input from both LSO and MSO. We talked about coding for precedence-like effects in the inferior colliculus, and we talked a little bit about sound localization in the mammal-- at least for the parts of the inferior colliculus that have been studied, there being no really good or obvious map of sound location to place in the colliculus. And we also talked about some of the other edge regions of the colliculus, like the external nucleus of the colliculus, which has been explored very nicely in the barn owl. And we talked about its projections up to the optic tectum, or the tectum, in the barn owl, which is the analog of the superior colliculus in the mammal. And there's this beautiful mapping of auditory space in the barn owl at that position in the brain. Now, such mappings are not found in general in the auditory system of mammals, with the one exception being in the deep layers of the superior colliculus. There, there are neurons that are responsive to auditory as well as visual stimuli. Of course, we think of the superior colliculus as being a visual nucleus. And in those deeper layers, there is a mapping of auditory space that's in line with the mapping for visual space there. So it's as if a mapping of space can be created in the mammal; it just doesn't appear, at least, in the main nuclei of the auditory pathway. The superior colliculus is really not an auditory nucleus. OK. So any questions from last time? So today, we're going to be talking mainly about the auditory cortex and how it has a number of fields. Usually we talk about fields rather than nuclei, and these fields are tonotopically organized-- at least, several of them are. Some of them aren't, but we'll be emphasizing the tonotopic ones. There are some very interesting experiments where investigators have explored how changes in hearing can affect the tonotopic mappings in these fields, showing that they can be plastic. They can be molded or shaped depending on experience. So we'll talk about that plasticity. Because there are many fields, the obvious question is, well, what does one field do in your sense of hearing, and what does another field do? And we're generally not able to answer those questions.
I like to think that there are Nobel Prizes waiting to be earned in this area as that function is worked out for each of the fields. But it's clear that there's a big role of one field, which we call A1, in sound localization. And we'll review that line of investigation-- that if A1 is lost, an experimental animal's ability to localize sounds is completely gone. OK? So we'll talk about the evidence, how we test animals for sound localization. Then we'll talk about where A1 is in the human auditory cortex. And we'll talk finally about some very interesting imaging studies that have shown that there is a center that lights up in imaging studies when you present sounds with salient pitch-- that is, very strong sensations of pitch-- and it's an area near A1, but a little bit beyond it. So we'll talk about those imaging studies. And that relates to the paper that we have for assigned reading for today's lecture. So that's the roadmap for today. So the first slide is a very complicated slide that, as I said, has the inferior colliculus on it. And in the second row, it has the auditory thalamus on it. So the auditory thalamus is the medial geniculate. And I'm not going to say very much about the medial geniculate. Probably Peter Schiller talked a lot more about the lateral geniculate in the visual part of the course. I want to just say that there is a large part of the medial geniculate called the ventral division, indicated here. There are a couple little symbols that say v; that stands for ventral division. And that large part of the medial geniculate is tonotopically organized. It receives a lot of input from the central nucleus of the colliculus, which is tonotopic, and it is itself tonotopic. And it, in turn, projects to the auditory cortical fields that are listed up there, which we'll go over in detail in just a minute. And all of those listed cortical fields, with the exception of the one that says t, are tonotopically organized. So to some people then, there is a part of this auditory pathway at these higher levels that's called the tonotopic system. And that includes the bulk of the nuclei. There are some other systems, and at least in this author's scheme, they're labeled as diffuse and multisensory systems. We're not going to talk much about them. But they start out in other parts of the colliculus, like the dorsal part of the colliculus and the external nucleus of the colliculus. And they pass through other divisions of the thalamus. And they either go to other auditory cortical fields, like A2 here, or they project diffusely to a whole bunch of places in auditory cortex. So we're not going to emphasize them very much at all. Just stick with the tonotopic system for now. So in the auditory cortex, there's been a lot of research done on the cortex of the cat. The cat has been a standard model for auditory cortex work since the beginning, since the '60s. And so the fields are very well known in the cat. And they're indicated in the colored shading here. And this is showing you the cat's brain, looking at its left side. So you see that the colored areas, the auditory areas, are in the part of the brain called the temporal cortex, the side of the cortex. So way up here would be the frontal part of the cortex, the frontal lobes. Way back here, you're familiar with the occipital cortex, which would be the visual areas. And these areas on the side, the so-called temporal cortex-- those are the auditory areas. And the cat is a nice model for cortex.
Because much of the big auditory cortical field, A1 here, is on the surface, on a big gyrus that's accessible: you can get to it, you can record from it, you can make lesions in it. A little bit of A1 goes down into a sulcus, and those sulci are labeled by the lines. This one is labeled pes-- the posterior ectosylvian sulcus. And in this drawing the cortical map has been pulled apart so you can see the parts of the cortex that are down in the sulci-- those are the dark pink here. In the front, rostral part of the auditory cortex, there's the anterior ectosylvian sulcus, also shown in dark pink. So most of A1 is on the surface, but a little bit of it dives down into a sulcus. Now, when you record from the auditory cortex with microelectrodes, you take off the skull, you take off the dura, and you see the cortex, and you can take your microelectrode and go right in. Of course, the cortex has layers, so you'd start in layer 1 and go down. How many layers are there in cortex? There are 6, right? OK. And you find that if you go down through all those layers and make recordings in each one, all of the neurons in one cortical penetration with your electrode have the same CF. Of course, we measure the characteristic frequency from the tuning curve, just as we do in other places. And when you do that, you find they all have the same CF-- that's like going down through a cortical column. So columnar organization is very important in cortex. If we draw it from the side, this would be the surface, and you'd have the 6 layers, usually labeled by Roman numerals. Your electrode would come down here, sampling from the various layers, single neurons at a time-- you might record from layer 1 first, then layer 2, and so on. As long as you stay within a column of cortex, the neurons have the same characteristic frequency. Looking at it from the surface, then, one column would be like a dot: you'd be looking down at the very top of the column, from the capital down to the base, if it were an architectural column. When you do that, you can make a CF mapping for these tonotopic fields. And you find that way over here, rostral in A1, you get very high CFs. High CFs for a cat would be something like 40, maybe even 50 kilohertz-- an octave beyond the upper limit of human hearing. If you move your electrode more caudally, the CFs get lower and lower until you reach the posterior edge of A1. This little word "low" marks where the lowest CFs in A1 are found, and the low CFs would be something like 0.1 kilohertz-- 100 hertz. If you instead move perpendicular to that tonotopic axis, making recordings one here, one there, one there-- moving from ventral to dorsal-- you find they all have the same CF. So that's an iso-CF lamina, if you will. This line right here is an iso-CF lamina. This would be the highest CFs, and as you go caudally the CFs get lower and lower. Then, as you keep moving your electrode more and more caudally, the progression changes: the CFs start to become higher and higher. That's the signal that you've entered another auditory cortical field, right behind A1.
And that field gets its name from being behind A1-- "behind" in anatomical terminology is posterior. So this field is P, sometimes called PAF, the posterior auditory field. If you keep going more and more caudally-- in this case the progression takes a turn, so you go caudally and ventrally-- the CFs then start to get higher, until you approach the boundary of P with the next auditory cortical field, VP, the ventral posterior auditory field. There the CFs go from high to low again. The same thing happens where A1 meets the anterior auditory field, indicated by A here: those two fields share a high-frequency, high-CF boundary, and then, as you go more rostral, the CFs get lower and lower. This organization at the edges of the fields is called mirror-image tonotopy in the auditory system. And where else have you seen this? You should have gone over this in the vision part of the course: you have a mapping of the retina onto V1 in the occipital cortex, a retinotopic mapping, with the nasal part of the retina over here and the temporal part of the retina over here. Where V1 abuts V2, you also have a retinotopic mapping, but it's a mirror-image reversal of V1. You find such mirror-image flips in the somatosensory cortex, too. There, you have a mapping of the body surface: if you touch here on the body surface, a certain part of the somatosensory cortex responds; if you touch up here, a different part responds. Where S1 meets S2, there's also a mapping, but it's a mirror image. OK. So this is not a surprise in terms of general cortical organization. And it parallels the periphery: in the visual periphery you have the retinotopic mapping, in the auditory periphery you have the CF mapping, and here you have this nice CF mapping along the cortex. So in the cat, you have these four tonotopically organized fields, and you have several other fields-- A2, DP, V, and T-- where the tonotopy is either nonexistent or much less obvious. There are some challenges to exploring responses in these other areas. For example, tuning curves in A2 can sometimes look like this, and it's very difficult to assign a CF to such a broad, bowl-shaped tuning curve. You could do it if you really were pressed, but there's not much difference between that frequency and that frequency-- this could be an octave or more of difference. So it's hard to assign a CF to some of the neurons in these other fields. That's not true in A1: in the tonotopically organized fields in general, there's very precise, sharp frequency tuning. There's a little table here. In the tonotopically organized fields the tuning curves are usually sharp, there's tonotopic organization, the latency is short, and the response is brisk or robust, compared to responses that might be called poor or insecure. That just means this: remember, people typically measure histograms in response to hundreds of sound bursts, expecting a response each time you turn on a sound. But in these other areas, there might be a response to the first tone burst in a train, and then the neuron might shut down and not respond anymore. So these diffuse areas are hard to investigate. And most of the cortex recordings I'm talking about have been done in anesthetized animals.
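Setting that caveat aside for a moment: the tonotopic geometry just described-- a log-spaced CF gradient running high-to-low across A1, then reversing mirror-image fashion at the PAF border-- can be sketched in a few lines of Python. The field lengths and exact CF limits below are illustrative assumptions for the sketch, not measured values; only the qualitative picture comes from the lecture.

import numpy as np

# Toy one-dimensional tonotopic map along the rostro-caudal axis of cat
# auditory cortex. Field extents (mm) are assumed; the high-to-low CF
# gradient in A1 and the mirror-image reversal at the A1/PAF border
# follow the description above.
A1_LENGTH_MM = 4.0      # assumed extent of A1 along the tonotopic axis
PAF_LENGTH_MM = 3.0     # assumed extent of the posterior field
CF_HIGH_KHZ = 40.0      # near the upper limit of cat hearing
CF_LOW_KHZ = 0.1        # lowest CFs at the caudal edge of A1

def cf_at(distance_mm):
    """CF (kHz) at a given distance caudal from the rostral edge of A1."""
    log_hi, log_lo = np.log10(CF_HIGH_KHZ), np.log10(CF_LOW_KHZ)
    if distance_mm <= A1_LENGTH_MM:                    # inside A1: CFs fall
        frac = distance_mm / A1_LENGTH_MM
        return 10 ** (log_hi + frac * (log_lo - log_hi))
    # inside PAF: the progression reverses and CFs climb back up
    frac = (distance_mm - A1_LENGTH_MM) / PAF_LENGTH_MM
    return 10 ** (log_lo + frac * (log_hi - log_lo))

for d in np.linspace(0, A1_LENGTH_MM + PAF_LENGTH_MM, 8):
    print(f"{d:4.1f} mm caudal -> CF ~ {cf_at(d):6.2f} kHz")

Running this prints CFs falling from 40 kHz to 0.1 kHz across the assumed A1, then rising again-- the mirror-image flip an electrode would encounter crossing into PAF.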
Returning to the anesthesia caveat: there's always a question of how much anesthesia has changed the response patterns. Of course, anesthesia has a big effect on these higher levels of the nervous system, so it's not always clear how much of these properties, or of the changes in these properties, is due to anesthesia. So, back to the somatosensory cortex. Here's a mapping of the somatosensory cortex-- the surface of the body mapped onto the surface of the cortex, which I was referring to earlier. If you stimulate the surface of the body over here, the tips of the toes, you get a response here in the somatosensory cortex. If you stimulate the fingers, you get a response here; if you stimulate the facial region, you get a response over here. This distorted mapping, or caricature, of the body surface is sometimes called a homunculus. It shows you that a lot of cortex is devoted to certain important parts of your body, like the face, while less important parts, like the trunk, receive a much smaller representation in the cortex. In the auditory cortex, if you draw the mapping-- here's a mapping of A1, in this case from the guinea pig cortex-- the mapping is plotted with CF on the y-axis and distance along the cortex on the x-axis. These are the data: each dot indicates a recording site from a single column in the cortex, and if you could read these numbers, they would indicate the CFs. These lines are the iso-CF contours, and the CF axis is plotted along this distance here. There's a very nice, almost linear relationship. And it shows you something quite different from the somatosensory mapping: there aren't any really important frequencies. They all have about the same representation in the cortex. It's a pretty boring straight line, if you will-- not as interesting as the homunculus in the somatosensory cortex. So this is true in the general mammal. Next time, when we talk about the auditory cortex in the echolocating bat, we'll have quite a different finding: there are some very important frequencies in certain types of bats that relate to the echolocating signal they emit, and to the echo that comes back to them, which lets them find targets even in the dark. But most mammals do not echolocate, of course, and so their mapping of the auditory cortex is pretty linear and boring, if you will. Now, what motivates the next experiment I'll show is plasticity. For a long time, these mappings were thought to be laid down at birth and not changeable-- just immutable. But some very interesting experiments by Dexter Irvine and Don Robertson in the 1980s showed that was not true. And they were not the pioneers in showing that cortex can change as a result of experience; some people working in the somatosensory cortex were the first. I didn't have a reading for today on that, so right before I came over, I pulled up the Wikipedia entry on the Silver Spring monkeys. Has anybody heard of the Silver Spring monkeys? The Silver Spring monkeys were in the news in the 1980s. They got their name from the Institute for Behavioral Research in Silver Spring, Maryland.
From 1981 to 1991, they became what one writer called the most famous lab animals in history, as a result of a battle between animal researchers, animal advocates, politicians, and courts. There was a researcher named Edward Taub who was experimenting on the somatosensory cortex. He was taking monkeys and denervating the sensory input from certain parts of their limbs-- for example, he would cut the nerves carrying information from the middle finger-- and he was studying whether the somatosensory cortex remapped. He was finding small effects. But in May 1981, Alex Pacheco, from the animal rights group PETA, began working undercover in his lab and alerted the police to what PETA viewed as unacceptable living conditions for the monkeys. There was a long battle: initially, the researcher was convicted of animal cruelty, and those charges were subsequently overturned. But the monkeys were held in limbo for, in some cases, many years, because his research was put on hold. During the subsequent experiments on the monkeys, after the court battles were all done, it was discovered that significant cortical remapping had occurred. This is evidence of the brain's plasticity, and it helped to overturn the widely held view that the adult brain cannot reorganize itself in response to its environment. Now, the analogous experiments in the auditory system have been done in small animals. Maybe as a result of the decline in the use of primates in research, there is hardly any auditory work done on primates these days; this reorganization work is done in guinea pigs. The experiments are done like this: a peripheral lesion is made in the cochlea. In the guinea pig, it's very easy to make a little hole in the middle ear and look down on the cochlea, which is a bony structure. The most accessible part is the basal turn: you can go right through the round window, see the basilar membrane and the hair cells, and make a tiny pinpoint opening in the organ of Corti with a fine metal pick, creating a substantial hearing loss at one little place in the cochlea. Here is a graph of the compound action potential threshold. This is a response from the auditory nerve: an action potential, obviously, is an impulse from single auditory nerve fibers, and "compound" means it's a recording from many, if not all, of the auditory nerve fibers in response to tone bursts at different frequencies. If you make a small lesion at the basal turn-- remember, the frequency organization of the cochlea is such that the basal turn processes high frequencies-- then instead of the normal curve, in the lesioned animal you have a big increase in threshold, maybe 60, 70, or 80 dB. That lesion extends from about 10 kilohertz to about 20 kilohertz, and in other parts of the cochlea the hearing is normal. So this is a peripheral hearing loss. And now we're going to look in the cortex and see whether the tonotopy of the auditory cortex is the same or different. And obviously, given my big buildup, plasticity of tonotopy is found. This is the normal mapping that we first saw; this is the mapping in the lesioned animal. Again, each of these dots is a recording site. These are very high CFs; you march along here, and the CF gets a little lower.
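Before following the cortical map further, the peripheral loss itself is simple to encode. Here is a minimal sketch in Python of the lesioned compound action potential audiogram just described; the 20 dB baseline threshold is an invented placeholder, while the 10-20 kHz band and the roughly 70 dB elevation come from the lecture.

# Sketch of the CAP audiogram after the basal-turn lesion: thresholds
# are elevated ~60-80 dB between 10 and 20 kHz and normal elsewhere.
def cap_threshold_db(freq_khz, lesioned=True):
    baseline = 20.0                     # assumed normal threshold, dB
    if lesioned and 10.0 <= freq_khz <= 20.0:
        return baseline + 70.0          # ~60-80 dB loss in the lesioned band
    return baseline

for f in (2, 5, 10, 15, 20, 30):
    print(f"{f:2d} kHz: threshold ~ {cap_threshold_db(f):3.0f} dB")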
Back in the lesioned animal's cortical map: there's a big region-- a long distance in the cortex-- where all you get is CFs of 20 kilohertz. Notice that 20 kilohertz is right at the edge of the lesion. Within the lesion, between 20 and 10 kilohertz, you don't get any response. Well, there's a huge hearing loss there; it's no surprise that there's no response to those frequencies, since the auditory periphery is sending few or no messages about them. Then you jump a little bit in distance, and the CF jumps from 20 down to 10 kilohertz, and there's a long region of cortex where the CFs are all 10 kilohertz. Notice that 10 kilohertz is a very important frequency in the audiogram of this animal: it's right at the low-frequency edge of the hearing loss. After that extensive region, you pick up the normal tonotopy of auditory cortex again. So this lesioned animal's cortical mapping is clearly a massive reorganization compared to the normal one. A couple of comments about this. If you do the mapping right after the lesion, this reorganization is not found-- it takes time. In the auditory system, it takes more than three weeks to see the reorganization. What's the mechanism? Well, we have input coming up to here from the thalamus. Did the inputs from the 20 kilohertz place of the thalamus come up here and grow into a large part of the cortex where they weren't present before? And did they do that growth because that part of the cortex had gone silent as a result of the hearing loss? So one possible mechanism is growth-- sideways sprouting of axons, if you will. Another possible mechanism is this: the inputs from the thalamus, even in the normal case, don't just go to one place; they have all sorts of side branches. In the normal case, those side branches are inhibited, or they're not strong to begin with, because there's a lot of other input coming in saying, "you guys be quiet, I'm the main input." Maybe after the hearing loss, that other input is silenced, and these previously weak inputs become stronger. So a second possible mechanism is strengthening of preexisting inputs that were weak before the perturbation. And I wrote "mechanism?" with a question mark because we don't know what the mechanism is. It may be both; they're not mutually exclusive. Now, we've been thinking about cortex-- this is a lecture on the cortex-- but I should bring up: where is the locus of this plasticity? In the auditory system, we have a rich array of nuclei: the cochlear nucleus, the superior olivary complex, the inferior colliculus, and we've just learned a little about the auditory thalamus. Could one of those lower-level nuclei have reorganized and then just passively passed its reorganized output up to the cortex? The answer is yes, that has actually been looked at. There's no reorganization in the cochlear nucleus: the part of the cochlear nucleus that processes 20 kilohertz appears normal, and the region between 20 and 10 kilohertz is normal in organization but completely silent after this type of hearing loss. You have a small amount of reorganization in the inferior colliculus, but it's not as big as what we see in the cortex. And in the medial geniculate body, there's a larger reorganization, but probably not quite as much as in the auditory cortex.
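Stepping back to the cortical map itself, the reorganized tonotopy just described can be summarized in a toy model: columns whose normal CF falls inside the lesioned band adopt the CF of the nearest intact edge, producing the long 20 kHz and 10 kHz plateaus. This is only a caricature of the published result, and the sample CFs below are invented for illustration.

import numpy as np

# Toy model of cortical map reorganization after a 10-20 kHz cochlear
# lesion: CFs outside the band are unchanged; CFs inside the band snap
# to the nearer edge frequency, as in the remapped guinea pig cortex.
LESION_LO_KHZ, LESION_HI_KHZ = 10.0, 20.0

def remapped_cf(normal_cf_khz):
    """CF of a cortical column weeks after the peripheral lesion."""
    if normal_cf_khz < LESION_LO_KHZ or normal_cf_khz > LESION_HI_KHZ:
        return normal_cf_khz                       # outside the lesion
    mid = np.sqrt(LESION_LO_KHZ * LESION_HI_KHZ)   # geometric midpoint
    return LESION_LO_KHZ if normal_cf_khz < mid else LESION_HI_KHZ

normal_map = np.logspace(np.log10(2), np.log10(40), 12)  # CFs along cortex
for cf in normal_map:
    print(f"normal CF {cf:5.1f} kHz -> remapped CF {remapped_cf(cf):5.1f} kHz")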
So there may be a little bit of reorganization in the IC, a little more in the MGB, and then further reorganization in auditory cortex. That has been looked at, and the answer is that there's reorganization at all of these higher levels, but perhaps more as you go higher in the pathway. Now, who cares? We usually don't have hearing losses, right? Well, I should remind you that we had a big lecture on hearing loss; let me show you one of the slides from it. I've given it a new title now, because I was thinking about getting old this week. The type of hearing loss that you have when you get old is called presbycusis-- the age-related loss of hearing, especially at high frequencies. Almost all of us are going to go through presbycusis. This is a normal audiogram, or threshold-of-hearing curve, when you're young, and this is one when you get older. Invariably, we lose our high-frequency hearing. The causes are not known, but it clearly takes place in the periphery, which makes it very similar to the lesion study we just went over: it's a peripheral hearing loss. So what happens to your central pathway? It probably results in cortical reorganization in the human. And again, I can't spell-- there's a missing "t" here. And we have plenty of time: this hearing loss doesn't happen in days, it happens over the course of our lifetime, so there's plenty of time to reorganize the cortex. Now, let me go back a little and ask an interesting question, which has some negative answers and some positive answers. People have said: wow, this animal has a lot of cortex-- half a millimeter-- devoted to CFs of 20 kilohertz, and a lot of cortex devoted to CFs of 10 kilohertz. Does this animal do something a lot better at those frequencies than the normal animal, which has just a little bitty part of cortex devoted to 20 kilohertz, and a little bit devoted to 10 kilohertz? It's not clear yet what the answer is. And people have speculated about the normal case: if you train a normal person or a normal animal to do a task at 10 kilohertz, so they're listening to 10 kilohertz over and over again, there's clearly a training effect-- for many tasks, you get better with training. Does that mean that in the normal case we enlarge the 10 kilohertz part of our cortex? It's not known. Does this big area of 10 kilohertz and 20 kilohertz in the lesioned animal's cortex enable that animal to do something better? That's not clear either. There's evidence for both answers to those questions. Let's move on to some other properties that have been observed in auditory cortex. We've been stuck in this course on frequency organization-- it's clearly a very strong component of central auditory nuclei. Here are some tuning curves from auditory cortex neurons, with the x-axis being frequency and the y-axis being sound pressure level, the response being inside the tuning curve. In this case, they've embellished the tuning curve a little by plotting the biggest, best response in solid black shading. These are the tuning curves we've been talking about all along in the course-- the kind you might see in the auditory nerve, the cochlear nucleus, and lower levels. In the auditory cortex, you see some of those kinds of tuning curves. But from different neurons, you also see other types of tuning curves.
These are seen to a greater extent in the auditory cortex, although you do see some, to a certain extent, in the colliculus and in the thalamus; you see them to a numerically greater extent in the cortex. Here is a tuning curve that certainly has a best frequency. But at that best frequency, if you raise the level higher and higher, the neuron actually stops responding at a pretty moderate, and certainly at a high, level. If you study its response inside the black shading, you can say: that neuron has a best, or characteristic, frequency, and it also has a characteristic level. It likes it a lot when the tone frequency is 10 kilohertz and the tone level is 40 dB-- that's its maximal response, the one it prefers, if you will. Here's a different one, with a best level somewhere in the middle; here's a best level in a very narrow response area. So many of the neurons in the auditory cortex have not only characteristic frequencies but also best levels. If, at their CF, you raise the sound level, you get this type of pattern. These are rate-level functions: this axis is firing rate, and this is tone level, or tone intensity. These are from a number of neurons, but just concentrate on this one. The firing rate goes up with level, reaches a maximum, and then declines; at the highest levels, the neuron doesn't respond anymore. We don't know exactly what this means, but somehow these neurons could tell the animal or the person that the sound level is x dB: they can signal that the sound frequency is a certain number of kilohertz and, when they're responding maximally, that the sound level is a certain SPL. That's clearly very different from what we've seen at lower levels. Now, I brought that up because people have looked at it in terms of coding for preferred regions of space where the neuron's response is maximal. These are some data from Clarey et al. on azimuth-level response areas for cortical neurons. What does that mean? They're recording from a single neuron in the cortex, with the animal-- in this case, a cat-- in an anechoic room, and they move the sound source around in azimuth. We've talked about this kind of experiment before. They study the neuron's response as a function of azimuth, and in this particular study, they also varied the sound level-- that's what the y-axis is here, the sound level axis. As we just said, many neurons in cortex have a preferred sound level: they start to increase their firing-- and that's what's meant by this shading here; the darker the shading, the higher the firing rate-- and then at high sound levels, the firing trails off, or goes down to zero. So this is a level function for these different neurons. These are recordings from the auditory cortex within a single column: shown here is the electrode penetration going from unit 1 down to unit 10, and those are indicated here, unit 1 down to unit 10. The cortical layers are indicated by the Roman numerals 1 through 6. So they're recording these 10 different neurons from a given cortical column. And what's impressive about these studies is that each of these neurons prefers the same region of space-- a certain azimuth, 45 to 90 degrees. That's as if you were recording from the left auditory cortex: 45 degrees would be over here, and 90 degrees would be straight over here. So these neurons are going to respond when the speaker is in a position over here.
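As a quick aside, the nonmonotonic rate-level behavior just described is easy to sketch. Here is a minimal illustration in Python; the Gaussian shape and all of the numbers (best level, peak rate, width) are assumptions chosen to make the picture, not fitted values.

import numpy as np

# Sketch of a nonmonotonic rate-level function: firing rate grows with
# tone level, peaks at a "best level," then declines toward zero at
# high levels, as described for many cortical neurons above.
BEST_LEVEL_DB = 40.0    # the neuron's assumed preferred level
MAX_RATE_SP_S = 80.0    # assumed peak driven rate, spikes/s
WIDTH_DB = 15.0         # how sharply the response falls off

def firing_rate(level_db):
    return MAX_RATE_SP_S * np.exp(-((level_db - BEST_LEVEL_DB) / WIDTH_DB) ** 2)

for level in range(0, 101, 10):
    bar = "#" * int(firing_rate(level) / 2)   # crude text plot
    print(f"{level:3d} dB | {bar}")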
And they're going to respond when there's a moderate sound level-- not the lowest level, and not the highest. So they prefer a certain sound level. And within a given column, almost all of the neurons have similar azimuth-level functions. Remember, we said before that they all have about the same CF. So a second thing that's common to units in a given column of auditory cortex is their azimuth-level response areas, and that's shown very nicely here. These types of data suggest that maybe these neurons play some role in telling us where a sound is coming from-- as if, without this column, maybe we really wouldn't know that sound sources were located at 45 to 90 degrees over there in the contralateral hemifield. So there's been a lot of work on auditory cortex and sound localization, and I want to get into it here. How do experimenters test behaviorally for how well an animal can localize sound? This is the formal way to do it. Here's an experimental animal going to a speaker that emitted a sound. Before, it had been sitting in this central position in the testing cage, waiting, maybe cued by a light saying the trial is about to start. The animal then listens. This speaker up here, near b, didn't emit the sound, but the speaker down here did. The animal is trained to go to the speaker it heard emit the sound. If it does that correctly, there's a little food reward area below the speaker, and the animal gets a food reward-- the animals are food-deprived, so they're motivated to do the task. In this case, the speakers can be moved, so you have movable speakers; or you can have an array of speakers, as indicated here, and the animal has to choose which of the several speakers emitted the sound and go to the correct one to get the food reward. This has been done with a variety of experimental animals-- in this case, it looks like a cat, but rats and monkeys have also been tested. Now, there are a couple of things you have to worry about here. If your sound is on a long time, you have to worry that the animal is cheating. One way of cheating would be for the animal to sit still, listen to the sound, and then move a little bit-- maybe just by leaning over-- and ask: OK, if I move over here, does the sound get louder? Oh, that means the sound is coming from this side. So generally, these tests use pretty short stimuli-- 50 or 100 milliseconds-- and during that period, the animal doesn't have a chance to move and sample the sound field. So what are the data in normal-hearing animals? 75% correct at distinguishing which speaker emitted the sound at 5 degrees of separation-- in that case, when the movable speaker is just 5 degrees away-- with, in this case, 500-millisecond-long spectrally complex stimuli. So animals are not as good at this sound localization task as humans: a few lectures back, we talked about the minimum audible angle in humans being a couple of degrees, maybe 1 degree. Here, the animal manages 5 degrees, so it's not quite as good. Surprisingly, the animal can still do the task even if it has just one functioning ear. How is that possible? The animal's pinna. We talked about the external ear, or pinna, providing some nice spectral cues, and it looks like the spectrally complex stimuli are being used-- those spectral cues are available.
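As an aside on how a threshold like "75% correct at 5 degrees" is actually read off behavioral data: you interpolate the angle at which percent correct crosses the criterion. A minimal sketch in Python follows; the sample percent-correct values are invented for illustration, not data from any study mentioned here.

import numpy as np

# Estimate a minimum audible angle as the angle at which a localization
# task reaches the 75%-correct criterion, by linear interpolation.
angles_deg = np.array([1, 2, 5, 10, 20, 40])
pct_correct = np.array([52, 58, 75, 88, 96, 99])  # hypothetical data

threshold_deg = np.interp(75.0, pct_correct, angles_deg)
print(f"minimum audible angle at 75% correct ~ {threshold_deg:.1f} deg")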
But with just one ear, even one with a good pinna on it, the minimum audible angle is more like 10 to 12 degrees. So the best performance is with two ears. Now, why am I going over this paradigm? Because people have then taken experimental animals and studied them after lesions, and we're talking about the auditory cortex. So let's look at the results of lesioning the auditory cortex on sound localization. It looks like in this study they're using the array of loudspeakers. The lesion is located in the auditory cortex, on the right side-- this is the occipital, or back, part of the cortex, and this is the front. When a lesion is made on the right side of the auditory cortex, the animal has problems localizing sound in the opposite hemifield: it doesn't know whether it's this speaker, this speaker, this speaker, or this speaker that's emitting the sound, and it performs at chance on that side. However, the animal can distinguish between the speakers in the hemifield ipsilateral to the lesion, which suggests that the intact auditory cortex is mediating that behavior. So there's a deficit contralaterally, in the opposite hemifield. And as you can see from this lesion, which was located smack in the middle of A1, A1 lesions effectively knock out sound localization behavior. So these early studies suggested that A1 is critically important-- necessary-- for correct sound localization behavior. In these early studies, the lesions were actually made surgically, by taking out some cortical tissue. As time went on, into the mid-1980s, the lesion studies became more elegant: before the lesion was made, a frequency mapping of A1 was made. That frequency mapping is shown here-- this is best frequency, or CF (those terms are synonymous), and this is distance along the cortex. And not the entire field A1 was removed, just a particular part, just a little distance here, so it was known from the mapping which CFs were affected by the lesion; the other CFs were left intact. You can test the animal at any frequency you want: frequencies in the intact area, or frequencies in the lesioned area. And it was shown very clearly by plotting performance-- this is the performance axis, where downward is very accurate; this must be the number of mistakes made, so 0 is no mistakes-- that at the low frequencies, where the cortex is intact, performance is good. At the mid frequencies, where the cortex is lesioned, performance goes to chance. And at the high frequencies, again where the cortex is intact, performance shows very few errors; it's very good. So this elegant experiment shows that sound localization proceeds through independent frequency channels: the part of A1 that's responsive to low frequencies mediates localization of low sound frequencies, and the part that's responsive to high frequencies mediates high-frequency sound localization. A very beautiful demonstration. Now, those lesions were done with the techniques available at the time-- it's very simple to go in and destroy a part of cortex. And for one reason or another, people decided to revisit these lesion experiments, even though they were very convincing and had been done in many, many different species. They revisited them with a completely different way of making a lesion: the method of inactivation by cooling.
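Before turning to cooling, the frequency-specific lesion result can be captured in a few lines. This is a deliberately trivial sketch of the logic, not of any published analysis; the lesioned CF band is a hypothetical choice.

# Sketch of the frequency-specific lesion result: localization is at
# chance only for test frequencies whose A1 representation was removed,
# and remains accurate for frequencies represented in intact cortex.
LESION_BAND_KHZ = (8.0, 16.0)   # hypothetical lesioned CF range

def localization_performance(test_khz):
    lo, hi = LESION_BAND_KHZ
    return "chance" if lo <= test_khz <= hi else "accurate"

for f in (1, 4, 10, 12, 32):
    print(f"{f:2d} kHz test tone -> {localization_performance(f)}")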
So, cooling: maybe many of you have heard of this. This might be the auditory cortex-- the surface, layer 1, and the different cortical layers. The way you inactivate the cortex by cooling is to take a piece of tubing carrying cooling fluid, or coolant, if you will. You pump the coolant through the tube, and you lay the tube right on the surface of the cortex. Obviously, as the coolant comes through, it cools down first the top layer of the cortex, then the lower layers, and finally all of the cortex. And you can assure yourself that the cooling has inactivated the cortex by doing things like recording evoked potentials. What's elegant about the cooling experiments is that you can reverse them. I don't know if this is a word, but instead of coolant, you can use a "warmant," restore the cortex to body temperature, and the responses come back. What's very impressive is that these experiments can be done in animals that are actually doing a behavioral task-- localizing sound. And those experiments clearly confirm the earlier lesion experiments: if you cool A1, you get a deficit in localization ability in the contralateral hemifield. However, they have also come up with an interesting result: if you cool some other fields, you also change sound localization behavior. Field PAF, when cooled, also interrupts sound localization behavior-- that's the field posterior to A1. And there's a small field that's not named, but is right on top of the anterior ectosylvian sulcus, the AES; when that area is cooled, sound localization behavior is also disrupted. So I used to be able to say that A1 alone performs this critical function, sound localization. Well, we're not so sure about that anymore, because if you cool these other fields, you get a disruption of sound localization behavior. If you cool any of the other fields, like A2, or VPAF, or most of AAF, you don't get an interruption of the behavior. So what do we take home from that? It seems there are several fields that are important in sound localization behavior: A1; P, or PAF as it's sometimes called; and the region near the anterior ectosylvian sulcus. And it's a little bit controversial why the old lesions didn't show this. There were some old lesion studies which said that if you leave A1 intact and lesion all the other cortical fields, the animal can still do the task. That doesn't really fit with the cooling results, in which several fields are important. Now, don't let me leave you with the idea that sound localization is the only thing these fields do. A1 may be involved in hundreds of other tasks related to our sense of hearing; it's also involved in sound localization. That's the right way to think about it-- these fields probably do many things. And I think, if you got the gist of what Dr. Schiller talked about in vision, he's not a big fan of the idea that this little area of cortex does this little function, and over here, this little area does that function. He's more of a believer in holistic cortex function: to do a task, you employ a lot of cortex-- auditory cortex if you're doing an auditory task-- and perhaps the more difficult the task is, the more cortex you use. We'll see evidence of that next time, when we talk about language processing in humans from imaging studies. So, what else does cortex do besides sound localization?
Oh, I forgot to talk about auditory cortex in humans and how many tonotopic fields there are. So I brought this nice model of the primate brain. Where is auditory cortex in humans? This is a slice of the brain, as if you were to cut it like this and look at one slice, and these are the right and left temporal lobes. In the primate, you actually have a separate lobe of the brain called the temporal lobe. The primate is a little more difficult to examine than the cat: you have to pull open the sylvian fissure-- which separates the temporal lobe from the parietal cortex up here-- look on the superior surface of the temporal lobe, and find the site of A1. On that superior temporal surface, you have a little gyrus that was examined by an anatomist named Heschl, and Heschl's gyrus in humans is the site of A1. Some humans actually have two Heschl's gyri, and they have their A1 on one or both of them. Now, looking at it from the side view, the temporal lobe has three big gyri: the superior temporal gyrus, the middle temporal gyrus, and the inferior temporal gyrus. A1 is on the superior surface of the superior temporal gyrus, on a little bitty gyrus called Heschl's. I'm going to pass this model around-- A1 is indicated by a little piece of yellow tape there; you can take a look at it. That area, Heschl's gyrus, lights up very nicely in imaging studies when you present auditory stimuli. Here's an imaging study where the imaging plane was parallel to the sylvian fissure, or sylvian sulcus, capturing just this superior temporal gyrus, and the plane is looking down here. On Heschl's gyrus, you see the left and the right in this case, and just the right in this case, lighting up. You can, of course, use sounds of different frequencies, and change the frequencies of those sounds in a progression from high to low, and you can draw the progressions that you see in the imaging signals-- that's what's drawn with these arrows. This is from an MIT thesis by Tom Talavage. He showed that there were at least five clear progressions of frequency sensitivity, as if you were progressing along tonotopically mapped auditory cortical fields in the human. Remember, we saw four tonotopically mapped fields in the cat; here we have at least four, perhaps five, in humans. This one is labeled HG-- that's Heschl's gyrus, and that's probably primary auditory cortex. This is a view looking down on the superior surface of the temporal lobe, and it's what's called an inflated view-- like taking that cortex model and blowing it up like a balloon. The gyri are indicated by the lighter shading, and the sulci by the darker shading. These are the dimensions here: posterior-lateral is that direction. So we have multiple tonotopically organized areas in human auditory cortex. Now, the paper that we read for today's class, by Penagos et al., talks about a center near A1, near Heschl's gyrus, which lights up in imaging studies when the subject is presented with sounds that have a strong sensation of pitch. We talked about pitch a little earlier in the class, and this is the slide that I showed-- it has a different title now; I think it was originally titled something like "complicated sounds." Well, a complex sound is simply a sound that has multiple frequencies.
And the stimuli used in the paper are complex sounds in that they have multiple frequencies. Earlier, we talked about this in the context of musical sounds. Musical sounds almost always have a fundamental frequency and then a whole bunch of harmonics. To have a strong sensation of pitch, these musical sounds have a very tight relationship between the fundamental and the harmonics: it can't be a random relationship-- the harmonics actually have to be multiples of the fundamental. For example, this complex of tones-- 100 hertz, 200 hertz, 300 hertz, 400, and so on-- are multiples of one another. But if you had the fundamental be 100, the next harmonic 150, the next 190, the next 230, where they're not multiples of one another, that stimulus would not have a strong pitch associated with it. These musical sounds are interesting because they have a strong pitch. The pitch is almost always related to the lowest, or fundamental, frequency, and the pitch is very invariant. As long as you have this nice pattern of harmonics related to each other as multiples-- take these two: this is a guitar sound, and this is an alto saxophone sound, where the fundamental is the same. The harmonic amplitudes are completely different, but we recognize them as playing the same pitch. You can play around a lot with the amplitudes of these harmonics-- I drew them all the same, but clearly they can be any jumble of a pattern, as long as the frequencies are multiples of one another-- and you hear this as having the same pitch as that. Pitch is also very invariant to things like where the sound is coming from, and to how high in level the sound is. So pitch is a very fundamental attribute of sound. As defined in the psychophysics textbook, pitch is that attribute of auditory sensation in terms of which sounds may be ordered on a musical scale: this one's low, this one's in the middle, and this one's high. And so when cochlear implant users are first programmed, the audiologist takes this electrode and stimulates it, and the user says, yeah, that sounds like a low one. Then they activate the next electrode, and the audiologist asks: is this one higher, or is this one lower? If it's higher, then they route the higher frequencies from the speech processor into that electrode. So they do a pitch ranking in cochlear implants. Now, the pitch of a complex sound depends strongly on the fundamental frequency-- everybody knows that. But you can play little tricks with these stimuli. You can do something like remove the fundamental frequency. How does that change the pitch? Well, you might think this next harmonic becomes the new fundamental. But actually, removing the fundamental is just like playing around with the amplitudes of the higher frequencies: it doesn't change the pitch at all. This is called the missing fundamental. And that's actually lucky for cheap speakers that might not have very good bass: the fundamental is hardly there at all, but the piece still sounds musical and isn't changed a lot. Why does that happen? Well, this very nice pattern of multiples of 100 is still present, and so is the temporal pattern of all these harmonics. If you add them up and look at this thing in the time domain-- remember, this was a graph of frequency-- then whatever the waveform looks like, it's going to repeat every 10 milliseconds, because its repetition rate is still 100 hertz.
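Rather than drawing it, the claim is easy to check numerically. Here is a minimal sketch in Python: build a 100 Hz harmonic complex, delete the fundamental, and confirm via autocorrelation that the waveform still repeats every 10 ms. The harmonic amplitudes, signal duration, and sample rate are arbitrary choices for the sketch.

import numpy as np

# Missing-fundamental demonstration: a harmonic complex keeps its 10 ms
# repetition period even when the 100 Hz component is removed.
FS = 16000                       # sample rate, Hz (arbitrary choice)
F0 = 100.0                       # fundamental, Hz
t = np.arange(0, 0.2, 1.0 / FS)  # 200 ms of signal

def complex_tone(include_fundamental=True):
    start = 1 if include_fundamental else 2
    return sum(np.sin(2 * np.pi * n * F0 * t) for n in range(start, 11))

def period_ms(x):
    """Estimate the repetition period from the autocorrelation peak."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags >= 0
    lo = int(0.002 * FS)                                # skip the zero-lag peak
    best_lag = lo + np.argmax(ac[lo:int(0.02 * FS)])    # search 2-20 ms
    return 1000.0 * best_lag / FS

print("with fundamental:    period ~", period_ms(complex_tone(True)), "ms")
print("missing fundamental: period ~", period_ms(complex_tone(False)), "ms")

Both lines print a period of about 10 ms, which is the regularity the pitch follows.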
As I said, I'm not a very good artist, but the drawn waveform is going to be the same here, and the same here: every 10 milliseconds, it repeats its pattern. It has the same regularity, even if you remove the fundamental. Now, you may not believe me, so let me give you a demonstration. In this demonstration, a whole bunch of harmonics are presented at first; then, in the second presentation, the fundamental is removed, and you should listen to see whether the pitch that you hear changes at all. In the next presentation, they remove the next harmonic, and so on-- I think they end up removing four different harmonics after presenting the complete stimulus. So listen to this demonstration of the missing fundamental. [AUDIO PLAYBACK] -Pitch of the missing fundamental, or virtual pitch. You will hear a complex tone with 10 harmonics, first complete, and then with the lower harmonics successively removed. Does the pitch of the complex change? The demonstration is repeated once. [SERIES OF PITCHES] [END AUDIO PLAYBACK] PROFESSOR: I think it's pretty clear. Does everybody want to discuss this? When you lose the first one or two harmonics, the pitch doesn't change a great deal; by the end of the demo, the sound is starting to sound a lot higher. But if you move some of these lower harmonics around, or decrease their amplitude-- this one being the low fundamental-- the pitch doesn't change a great deal. Now, that is what they did in the paper we read for today. They have a complex tone with a whole bunch of harmonics, and they do a clever thing, like in this demonstration: they select just some of the harmonics to present to the observers in the imaging study. The cleverness of the study is that by careful filtering of this harmonic pattern, they can give you some stimuli that have really strong sensations of pitch, or-- in one case, which I think is condition 2-- a very weak sensation of pitch, because of the particular harmonics they've chosen. So they have several stimuli: some give you a strong sensation of pitch, and one has a very weak sensation of pitch because of the clever way it's been filtered. And the further cleverness is that all of these stimuli have the same regularity in terms of their temporal waveform. So the temporal waveform hasn't changed in its regularity, but the subject's impression of the pitch-- whether it's a strong pitch or a weak pitch-- has changed. By weak pitch, I mean something that sounds like a noise, or a click: those stimuli don't have a strong sensation of pitch because they're random; they don't have this nice pattern of harmonics. Now, I wasn't convinced by this verbiage and the figure, so I decided I wanted to listen to these stimuli myself. And it was convenient, because I know all three authors. Hector Penagos, when he wrote this paper, was a graduate student in the Speech and Hearing Bioscience and Technology program. Jennifer Melcher is a faculty member over at the Eaton-Peabody Lab-- her office is right next to mine. And Andrew Oxenham was a faculty member here at MIT, and has since moved to the University of Minnesota. So I started asking the authors, because they're all friends of mine, whether I could have the demos. One of them said, well, it's been a long time; I'm not sure I still have them. And the second author I went to-- I won't say who it is-- said, I've got them right here.
So that author sent them to me right away, and I have them, and I'll play them for you. Now, you'll listen to these stimuli. What was surprising to me, and what I didn't get from the methods of the paper, is that they don't just keep presenting the same thing over and over: the pitch actually moves around. That's one of the nice parts of this demo-- you can actually tell that the pitch is moving around in the stimuli with a strong sensation of pitch. A second thing they did was add a little bit of background noise. It turns out that when you present a whole bunch of harmonics through a speaker, the speaker introduces a little distortion, and your ear introduces distortion, and they wanted to mask that distortion out. The distortion is pretty low in level, so this continuous background noise is a pretty effective masker. I think you can still hear that these stimuli, in some cases, have pretty strong sensations of pitch. So I'm going to start with condition 1, which they say has a strong sensation of pitch, and you can judge for yourself whether you hear the pitch moving around. [PITCH WITH STATIC] PROFESSOR: Maybe it will go on forever-- I don't know how long it goes on for. Well, anyway, could you hear those moving around? You could rank them. Here's number 2. [PITCH WITH STATIC] PROFESSOR: It's moving around, right? Number 3. [LOWER PITCH WITH STATIC] PROFESSOR: OK. Now, to me, those have strong sensations of pitch-- I'm a believer. Here's the last one, which they say has a weak [INAUDIBLE]. [PITCH WITH STATIC] PROFESSOR: OK. At first, I was expecting to hear no change in pitch at all, but actually, the pitches change a little bit. So when you go back and call it a weak sensation of pitch-- OK, right? So, are we believers? Or does anybody want to say-- all right. What happened in their imaging study? Well, this is a pretty small figure, but it summarizes the results: there was an activation of the circled area, which is near Heschl's gyrus-- just anterior to it, I believe. That place had a high activation in the case of the stimuli with strong psychophysical sensations of pitch, but a low activation in the case where there was a weak sensation of pitch. Other areas of the brain-- for example, Heschl's gyrus itself-- lit up for all of the conditions. And what's interesting about this study is that they also examined some of the subcortical nuclei we've been talking about-- for example, the inferior colliculus and the cochlear nuclei-- and these are the activations for those centers. The cochlear nucleus activates pretty much the same for all of the conditions; this black one is the condition associated with the weak sensation of pitch. The inferior colliculus shows perhaps a little less activation for that stimulus, but not significantly so. But in this center for pitch salience in the auditory cortex, there's clearly a lot less activation for the stimulus with the weak sensation of pitch-- about the same amount of activation as you'd find there with noise bursts. So that, I think, clearly demonstrates-- and other imaging studies have also demonstrated-- that this area is the region of cortex that becomes very active when we experience stimuli with strong sensations of pitch. In experimental animals-- for example, in recordings from the marmoset auditory cortex-- you find neurons in an equivalent area that respond very nicely to harmonic complexes that have strong sensations of pitch.
If you remove so many of the harmonics that the pitch changes a lot, those neurons fire less. And remember, most neurons in the cortex are finely tuned to sound frequency; in these cases, you can remove a whole bunch of the lower harmonics without changing the response, suggesting that the neurons are really signaling that there is a pitch associated with the stimulus, rather than that a certain frequency is present. So that's the bottom line for this study. Let me just mention a couple of things that make auditory experiments difficult when you're trying to image the brain. Has anybody listened to an fMRI machine-- an imager? Very loud, right? So you have the problem of getting subjects to listen to the stimulus you intend to present to them, rather than to the imaging noise itself. In this study, they went to great lengths to reduce the imaging noise: the subjects wore protective earmuffs, and the stimuli were delivered from loudspeakers through long tubes leading into those earmuffs. Of course, you can't have a speaker right near the ear, because there's a magnet associated with the speaker, so the speaker has to be outside the imager. They also actually turned one of the imager's pumps off. There is also a lot of challenge in imaging structures as small as the cochlear nuclei and the inferior colliculus, and they said that to improve the detection of activation in these brainstem structures, the data were acquired using cardiac triggering. Does anybody know what that means? Well, when your heart beats, it pulses all the arteries and actually moves the brainstem-- the brainstem is so small that it can be moved. The cortex is moving too, but its areas are generally bigger. If you didn't take care of the heartbeat, the brainstem might be imaged in this position at one point, and in the next image it might be over here. So with cardiac triggering, they record the EKG from the subject, and when they see the QRS complex-- or whatever waveform from the EKG-- they say: that's the time to take the image. They take the image only at a certain point relative to the cardiac cycle, so the brain, even though it's moving, has always moved to the same position. That's a challenge associated with imaging small structures like these brainstem nuclei. So then, another property, at least of this field of the auditory cortex, is to process stimuli that have high pitch salience. We've now associated two functions with auditory cortex, just as a summary: one is processing stimuli with high pitch salience, and the other is processing sound stimuli that change in location. Those are the two things you can really hang your hat on in terms of what the auditory cortex does functionally. For the pitch-sensitive function, you have this area near A1; for localization, you have A1, the posterior field, and a field near the anterior ectosylvian sulcus, as we know currently. All right. Questions? If not, have a good Thanksgiving. Don't eat too much-- or enjoy eating too much, I guess. I'll see you on Monday.
MIT_904_Sensory_Systems_Fall_2013
19_Descending_systems_and_reflexes.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, I guess we'll get started. So last time we talked about hearing loss, and deafness, and cochlear implants. Any questions? So, did anyone read the Sunday New York Times yesterday? My email really lit up when this front-page article showed up, titled "Ground-Shaking Noise Rocks NFL, and Eardrums Take a Big Hit." The reason my email lit up is that my lab director is quoted-- and it's a big deal to be quoted in the Sunday Times, right? The article is about NFL games where they have a noise meter down on the field and encourage the fans to make a lot of noise. Apparently, at a recent Seattle Seahawks game in September, the crowd was recorded at 136 decibels. It says later in the article: "Fans accustomed to hollering"-- is that what they do at Seahawks games?-- "may scoff at the warnings as nanny-state silliness. But to auditory experts, the danger is very real. People think it's cool or funny, or whatever. But there is increasing evidence that if your ears are ringing, damage is happening, said M. Charles Liberman, a professor of otology at Harvard Medical School and the director of a hearing research lab at the Massachusetts Eye and Ear Infirmary." So he's the director of the lab where we're going to have a lab tour later. "There's something irreversible going on, and it's only going to get worse as you get older. Liberman's research shows that even after the immediate effects of noise exposure subside-- the ringing, the muffling, the feeling of pressure-- ears do not really recover." So, noise damage in the news. I'll pass it around. All right. So yeah, question? AUDIENCE: I just have a quick question. When I'm down at the T, it's extremely loud. Do you happen to know how loud that is in decibels? PROFESSOR: I've never measured it on the T. AUDIENCE: I had a friend who tried, and the thing on his phone wouldn't go high enough. PROFESSOR: Yes-- a lot of folks have a sound level meter app. So how high was the maximum? AUDIENCE: The max was 100, and it more than maxed out. PROFESSOR: Yeah. Well, I wouldn't be surprised if it's 110 dB on the T. Certain runs between stops are louder than others, as you know if you ride the T. I think that's not so damaging that it would hurt your hearing. But if you were the conductor or the driver on the train, you'd really have to worry about it. On the other hand, if you're the driver, you have to have good hearing to respond to certain signals, so it's not as though they can just wear hearing protection. In this article about NFL fans, they said that at some games they hand out those little foam ear plugs, which attenuate the noise by as much as 20 dB. So if you're listening to 120 dB, put the ear plug in and it goes down to 100. But the article also said that some people, like children, can't fit the foam ear plugs in their ear canals, so they don't work for them. And some people, like people who work on the T, need good hearing. So today, we're going to move on and get back to the auditory brainstem.
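As an aside before returning to the brainstem: the earplug arithmetic above is easy to verify. A minimal sketch in Python-- the 120 dB stadium level is the figure quoted above; the ratio conversions are standard decibel definitions.

import math

# A 20 dB attenuation is a factor of 100 in sound intensity (10 in
# sound pressure), so 120 dB SPL at the stadium becomes 100 dB SPL
# at the eardrum.
def attenuate(level_db_spl, attenuation_db):
    return level_db_spl - attenuation_db

stadium = 120.0
at_ear = attenuate(stadium, 20.0)
intensity_ratio = 10 ** (20.0 / 10)   # 100x less intensity
pressure_ratio = 10 ** (20.0 / 20)    # 10x less pressure
print(f"{stadium} dB SPL -> {at_ear} dB SPL "
      f"({intensity_ratio:.0f}x intensity, {pressure_ratio:.0f}x pressure)")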
We had talked about the auditory pathway that comes up from the cochlea through the auditory nerve and into the cochlear nucleus, which is the very lowest level of the brainstem. Now we're going to talk about some higher levels of the brainstem, and especially about pathways that are called descending systems, and about brainstem reflexes-- we'll define what those two things are. We'll talk especially about one descending system that forms a brainstem reflex: the olivocochlear neurons-- their anatomy, their functions, and their reflex pathway. And let me just stop and tie in once again with hearing loss. The two brainstem reflexes that I'm going to talk about clearly protect the ear from damage due to high-level sound. That is certainly one of the functions of the olivocochlear neurons, and we'll see how that takes place. The second brainstem reflex we'll talk about is the middle ear muscle reflex. There are two muscles in your middle ear, and when they contract, they make your sense of hearing less sensitive. Why would you ever want that to happen? Well, again, these muscles contract when you're in a high-level sound environment, like at the NFL game, and one of their functions is to reduce the sound getting into your inner ear-- to prevent damage from the high-level sound that might otherwise damage your hair cells. So that's the subject of today's talk. Now, what do I mean by reflexes and descending systems? This is a nice diagram from Michael Slama, which shows in solid lines the pathways going up, which we've been talking about. Those pathways are sometimes called ascending systems because they go up-- it says here in red that the solid lines are ascending pathways. By ascending, we mean from a lower level up to a higher one, and ultimately to the highest level in the neural pathway, the cortex. There are analogous descending systems, shown here in dashed lines, and I think you can see some of them. For example, here's auditory cortex, and there are some dashed lines, which means cell bodies sitting in auditory cortex that project their axons down and end at the next lower level, the auditory thalamus, or medial geniculate. That would be an example of a descending system, because the information starts at the higher level and goes down to a lower one. At every junction between nuclei, you can find descending systems. And in the auditory pathway, even at the lowest level, you have a descending system that starts here in the superior olive and goes all the way out to the cochlea-- we talked about efferent nerve endings on the hair cells and the nerve fibers. That descending system is called the olivocochlear system of neurons, because it starts in the olive and goes to the cochlea. OK, so that's what descending systems are: pathways starting at higher levels and going down to lower levels. Now, how does that work out with brainstem reflexes? Well, we've said that one of the functions of these olivocochlear neurons is to prevent the cochlea from being damaged by high-level sounds. When would you activate that system? Obviously, when you hear a loud sound. So the ascending pathway has to come into the brain, synapse in the cochlear nucleus, go up to the olivary complex, and then come back down to the cochlea to prevent the damage. And that little loop I just diagrammed can be called a reflex pathway. So let's define what a reflex is. What is a reflex? Anybody know?
What does it mean to act reflexively? AUDIENCE: Automatic reaction. PROFESSOR: Right. So an automatic response. And certainly, these olivocochlear neurons turn on automatically. And sort of between the lines of that definition is that the reflex pathway is operating down here in the automatic portion of your auditory pathway, in the brainstem. The part where the auditory cortex or the other regions of cortex don't have to get involved. You don't have to say, OK, I'm in a loud environment. Should I respond with this reflex? OK. Yeah, maybe I will. OK, respond. You don't have to think about it. It's automatic because it happens in the parts of the brain where things happen automatically. What other things happen in the brainstem? Well, control of breathing. You have the motor neurons that come to your chest muscles and your diaphragm that enable you to breathe. Those motor neurons are located in the brainstem. And the control of respiration is a brainstem function. It doesn't have to go all the way to the cortex so that you have to think about it. Now, that's not to say that reflexes can't be overridden. With the breathing reflex, you can say, OK, I'm going to stop breathing for a minute. OK, I'm going to hold my breath. So you can, via your higher centers, send information down here. Maybe some of these other descending pathways come down to the olivocochlear neurons and say, I'm not going to do this reflex for a moment. Of course, you eventually have to start breathing again. But you can have higher-center control of these reflexes. It's not that the higher centers aren't involved. OK, so another reflex would be your patellar tendon reflex, which the physician tests when you go to your doctor's office. They hit your patellar tendon with a little mallet and see that your leg contracts. And apparently, everyone passes that test, right? You can't walk into the doctor's office without having some kind of reflex. But they're actually testing for hyperreflexia in that case. Some of these reflexes can be in overdrive, and that's what the physician is testing for. OK, so we are really interested in what all these descending systems do. What do the descending systems do in general? Well, it actually ends up being a very hard issue to study. We know that those systems are there, but what good are they? What do they do in our sense of hearing? So the classic, old-fashioned way of studying such a system would be to go in and make a lesion and see how the animal's behavior, or the person's behavior, changes, or do some kind of test. But it turns out most of these descending systems are intertwined with their corresponding ascending pathway. So it's very difficult to go in and make a cut or a burning lesion that lesions just one of the descending pathways without affecting the ascending pathway, which would complicate the interpretation. The reason that some of these systems-- for example, the olivocochlear neurons-- are amenable to experimentation, and the reason we know a lot about them, is that we can selectively lesion and stimulate them in isolation from the ascending systems. And let me show you a diagram of the olivocochlear system that illustrates that. So these are the olivocochlear neurons, these stars here. There are two types. The red are the so-called medial olivocochlear neurons because they're sitting in the medial part of the superior olive. The green ones are the so-called lateral olivocochlear neurons because they're sitting in the more lateral part of the superior olive.
And this diagram shows the cell bodies of those olivocochlear neurons that are innervating this cochlea-- sending axons out to this cochlea on the right side. And that defines this cochlea as the ipsilateral cochlea. Ipsi means the same. So that cochlea gets a name. That's the ipsilateral cochlea. And you can see that these green lateral olivocochlear neurons are basically on the same side of the brainstem as the cochlea that they innervate. But these medial olivocochlear neurons are distributed on both sides of the brain. So there's a little difference in anatomy. Now, these are a little bit like the neurons that we call motor neurons. Everybody knows what a motor neuron is, right? It's going from the brain or the spinal cord out to the muscle. That's why it's called a motor neuron. And when it fires off, it contracts the muscle that it's innervating. It turns out that for all the muscles on one side of the body, let's say the ipsilateral muscles, the motor neurons are located on that same side of the brain. So right-side muscles are always innervated by right-side motor neurons. So this anatomy is a little bit different from the anatomy of motor neurons. Even though these neurons are having an effect in the cochlea, they're not innervating muscles. They're innervating hair cells in this case. They do use the same neurotransmitter as motor neurons, though. The olivocochlear neurons use the neurotransmitter acetylcholine, and so they would be called cholinergic. OK, so they synthesize the acetylcholine in their cell bodies, it's transported down the axons to the nerve terminals out in the cochlea, and they release it there. Now, where do they release it to? They release it onto their targets. And out in the periphery, these medial olivocochlear neurons, or the MOC neurons, target the outer hair cells. And they release acetylcholine directly onto the outer hair cells via their synapses. The lateral olivocochlear neurons come out to the periphery and they release the acetylcholine at their synapses on the auditory nerve fiber peripheral dendrites. OK, so the innervation is very distinct. What are the auditory nerve peripheral dendrites? Well, those are the ones we've been talking about. We've been talking about auditory nerve fibers. They start at the inner hair cells for the most part. They send messages. That's what this arrow is indicating. Send messages into the brain. When there's a sound, the membranes move. The outer hair cells are electromotile. The inner hair cells respond. They send messages, synaptic messages, to their nerve fibers. The nerve fibers spike and they send information to the brain through the cochlear nucleus. These efferent fibers are sending messages the opposite way. They're starting in the brain and going out to the cochlea. And out in the cochlea, this arrow means that the information is coming from the brain out to the outer hair cells or the auditory nerve dendrites here. Is everybody clear about that? Now, the anatomy works out so that you can do some pretty interesting things. You can make a cut of this nerve bundle, the olivocochlear nerve bundle. You can stimulate it. And right there, underneath where the word "brainstem" is, there's a great place to make a cut or to electrically stimulate and activate this system. Well, why don't we use sound to stimulate the system? Well, it's a little messier. It's cleaner to activate this bundle with electrical stimulation because you can put your stimulating electrode right on it and only activate that system.
So that's one big advantage. You can selectively activate these olivocochlear neurons and then study, well, what the heck do they do out in the cochlea? What changes do they have when you activate them? And what changes happen when you deactivate them by making a cut in the system and de-efferenting, if you will, the cochlea? And as this list shows, there are a number of functions. Each of these functions has some experimental support, and I don't think I would be able to pick which is the most important. Well, sure, if you're at an NFL game, you are probably experiencing a very high sound level. And so it becomes important in that situation to protect the cochlea from damage. So it depends on the situation. But we're going to go through these in turn. Since I mentioned damage, and since I don't have a slide on it, let me say how that experimental evidence arises. It's a very simple type of experiment. You take an animal. You put it in a high-level sound environment comparable to an NFL game, 120 dB. Take the animal out, and you study its cochleas. You can count the hair cells. You can measure the responses. The hearing has become much less sensitive. Some of the hair cells have been killed. OK, no big deal. We went over that last time. Take a second animal. And in that animal, cut the olivocochlear bundle going out to, let's say, the ipsilateral cochlea. The olivocochlear neurons going to the other side, the contralateral cochlea, leave those intact. OK, beautiful experiment, because within just one animal, one side's cochlea has been de-efferented-- you have cut off its efferent fibers-- and the other side has a normal innervation. Expose the animal to 120 dB SPL. Do the same response metrics. Test the responses of the cochlea. Look at how many hair cells have been killed. You find that there is a huge difference between the two ears. Where the efferents, these olivocochlear neurons, have been cut, there's a lot of damage and a lot of loss of sensitivity. The intact side has been protected. There is less damage and better responses. It's more sensitive. So it's a very, very nice, very elegant experiment that's been done many times. Probably both of these systems, the medials and the laterals, provide such protection from damage. So we can say one of the functions then is protection. What do I mean by these other functions-- shift the dynamic range of hearing, reduce the effects of noise masking, and reduce hearing sensitivity when paying attention to visual tasks? OK, we're going to go over those one by one. For the first one, the experiment runs like this. You record from these afferent auditory nerve fibers. And instead of cutting these olivocochlear neurons, you stimulate them. You can put a stimulating electrode right down there, right below the word "brainstem," and activate this bundle. What happens to the responses of the nerve fibers when you activate this olivocochlear system? That's what's shown here. And this experiment has been done since the 1970s. This is an old experiment. This is the response in terms of the firing rate from the auditory nerve. And in this case, a tone is on. So you're really driving the auditory nerve fiber. So the firing rate is high. Then, during the second black bar here, you stimulate the olivocochlear neurons that are going out to the cochlea. And look what happens to the firing rate. It goes almost down to zero.
When you turn that stimulation off, the firing rate comes back to about what it was before. There's been a huge inhibition of firing rate in the auditory nerve. Now plot the firing rate as a function of the tone burst level-- so now we're going to do tones at different levels. At low tone levels, where the fiber is firing spontaneously, there's hardly any effect. At mid-levels, like we saw illustrated here-- this is without stimulation, this is with stimulation-- there's a huge decrease in firing rate. And up at the highest levels, where the fiber has become saturated-- saturation means that even though you're increasing the tone burst level, you're not increasing the firing rate coming from the fiber-- there's very little effect of the stimulation. How can that help us? We have a dynamic range problem in hearing. Forget about the stimulation right now and look at the solid curve. The dynamic range of most auditory nerve fibers is just 20 or 30 dB before saturating. That means the fiber's rate rises and then it saturates. What happens when we get to the NFL game, or in the better case, what happens when we're in a restaurant or a bar and we want to listen to the speaker across the table from us? Well, that's a pretty high sound level, 80 dB. It's not damaging, but it's high enough to saturate our auditory nerve fibers. That means when the sound of the person across the table from you, their voice, adds to the background, it's not going to change the firing rate of your auditory nerve fibers. And the brain won't know, except by seeing the person's lips move, that there is speech. It's much better for any kind of signal to be within the rising part of the firing rate curve, not in the saturated part where there's no change in firing rate. OK, so how can the olivocochlear system help us with this dynamic range problem? Well, I've been emphasizing the decrease of firing rates. But as you can see from this curve with stimulation of the OC neurons, the effect is to shift the function over. In this case, it's about a 20 dB shift. And now, at 70 dB, you're in the dynamic range of the function. You're not saturated anymore as you were before. So one of the functions, we think from this type of experiment, is that the MOC efferents kick in-- reflexively kick in-- when you're in a high-level sound environment. And they shift the firing rate functions over so you can now understand a speaker's voice against a high-level background. Now, there are some other factors that can increase the dynamic range of hearing, which I don't think are important for the course. But we have talked about two-tone suppression. That certainly does the same kind of thing. We're going to talk about contraction of the middle ear muscles, which also helps you with the dynamic range problem. So this is clearly one important function of the olivocochlear neurons. That is, shifting dynamic range. Now, is there experimental evidence, besides what you see here looking at firing rates of auditory nerve fibers, to suggest that the olivocochlear system really does this? There is a little bit. That is, animals trained to detect changes in tones in a background of noise do it a little bit better when they have an intact olivocochlear system. You can't do these experiments in humans because it's very difficult to turn this system off. We don't know a way of turning it off or interrupting it. And people probably call this system into play reflexively.
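[Here is a minimal numerical sketch of the dynamic range argument: a saturating rate-level function and a 20 dB rightward shift from MOC activation. The sigmoid shape and all the parameter values are illustrative assumptions, not fits to the data in the figure.]

    import math

    def rate(level_db, spont=10.0, max_rate=200.0, mid_db=40.0, scale=5.5):
        """Toy saturating rate-level function for one auditory nerve fiber.
        The 10-90% dynamic range here is roughly 25 dB, as in the lecture."""
        drive = 1.0 / (1.0 + math.exp(-(level_db - mid_db) / scale))
        return spont + (max_rate - spont) * drive

    def rate_with_moc(level_db, shift_db=20.0):
        """MOC activation modeled as a ~20 dB attenuation of the effective level."""
        return rate(level_db - shift_db)

    for level in (0, 40, 70, 100):
        print(level, round(rate(level)), round(rate_with_moc(level)))
    # At 70 dB the unshifted fiber is essentially saturated (~200 spikes/s),
    # but with MOC activation it is back on the rising part of its curve,
    # so the voice across the table can still modulate its firing rate.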
So it's difficult to do these kinds of experiments in humans. Now, there's another important function of the olivocochlear system, which is probably to reduce the effects of noise masking. And we haven't talked too much about masking in this course. I mean, two-tone suppression is a kind of masking. Masking is where you're listening to one sound and a second sound comes in and interferes with your ability to detect that first sound. So I have a demonstration, though, that I think will convince you that masking is very important. And it runs like this. In the demonstration you're listening to tone bursts, which are the pink things here. And they're at 2,000 Hertz, so a little bit above our 1,000 Hertz standard middle-of-the-hearing-range frequency. And it's going to give you 10 tone bursts, each successively softer in sound level. And you're supposed to count how many steps you can hear. And I think there are 10 of them. We should be able to hear all 10 with the way I have set the level. The second part of the demo is now the signal is masked with broadband noise. So the broadband noise will come on first. Shh. And these tone pips will be on top of that noise. And you're supposed to count now how many tone pips you can hear within the broadband noise. And I think there's a little verbiage before this. Just ignore that. [AUDIO PLAYBACK] -Critical bands by masking. You will hear a 2,000 Hertz tone in 10 decreasing steps of 5 decibels. Count how many steps you can hear. Series are presented twice. [BEEPING] [END AUDIO PLAYBACK] PROFESSOR: OK, so could everybody hear most of them? 10? All 10? OK. Here's the masked. [AUDIO PLAYBACK] -Now the signal is masked with broadband noise. [BEEPING] [END AUDIO PLAYBACK] PROFESSOR: OK, how many now? AUDIENCE: Five. PROFESSOR: Five? I mean, this is not a bad illustration, right? When the pink tone bursts get within the black noise, they pretty much disappear. So clearly the noise was an effective masker of the tone pip. Now, we had the example earlier of the phenomenon called two-tone suppression. And that was measured in an auditory nerve fiber. And everybody should be able to draw an auditory nerve fiber tuning curve. On the x-axis, we have the frequency of the sound in kilohertz. And on the y-axis, we have the sound pressure level for a response. Auditory nerve tuning curves, at least those with high CFs, look like this. And the characteristic frequency is the frequency right here. In two-tone suppression, what we had is two tones. The first tone is a probe tone, which is placed inside the nerve fiber's response area. And if you look at a graph of firing rate-- so this is firing rate-- and you turn that tone on, the firing rate is going to go way up. Maybe 100 spikes per second, because it's within the response area. Now, the second tone, sometimes called the suppressor, is put outside the response area but close to it. OK, so here's the probe. This is the probe tone, high firing rate. And now a little bit later, we're going to turn on a second tone. And this is called the suppressor. And I haven't drawn this very well, but when the suppressor goes on, the firing rate can come back down. Then the suppressor goes off first, and the rate comes back up. OK. So clearly, at least in the discharges of auditory nerve fibers, you can have suppressors outside the response areas that decrease the response to probes. This probe tone is going to be signaled by auditory nerve fibers with CFs close to that frequency.
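[For anyone who wants to recreate the masking demo, here is a minimal sketch that synthesizes 2,000 Hz tone pips dropping in 5 dB steps, with an optional broadband noise masker. The durations, amplitudes, and the choice to write WAV files are my own; they are not the parameters of the actual demo recording.]

    import numpy as np
    import wave

    FS = 44100  # sample rate, Hz

    def tone_pip(freq=2000.0, dur=0.2, amp=0.5):
        t = np.arange(int(FS * dur)) / FS
        return amp * np.sin(2 * np.pi * freq * t)

    pieces = []
    for k in range(10):                          # 10 pips, each 5 dB softer
        amp = 0.5 * 10 ** (-5 * k / 20)
        pieces.append(tone_pip(amp=amp))
        pieces.append(np.zeros(int(FS * 0.3)))   # silent gap between pips
    signal = np.concatenate(pieces)

    masked = signal + 0.05 * np.random.randn(len(signal))  # broadband noise

    def write_wav(name, x):
        x16 = np.int16(np.clip(x, -1, 1) * 32767)
        with wave.open(name, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(FS)
            w.writeframes(x16.tobytes())

    write_wav("pips.wav", signal)
    write_wav("pips_masked.wav", masked)
    # The softest pips should disappear into the noise in pips_masked.wav.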
So many auditory nerve fibers have CFs close to 2 kilohertz. This noise has energy throughout the frequency range, if it's absolutely white noise. Some of the noise will be within the response area, but some of it will be in suppression areas. And these suppression areas can be on either side of the excitatory area. And they can be big. And in some cases, they can overrule the excitation, or at least decrease it. So some of the reason you couldn't hear the tone when it was masked by the noise is two-tone suppression. This is sometimes called suppressive masking. There's another kind of masking that's important-- it may be equally important-- and it's called adaptive masking. And it comes from the process called adaptation. Almost all sensory systems have adaptation, which means that when you turn a stimulus on, you get a vigorous response. And even though the stimulus stays on, after a while the response dies down a little bit. That process is called adaptation. So I've illustrated the process of adaptation for auditory nerve fibers in this next graph. Here's that pink tone burst at 2,000 Hertz we were listening to. There's the auditory nerve response to it. Right as the tone burst goes on, there's going to be a vigorous discharge, which dies down and becomes smaller-- still a discharge, but a smaller discharge. And that process is called adaptation. Where do you think that process arises? OK, while you're thinking about that, I'm going to draw a picture of what's happening here. So you have the three rows of outer hair cells. We have the inner hair cell. We have the auditory nerve fiber. And we're recording here. And we're saying, we turn the sound on and you get a whole bunch of spikes from that single auditory nerve fiber. But after a few milliseconds or so, the response dies down. What experiment could you do? We don't know where that process is arising. We can't explain it. What experiment would you do to study where that comes from? Anybody? Here's the tone. Here's the response from the nerve fiber. What do we need to figure out where that adaptation is taking place? Well, how about recording from somewhere else? What about recording from the hair cell? OK, if you do that, the hair cell doesn't fire spikes, but it has a receptor potential in response to the sound. The receptor potential goes on and stays on. OK, where is adaptation taking place? Well, somewhere between the hair cell and the nerve fiber. The hair cell doesn't adapt; the nerve fiber does adapt. What could explain adaptation then, given that? Anybody? OK, what do we have here? We have a synaptic ribbon with lots of synaptic vesicles. The tone goes on. You have a whole bunch of synaptic vesicles. You release them because the hair cell has depolarized. You have a burst of auditory nerve firing. And you can make new synaptic vesicles, right? Yeah, but it takes time. OK, so as time goes on, you've released all of these, or many of them. And you can make some new ones, but maybe not quite as fast as you've released them. So you deplete your synaptic vesicles. The hair cell's still responding. There are fewer vesicles, so less neurotransmitter is released to the nerve, and the nerve firing dies down. So adaptation is often ascribed to the diminished release of neurotransmitter. And so far, that's all adaptation to a single tone. How does that explain masking? Well, your nerve fiber responded to the noise-- at the very beginning you heard that noise come on-- shh. And then the tone came on a little bit later.
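[The vesicle-depletion account of adaptation can be captured in a few lines. This is a minimal sketch under my own assumptions: a pool that releases in proportion to its size while the tone is on, and refills at a fixed slower rate.]

    # Toy synaptic vesicle pool: release scales with pool size during the tone,
    # replenishment is slower, so the drive is vigorous at onset and then adapts.
    pool = 100.0          # available vesicles (arbitrary units)
    REFILL = 2.0          # vesicles made per time step
    RELEASE_FRAC = 0.2    # fraction of pool released per step while tone is on

    for step in range(25):
        tone_on = 5 <= step < 20
        released = RELEASE_FRAC * pool if tone_on else 0.0
        pool = min(pool + REFILL - released, 100.0)   # pool can't overfill
        # 'released' stands in for the instantaneous auditory nerve drive
        print(step, round(pool, 1), round(released, 1))
    # Release jumps at tone onset (full pool), then settles to a lower steady
    # state where release roughly equals refill: that settling is adaptation.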
If the tone is high in level, sure, the hair cell is still going to have some synaptic vesicles to release. But if the tone is very soft, the hair cell is not going to be able to release synaptic vesicles, because they've already been released at the beginning of the noise. So adaptive masking is where you have, to a certain extent, run out of hair cell neurotransmitter. And there's none, or much less, left to release. Clearly, adaptive masking means that this fiber is also responding to the masker, the noise in this case. So in adaptive masking you have to have a stimulus that excites. The masker has to excite the nerve fiber and the hair cell. And the probe-- the probe is always exciting it. So that is a second explanation for masking. Now in this case, the olivocochlear system can actually help you with adaptive masking. How can it help you? Well, when you start to hear this noise, you call that olivocochlear system into play. And it acts. What does it act to do? It decreases the firing of the auditory nerve. Then when the tone comes along, you have plenty of neurotransmitter left in the hair cell because you haven't released it all. And you have at least some to be released in response to the tone. So in this case, the third function of the olivocochlear system is to reduce the effects of masking. And we should have said back here when I listed the functions: reduce the effects of noise masking, especially adaptive noise masking. The kind where the masker excites the fiber, unlike suppressive masking, where the masker reduces the response. There are certainly important and reliable studies where animals in which the olivocochlear bundle has been cut have much more of a problem with noise masking. They cannot detect signals that are buried in a noise masker as well as animals with an intact olivocochlear bundle. So clearly, that is a viable function. Now finally, and this is pretty important for our course here, there's been this fourth idea of what the olivocochlear reflex could do. And it's always been a little bit wishy-washy because there hasn't been really good experimental evidence. Until this paper-- Delano et al., 2007-- the paper that we have listed for reading today. And this paper clearly shows that another function of the olivocochlear system is that when you're paying attention to something that is not auditory, like a visual stimulus, the olivocochlear system acts and reduces your sense of hearing. You're not using hearing for whatever the task is, so you desensitize your hearing and you pay attention to the visual system. OK, so how is this going to work? This paper trained chinchillas as the experimental animal. And the task was to pay attention to lights. So let me get my pointer here so I can point a little better. So this is the task in part A here. And this neutral cue is a light that's straight ahead of the experimental animal. And that goes on and says to the animal that the trial is starting. Then, that goes off and one of two targets appears. Either a left target or a right target. And these are little spots of light to the animal's left or to the right. The task then is, if the left target went on, the animal goes and presses the left lever. And if it does that correctly, it gets a food reward. If the light went on on the right-- the right target was illuminated-- the right lever is supposed to be pressed. And if the animal does that correctly, then it gets a food reward.
If it does nothing or if it presses the wrong lever, it's punished by a timeout. And the animals are food deprived, so they're motivated to do this task. That's the visual task. Now, on top of that there are some auditory stimuli presented. And they're not relevant to the task. They're just ongoing all the time. And here then in B are the target and the neutral cue lights and the response period. And the auditory stimulus going on all the time is a click or a tone burst. It's just going on all the time. And they're making some recordings from the auditory system. In this case, they're making a recording from the round window of the cochlea. And if you had a microscope, you could see some little blips right here in response to the clicks. And they're pretty big here. But as time goes on, they get smaller. And that's plotted right here. The response they're measuring is called the CAP, and that's the compound action potential. And if you are astute here, action potential is another word for "spikes" or "impulses." But they're not measuring from one single fiber. They're putting a big electrode on the round window of the cochlea and measuring the summed, or compound, action potential from the whole auditory nerve. It's just a convenient place to do it. You can do it in an awake animal. You can do it in an awake, behaving animal like we have here. When you turn on a click, almost all the auditory nerve fibers fire synchronously and you get a big response. You hardly have to do any averaging at all. OK, so that's the response, the compound action potential. And this graph plots the CAP amplitude-- how big it is. Upward on the graph is a big amplitude and lower is a diminished amplitude. And right here at zero-- they just call it zero because that's sort of the baseline before the trial even starts. So this first bar here is the neutral cue. That's the thing in the middle that says to the animal, the task is starting. And right away during that neutral cue, these black dots show you that the compound action potential is decreasing. Then, the neutral cue goes off and one of the targets goes on. Either the left or the right. And the compound action potential further decreases. Then, the animal makes its response here at this dashed line, gets its food reward, and the CAP comes back up. OK, now how do we explain this? Well, the CAP is the summed firing of many auditory nerve fibers. Just as stimulation of the OC neurons decreased the firing rate in the earlier experiment, here essentially the animal itself has turned on its OC neurons and the firing rate-- or the summed response, in this case-- has gone down. The animal has said to the olivocochlear system, I'm starting a visual task. I don't want to pay attention to extraneous things, like sounds. So I'm going to decrease my sensitivity in hearing by activating these olivocochlear neurons. Now, pay attention to the important targets, which are visual. Now, there are some other symbols in here that are open. They're a little bit harder to see. But those are some CAP amplitudes from other trials in these same animals. And for one reason or another, the chinchillas didn't always do the task. Every now and then they would just not pay attention to it. They wouldn't press the lever. They wouldn't respond at all. As if they were sort of spaced out, thinking about something else, if you will. And those cases are timeout trials. Let's see, what did they call them? Ah, omissions. This little symbol right here is omissions.
So it's an omission trial where the animal didn't even do the task. In that case, the olivocochlear system is apparently not called into play at all. The animal is just not doing the task. Now, the investigators were very astute and they said, we're going to make the task a little more difficult for the animals and see if we get a bigger effect here. And the way to make the task difficult is to make the targets brief. So instead of the target going on for a couple of seconds, the target went on for just a half a second. And here's a trial with just a half a second. And there was a big decrease in the auditory response. If the target went on for a long time, there was less of a decrease. And this is the effect plotted as a function of target duration. Brief targets made the job harder for the chinchillas to do. So this is clearly, at least to me, some evidence that you call the olivocochlear system into play when you're doing a visual task. So maybe as humans, we do this when we're trying to read. We're concentrating on the book, a visual stimulus. And our neighbor's music is going on. It's not relevant to what we're doing. Maybe we're listening to it subconsciously, but we decrease the response to that auditory stimulus because it's not important to the task. So clearly then, this is good evidence from this single study for the last function of the olivocochlear system, which is that you reduce your hearing sensitivity when attending to visual, or perhaps other modality, tasks. So those are the four basic functions for the olivocochlear system. Now, before I leave the olivocochlear system, let's do a little review here on exactly how it's acting. So we said we're activating these systems going out to the hair cells and we're reducing the responses of the auditory nerve fibers. Something I haven't told you, but which was observed maybe 20 years after the phenomenon was first discovered, is which of these systems is actually being used when we electrically stimulate the bundle and activate it. It turns out that the medial olivocochlear neurons have big, fat myelinated axons. And you can maybe appreciate that from this drawing, where these are very thick lines. The lateral olivocochlear neurons have very thin axons. And one effect of that, besides how fast they conduct impulses, is that when you stimulate, for example, at this position here in the brainstem, it takes a huge amount of stimulating current to activate very thin nerve fibers like the lateral olivocochlear neurons. If you apply a huge stimulating current here, the current spreads everywhere, including to the facial nerve, which causes the experimental animal to twitch. So that's almost never done. More moderate levels of stimulating current are applied here. And at those levels, the predominant system activated is the medial olivocochlear neurons. Those MOC neurons send their fibers out to the outer hair cells. And we saw that when they act on the outer hair cells, the responses of the nerve fibers diminish. As a review, we should be able to account for that. OK, we are stimulating nerve fibers that go out to the outer hair cells here. And we're recording the responses from the auditory nerve fibers right here. Of course, these olivocochlear neurons are sending messages out to the periphery. And the auditory nerve fibers are sending messages into the brain. Somebody explain to me what's happening here. What are we doing to the cochlea to cause these responses to go down when we apply this stimulation? Anybody?
We're affecting the outer hair cells, right? What are the outer hair cells? The cochlear amplifier, right? It turns out that releasing the acetylcholine onto the outer hair cells decreases their electromotility. So in effect, it decreases the gain of the cochlear amplifier. If the cochlea is less amplified, the inner hair cell stereocilia are going to be bent less. They're going to release less neurotransmitter to the associated auditory nerve fibers. And we're going to measure less of a response. That is how the olivocochlear neurons-- those that go to the outer hair cells-- affect the responses of the auditory nerve fibers. So this decrease in firing rate is a manifestation of turning down the gain of the cochlear amplifier. It was quite a mystery when this innervation was first worked out, before the outer hair cells were really known to be the amplifier. How could stimulation of fibers going to the outer hair cells give you decreases in responses of fibers coming from the inner hair cells? Now that the outer hair cells are clearly associated with the cochlear amplifier, it's clear. Now, what was the loss of sensitivity when the outer hair cells were completely gone or when their prestin was knocked out? Anybody? What was the loss of sensitivity? 40 to 60 dB, right? So if you eliminated the cochlear amplifier, you had a loss of 40 to 60 dB. What kind of effect do we have from stimulation of the olivocochlear neurons? Well, the length of this arrow, when brought down to the x-axis, shows you how much sensitivity is lost. You could overcome this by dialing in an increase of sound pressure level to make up for it. You could titrate the effect of decreasing the gain by increasing the sound stimulus. And the length of that arrow is about 25 dB. So instead of eliminating the cochlear amplifier, you just reduce its gain by the action of these olivocochlear neurons. The effect is about 25 dB. That's a fairly strong effect, but you're not completely ridding the cochlea of the cochlear amplifier. You're decreasing its gain. OK, so that's kind of a review of outer hair cell function and how these olivocochlear neurons affect the outer hair cells. So you really have a controllable cochlear amplifier, controlled by these olivocochlear neurons, or efferent neurons, or descending neurons coming from the brain out to the cochlea. Any questions about that? OK. If there aren't, then I'll go on to another reflex pathway. Maybe I actually do have one more thing to talk about. Right. Let me just mention that this reflex pathway is being teased out by current studies. And it can be a little bit complicated, so let me show you what people are working on now in the olivocochlear reflex. Here, I think I've flipped the slide on you, so that this slide shows the MOC neurons going to the left cochlea. OK. That's the ipsilateral cochlea. How do these MOC neurons get their inputs? Where do they come from? OK. Well, the red and blue are the olivocochlear neurons going out to the cochlea, and the purple are their inputs. And all of the inputs have to, of course, use the cochlea. To get these olivocochlear neurons to fire in response to sound, you have to activate the cochlea. You activate the auditory nerve. The auditory nerve then goes into the cochlear nucleus. And that's where the limits of our understanding are sort of-- we are approaching the limits of our understanding.
What are the cochlear nucleus neurons that are the so-called MOC reflex interneurons, drawn in purple here? When we talked about the cochlear nucleus, we identified a bunch of different types of neurons. We had spherical cells and globular cells-- both of those are known as bushy cells. We had stellate cells. We had pyramidal cells. We had octopus cells. Which of those provide the inputs for the MOC neurons? And that's assuming a direct pathway, which is supported by some other experimental evidence that I won't show you. At least one group of cochlear nucleus neurons projects to the MOC neurons. And how do I know that? Well, you can record from the MOC neurons and turn a sound on. And in less than 5 milliseconds, you can get a response. So there isn't enough time for the reflex pathway to go up to the auditory cortex, which takes about 4 or 5 synapses. To go around there and start coming down takes 10 or 20 milliseconds. You get a response here very quickly. And so there isn't enough time for anything except the cochlear nucleus to project directly there. So which of those cochlear nucleus neurons project? Well, we don't know absolutely for sure. But it looks like, from lesion studies, that it's probably the stellate cells in the cochlear nucleus. And especially the stellate cells in the part of the cochlear nucleus called the PVCN. I think we talked about the ventral cochlear nucleus and the dorsal cochlear nucleus as being the big main divisions. In the VCN, there are two subdivisions, AVCN and PVCN. And the stellate cells that seem to be the MOC reflex interneurons are in the PVCN. How do we know that? You can make lesions in various parts of the cochlear nucleus. If you lesion the DCN, this reflex goes along fine. If you lesion the AVCN, it goes along fine. But if you make a lesion in the PVCN, the reflex is interrupted. In the PVCN, there are a number of types of neurons. The ones that seem to have the right characteristics in terms of latency, sustainability of response, and tuning are the stellate cells. So it's a little bit of a squishy argument. Stellate cells seem to project the right way. So most of the evidence is behind the idea that those particular cochlear nucleus cells are the ones that are the reflex interneurons. Now, another point of this slide is that the MOC reflex pathway is consensually organized. So what does consensual mean? What does consensual mean in terms of a reflex? Not in terms of consensual sex-- in terms of reflexes, or brainstem organization. How does the detective in the old 1950s black-and-white movie come up to the victim who's lying down, shine a flashlight in the eye, and decide that the person is really dead? The pupil constricts if the person's still alive, right? The brainstem is still working. It's a very quick test. You don't have to have a stethoscope and listen to the heartbeat. You can be brain dead and still have a heartbeat. But if you shine light in a person's eye, the pupil constricts, right? So that is a brainstem reflex, the pupillary constriction reflex. It turns out if you shine light in the right eye, what happens to your left pupil? It constricts, right. So both pupils constrict from just a stimulus in one eye. And that is called consensual. It means in agreement. The left and the right side do the same thing, in agreement. So most of these brainstem reflexes are consensually organized. And it turns out that you can look at the MOC reflex in your left ear.
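[The latency argument can be made concrete with back-of-the-envelope numbers. The ~1 ms per synapse and the per-leg conduction delay below are my own rough assumptions, chosen only to show why a sub-5-millisecond MOC response rules out a loop through cortex.]

    # Rough reflex latency budget: roughly 1 ms per synapse plus conduction time.
    SYNAPSE_MS = 1.0      # assumed synaptic delay
    CONDUCTION_MS = 0.5   # assumed conduction time per projection leg

    def loop_latency_ms(n_synapses):
        return n_synapses * (SYNAPSE_MS + CONDUCTION_MS)

    direct = loop_latency_ms(3)       # hair cell -> auditory nerve ->
                                      # cochlear nucleus -> MOC neuron
    via_cortex = loop_latency_ms(9)   # 4-5 synapses up to cortex, then back down

    print(direct)       # ~4.5 ms: compatible with the observed <5 ms response
    print(via_cortex)   # ~13.5 ms: too slow, so cortex can't be in the loop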
And obviously, if you put sound in there, that reflex is going to take place. But you can also elicit that reflex by sound in the right ear. Sound in either ear can activate the reflex, though not quite as well from the opposite ear. Labeling studies have shown that some of the MOC neurons are responsive to sound in the ipsilateral cochlea-- that is, the cochlea that they project to-- and about half as many are responsive to sound in the other ear, the so-called contralateral cochlea. Those are the so-called contra-response MOC neurons, in red here. Labeling studies have shown that these contra-response neurons are sitting on the same side of the brain as the cochlea that they innervate. But the ipsi-response neurons are located on the opposite side of the brain [INAUDIBLE]. And knowing those response characteristics, you can draw the purple pathway that's necessary to drive them. These contra-response neurons respond to sound over here, so they must get this purple arrow. These ipsi-response neurons are responsive to sound here, so they must get this purple pathway. And it's kind of complicated, but that's the kind of study that people do now on these brainstem reflexes. And I want to just say one more thing. People are now working on where these neurons go in the cochlea. So these are some tuning curves from the MOC neurons. They show sharp tuning just like the auditory nerve. And where do they go? Well, they have a very nice, so-called tonotopic projection into the cochlea. That is, an MOC neuron with a high CF goes and innervates a very basal cochlear location, and an MOC neuron with a low CF goes and innervates closer to the apex. And this black line shows the mapping for auditory nerve fibers. Those are the ones coming out of the cochlea. And the colored dots show the mapping for the fibers going back to the cochlea. Everything, remember, in the auditory system is tonotopically organized. So it's no surprise that that would happen. But it opens up the idea that maybe, if you're interested in turning off your sense of hearing just for low frequencies, you could activate these fibers that go out to just the apical part of the cochlea, and leave the basal part alone. Let's say you're very interested in listening to high-frequency sounds. Leave the basal part intact. Or vice versa. You could control the cochlea in a frequency-band-by-frequency-band manner. The anatomical substrate is laid there for that type of manipulation. You don't have to shut down the whole sense of hearing. OK, so let's move on to the second system that I want to talk about. That is, the brainstem reflexes associated with the stapedius and tensor tympani middle-ear muscles. These are two muscles that are in the middle ear. Remember, the middle ear is the part of the ear that starts at the eardrum and ends at the cochlea. There are two middle ear muscles. There's the tensor tympani, shaded in brown here. And there's the stapedius, shaded in red here. The stapedius tugs on the stapes. And the tensor tympani tugs on the malleus. But it gets its name because when it contracts-- if you're looking at the eardrum through an otoscope-- it looks like the tympanic membrane gets really tense. It looks a little flaccid before it contracts, and then it gets real tense. So it's the tensor tympani. And what happens when those muscles contract? That's shown in this next slide. They reduce the sound transmission through the middle ear. This is simply a graph of contraction of the stapedius muscle. This is during the contraction.
This is the magnitude change. So it looks like sound doesn't get through the middle ear anywhere near as well as it normally does. And you have about a 25 dB decrease in sensitivity. Simply, what that means is that usually the eardrum moves and these bones move and they convey the sound efficiently into the inner ear. But if you tug on the muscles and contract them, then the bones don't vibrate as easily. And the sound doesn't get into the inner ear. So this is again, like the olivocochlear system, an inhibitory system, in that when it's activated it reduces your sensitivity of hearing. Now, how are these muscles controlled? Well, they're controlled by neurons coming from the brain. But in the case of muscles, of course, the motor neurons are always on the same side of the brain as the muscles they innervate. So the left stapedius muscle is controlled by motor neurons on the left side of your brain. And the stapedius muscle motor neurons run in the seventh cranial nerve. And what's the seventh cranial nerve? Anybody? We have the eighth cranial nerve, which is auditory and vestibular. What's the seventh? Anybody? AUDIENCE: Facial. PROFESSOR: Facial, right. So the stapedius axons go in that cranial nerve. The stapedius motor neuron axons, I should say. And for the tensor tympani, it's the fifth cranial nerve. So some very interesting experiments have been done in people who have a compromise of their seventh cranial nerve. Does anybody know what Bell's palsy is? Does anybody know what a palsy is? Palsy means your motor neurons aren't working so well. So what happens? The seventh cranial nerve innervates the facial musculature. So if you have a problem-- a cut seventh nerve, or a Bell's palsy, which is a viral infection of the seventh cranial nerve on one side-- one side of your facial features droops, because the muscles there, which ordinarily keep your face in good control, aren't working anymore. So a person with a Bell's palsy has a droopy face on that side. The other side is fine because the other seventh cranial nerve is working just fine. Or it usually is. It's very often the case that it's unilateral. In Scandinavia, where you can do experiments on humans-- or you could at least do experiments on humans to a much greater extent than in the United States-- what some enterprising researchers did was take people who had Bell's palsy and say, well, you can go and work at your ordinary job in the automobile factory, where it's really loud. They tested their hearing before they went to work and they tested their hearing at the end of the work day. They tested it in the left ear and in the right ear. Let's say the right ear was the palsied ear. In the left ear, which was normal, the person came out at the end of the workday and their hearing was just right. On the palsied side, at the end of the day they had a temporary threshold shift. That means that if their hearing was down at 0 dB at the beginning of the day, their thresholds were elevated at the end of the day. And hopefully, it's temporary. That is, if you sleep all night and things recover, you're back to normal the next day. But what had happened-- this is a beautiful experiment because it's controlled-- left-right in the same individual. So whatever that person did-- took drugs, or got infections, or whatever-- was hopefully bilateral. It's a human, so you can test thresholds of hearing very well.
Presumably then, the stapedius muscle, which was not working on the palsied side, couldn't protect the ear from the high levels of sound in the work environment. And there was damage-- hopefully temporary-- to that person's hearing. So clearly, the contraction of these muscles, which makes sound not go through the middle ear as well, is of beneficial effect. For example, they protect the cochlea from damage. People who are at the NFL games are having their middle ear muscles contract. Certainly the stapedius, and probably the tensor tympani. They probably also reduce the effects of noise masking. And here there is a little twist in terms of the spectrum over which these muscles act. It turns out that when the muscles contract, both the stapedius and the tensor tympani, they affect low frequencies to a much greater extent than highs. For example, here you have a reduction in transmission through the middle ear of 25 dB at the lowest frequency, and 0-- or actually even improved transmission-- at the high frequencies. And if you've ever been in a car and accelerated to get on the highway, you know the low rumbling sound of the car means you have to turn the radio up when you want to enjoy your music, because the low rumbling of the car-- these low frequencies-- tends to mask the interesting high frequencies or mid-frequencies of the music that you're trying to listen to on the radio. So low frequencies tend to mask mids and highs very effectively. And so if you decrease the transmission for the low frequencies, you reduce the effects of noise masking, especially by low-frequency noises, as sketched below. Now finally, an interesting thing that is clearly known about the middle ear muscles is that they contract just before and during when you speak. "During you speak," OK? I don't know who wrote that. But anyway, when I speak, I'm contracting my middle ear muscles. And the idea there is perhaps that when you're speaking, you don't want to listen to yourself. And if you're speaking in a loud voice, like 80 dB-- I'm trying to project here-- or if I'm yelling at some family members, I don't want to desensitize my sense of hearing or damage it. So I contract my own middle ear muscles to prevent self-stimulation. After I finish my speaking or vocalization, my muscles stop their contraction and go back to normal. And then my sense of hearing is very acute because it hasn't been damaged by my own vocalization. And later on in the course, we'll talk about echolocation in bats. Bats send out this pulse of sound, which is their vocalization. And they listen for an echo. The pulse of sound can be 120 dB. They're really screaming, because their targets, the insects that they hunt, are very, very small. And there's not much physical surface for the sound to reflect off. So the reflected sound is very small. And so they maximize that reflection by emitting a very high-level acoustic pulse. And they don't want to damage or desensitize their hearing because they're listening for very soft reflections, the echoes. So bats clearly tense their middle ear muscles right before they make this echolocating pulse. We don't know if that's the case for the olivocochlear neurons. It just hasn't been investigated. It's very hard to measure their effects during vocalizations. But perhaps they do. Certainly, the middle ear muscles do. OK, that's all I wanted to say, so we're sort of out of time. Any questions? And I want to make one announcement about Wednesday's class.
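[One way to picture the frequency dependence of the middle ear muscle reflex is as a simple low-cut filter ahead of the cochlea. This sketch is purely illustrative: the 25 dB low-frequency attenuation tapering to about 0 dB at high frequencies matches the shape described above, but the corner frequency and the interpolation are my own assumptions.]

    import math

    def mem_attenuation_db(freq_hz, max_atten=25.0, corner_hz=1000.0):
        """Toy middle-ear-muscle reflex: full attenuation well below the
        corner frequency, essentially none well above it."""
        x = math.log10(freq_hz / corner_hz)
        frac = 1.0 / (1.0 + math.exp(4.0 * x))  # ~1 at low f, ~0 at high f
        return max_atten * frac

    for f in (125, 250, 500, 1000, 2000, 4000, 8000):
        print(f, round(mem_attenuation_db(f), 1))
    # Low rumble is cut by ~25 dB while the mids and highs carrying speech
    # or music pass through, reducing masking by low-frequency noise.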
So in Wednesday's class we're going to be talking about sound localization, and listening to sounds that differ in interaural timing and level differences. And so the usual demonstrations that I play aren't going to work very well, because we want to manipulate just one of those cues. So we're going to be listening in headphones. I have some great demos. So please, if you can, bring some headphones or earbuds. And maybe download the demos from the course website. I have a few players that I can also circulate around. But if some people have the demos on their laptops, that would be more convenient. OK? See you on Wednesday.
MIT_904_Sensory_Systems_Fall_2013
16_Auditory_nerve_psychophysics_of_frequency_resolution.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Last time we were talking about the hair cells. And there's a picture of a hair cell here. And what did we do about the hair cells? We talked about the two types: inner hair cells-- plain old, ordinary receptor cells. And we talked about outer hair cells, which have this wonderful characteristic of electromotility. When their hair bundle is bent back and forth, their internal potential changes. When they're depolarized, the cell shortens. And somehow this mechanical shortening adds to the vibrations of the organ of Corti and basilar membrane that were set up by sound. And it amplifies those vibrations so that the inner hair cells are then responding to an amplified vibration. And the outer hair cells are thus dubbed by the name of the cochlear amplifier. Without the amplification, you lose 40 to 60 dB of hearing, which is a big amount. A large hearing loss would result if you didn't have outer hair cells or their electromotility, which was shown by deleting the gene for prestin and testing the knockout mouse, which had a 40 to 60 dB hearing loss. So questions about that? Before we talked about the two hair cells, we talked about vibration in the cochlea and what a tuning curve is. In that case it was tuning of the basilar membrane at a particular point along the cochlea-- so you could bring your measurement device to one particular place and measure its tuning in response to sounds of different frequencies, and show that a single place vibrates very nicely to a particular frequency. That is, you don't have to send very much sound into the ear. But if you go off that particular frequency, you have to boost the sound a lot to get that one place to vibrate. And we had the example of the vibration patterns: low frequencies stimulated the cochlear apex the most, way up near the top of the snail shell. Middle frequencies stimulated the middle. And high frequencies stimulated the basal part. We'll be talking a lot more about that frequency organization along the cochlea today, when we talk about auditory nerve fibers. So here's a roadmap for today. We're going to concentrate on the auditory nerve. And I just put down some numbers so I wouldn't forget to tell you how many auditory nerve fibers there are. There are approximately 30,000 auditory nerve fibers in humans. So that means in your left ear you have 30,000. And in your right ear you have 30,000 sending messages from the ear to the brain. So that's a pretty hefty number, right? How many optic nerve fibers do you have, or does a primate have? I'm sure Dr. Schiller went over that number. We're pretty visual animals. So our sense of vision is well developed. So how many nerve fibers go from the retina into the brain compared to this number? Anybody remember? Well, that's a good number to remember. It turns out there are about 1 million optic nerve fibers from the retina into the brain. And here we have 30,000. So which is the most important sense, vision or audition? Or which sense conveys messages more efficiently, should we say? Well, obviously, primates are very visual animals. So we have a lot more nerve fibers sending messages into the brain about vision than we do about audition.
So I may not have given you numbers for the hair cells. In humans we have about 3,500 inner hair cells and about 12,000 outer hair cells per cochlea. OK, so those are the numbers. So today, we'll talk about the two types of nerve fibers. As we have two types of hair cells, we have two types of nerve fibers. We'll talk about tuning curves, now for the responses of auditory nerve fibers. And we'll talk about tonotopic organization. That is, the organization of frequency to place within the cochlea, which is one of the codes for sound frequency. How do we know we're listening to 1,000 Hertz and not 2,000 Hertz? By which place along the cochlea and which group of auditory nerve fibers is responding. Then we'll get away from the auditory nerve and have some listening demonstrations. We'll see how good we are at discriminating two frequencies that are very close together. And we'll talk about some tuning curves that are based on psychophysical measures-- that is, listening. You can take tuning curves from just a human listener. Then we'll get back to the auditory nerve and talk about a different code for sound frequency. That is the temporal code for sound frequency, which involves a phenomenon called phase locking of the auditory nerve. Then we'll talk about how that's very important when you listen to musical intervals. And the most important musical interval is the octave, so we'll have a demonstration of an octave. OK, so one of my problems is that as I pass by the MIT Coop on the way to class, I always buy something. So I did a reading last week, and we'll have a little reading today from this book, I Am Malala. She was the girl who was shot and recovered and was a candidate for the Nobel Peace Prize. Maybe next year she'll get the Peace Prize. I haven't read this. I just picked it up a few minutes ago. But I went straight to the section about her surgery. So she was shot in the head on one side. And she said, "While I was in surgery--" this is after recovery. This is a further surgery she underwent. "While I was in surgery, Mr. Irving, the surgeon who had repaired my nerve--" that's her facial nerve-- "also had a solution for my damaged left eardrum. He put a small electronic device called a cochlear implant inside my head near the ear, and told me that in a month they would fit the external part on my head. And then I should be able to hear from that ear." OK, so the cochlear implant is a device that stimulates the auditory nerve fibers. And in a person who's had a gunshot wound-- either because of the loud sound or the mechanical trauma to the ear or temporal bone-- possibly the hair cells are damaged or completely missing. And the auditory nerve fibers remain. The person is deaf without the hair cells. But the device called the cochlear implant can be inserted inside this person's cochlea to stimulate the auditory nerve. And we'll have a discussion of the cochlear implant next week when we have a demonstrator come to class who's deaf. And she'll show you her implant. But we do need to know a lot about the auditory nerve response before we can really think about what a good coding strategy for a cochlear implant is. That is, how do we take the sound information and translate it into the shocks that are provided by the cochlear implant electrodes that stimulate the nerve fibers? Because little electric currents in the cochlear implant are made to stimulate the auditory nerve fibers, which can then send messages to the brain.
So it's just a little motivator for what's important about the auditory nerve code. So we'll start out today with the hair cells. And these are the auditory nerve fibers here. One thing that's interesting about vision and audition is the look of the synapse between the hair cell and the nerve fiber, and between the photoreceptor and its associated nerve terminal-- you have rods and cones in the retina, and they have associated nerve terminals. And these are electron micrographs, taken with a very high powered electron microscope, that look at the synapse between the photoreceptor up here, or the hair cell up here, and the associated nerve terminal down here-- the associated horizontal cell or bipolar cell in the case of the retina. So in each case, the synapse here is a little gap. And you have synaptic vesicles that contain the neurotransmitter. And they're indicated here in the photoreceptor. SV is the vesicle. And inside that vesicle is the neurotransmitter. When the receptor cell depolarizes, these synaptic vesicles fuse and release their neurotransmitter into the cleft and fire, or activate, their post-synaptic element-- in the case of the hair cell, the auditory nerve fiber. This structure here is called the synaptic ribbon. And it's supposed to coordinate the release of the vesicles. And they call it a ribbon in the hair cell here, even though it looks like a big round ball. It doesn't look like a ribbon at all. But it's called a ribbon because it has the same molecular basis. It has a lot of interesting proteins and mechanisms to coordinate the release of these neurotransmitter vesicles, which presumably are synthesized up here in the cytoplasm and are brought down to the ribbon and coordinated and released at the hair cell-to-nerve fiber synapse. So I just wanted to show you the look of the synapse in the electron microscope. So that's what it looks like. And the next slide here is this schematic of the two types of hair cells, inner hair cells and the three rows of outer hair cells, and their associated nerve fibers. And I think I mentioned last time that almost all of the nerve fibers-- the ones that are sending messages to the brain, at least-- are associated with the inner hair cells. So you can see how many individual terminals there are-- as many as 20 on a single inner hair cell. By contrast, for the outer hair cells-- you can see, well, this one has three of them. But they're all coming from the same fiber, which also innervates the neighboring hair cells. So there are very few of these so-called type two auditory nerve fibers. Here are the numbers. So this total is in cats. Cats have more nerve fibers than humans, a total of maybe 50,000. About 45,000 of them are the type ones, associated with inner hair cells, and only 5,000 are the type twos, associated with outer hair cells. So you can see by this ratio that most of the information is being sent into the brain by the type one fibers, sending messages from the inner hair cells. The axons of the type one fibers are thick. They have a myelin covering, compared to the type two fibers, which are very thin and unmyelinated. And actually, one of the very interesting unknown facts about the auditory system is that, as far as we know, no recordings have ever been made to sample the type two responses to sound. Do they respond to different frequencies? Are they widely tuned? Narrowly tuned? We don't know that at all. And it turns out that it's just very difficult to sample from such thin axons as you find in the type two fibers.
So I actually have a grant submitted to the National Institutes of Health to use a special type of electrode to record from the type twos. I think it's being reviewed next week. And I hope it gets funded, because then maybe I'll figure out this mystery. But it will be challenging, not only because they're thin, but because there are fewer of them. So when I talk about auditory nerve fiber recordings for this class, I'm going to be talking about the type ones. That's the only kind we know of. And here is an example tuning curve, or receptive field, for a type one auditory nerve fiber. Now, I think Peter Schiller probably talked about single unit recordings with microelectrodes. So you have your nerve. It could be the optic nerve. It could be the auditory nerve, which is what we're talking about. You have a microelectrode, which is put into the nerve. And the tip of the microelectrode is very, very tiny. It could be less than 1 micrometer in diameter. And usually the electrode is filled with a conducting solution like potassium chloride. And the pipette that's filled with the KCl comes out to a big open end. And you can stick a wire in here and run it to your amplifier and record the so-called spikes. You guys talked about spikes, right? So you're recording the spikes, AKA action potentials, AKA impulses. And if you want to do this in a dramatic way, you send this signal also to a loudspeaker and you listen to them. And maybe we'll have a demonstration at the end of the year on these. It's pretty nice to listen to that. So you put your electrode in there. And you move it around until you have what's called a single unit. And why is it called a single unit? Well, in the old days, people didn't know what was being recorded. Is it a cell body? Is it a nerve axon? Is it the dendrite? What is it? All they knew is that, coming out of the amplifier, they saw this spike. And that's what's plotted here. These are a bunch of spikes. And it's called a single unit because most of the time, when you get one of these recordings, the spikes look all the same. But every now and then you get a recording that looks like this. And this is interpreted as being fiber or axon number one. Here's another number one. And this is a second fiber that's nearby. But it's a different one. Maybe there were actually two fibers right next to each other. And you could record both of them. That's very unusual. More commonly, you just have a recording from one single unit. And the interpretation is you are sampling from just one auditory nerve fiber out of a total of 40,000. Is that clear? So such experiments are done in the auditory nerve. In this case, I think the experimental animal was a guinea pig. And in this case, it's recordings from a chinchilla auditory nerve. So what's the stimulus? Well, this is a plot of sound frequency, sound frequency in kilohertz. And this axis, the y-axis, is sound pressure level. So this is how loud it is, if you will. And at a very low or soft tone level, if this frequency is swept from low to high frequencies, there were hardly any spikes coming from that single unit. But if you boosted the level up a little bit and came to a frequency of about 10 kilohertz, there were a bunch of spikes produced by that single unit. Then if you boosted the level up so it was a moderate level, there were spikes anywhere from 8 kilohertz up to 11 kilohertz. All that band of frequencies caused a response.
Then at the highest level, everything caused a response, from the lowest frequencies up to about 12 kilohertz, and nothing above. What's this activity out here? I said nothing above and nothing over here. Well, there's some spontaneous firing. So even if you turn the sound completely off, these nerve fibers have a little bit of activity. They fire some impulses. There's an ongoing thing. If you outlined this response area with a line-- that line is the border, say, between spontaneous firing, or no firing, and a response. So inside of the receptive area there's a response. And outside there's nothing. Those lines are called tuning curves. And here are a bunch of tuning curves from a chinchilla. And there are one, two, three, four, five, six different tuning curves. So what the experiment did was they moved the electrode in and got one single unit. And then they moved the electrode, let's say deeper into the nerve. And now they sampled a different neuron, a different single unit. OK, maybe got this tuning curve. Then they went deeper and sampled from this one and this one and this one and this one. And the idea that it's a different one-- well, the response is different. But also, as you moved the electrode, you lost the single unit, number one. And you've maybe put it deeper, a millimeter or so. That's a huge distance. And you've got a new unit. The action potentials probably look different. That's a second one. OK, so these are tuning curves, then, from six different single units. And each of them comes down to a pretty nice tip. And if you take that tip-- the very lowest sound level that caused a response-- and extrapolate that to the x-axis, you get a frequency. And that frequency is called the CF, or characteristic frequency. OK, so CF is a very important term. You should know that the CF is the very tip of the tuning curve. And the CF is different from frequency. Frequency is whatever you want to dial in with your sound oscillator. But CF is a particular characteristic of a neuron, in this case an auditory nerve fiber, that you're recording from. And it's a characteristic that it has, that you measured from it. Many of these tuning curves, in addition to having a CF and a so-called tip region, also have a tail. And in this very high CF neuron, the tail goes like this. And then there's actually, I think, something that I dashed in here, a dashed line here. And the tail continues way down here. The experimenters didn't want to boost the sound level to get all of the tail above 80 dB because of possible damage. If you crank up too much sound-- just like a gunshot to the head is a very loud sound-- it can cause damage to the hair cells. They didn't want to do that. But you could see the tail of this response area. It's a nice tip and a nice tail. OK, now, right away we have a beautiful potential code for sound frequency. How do I know I'm listening to 8 kilohertz? Well, this nerve fiber responds very nicely, lots of action potentials. How do I know I'm listening to 1 kilohertz? Well, that same nerve fiber might respond. But I have to get the sound level to a very loud level, like 80 dB SPL. But these other guys over here, with CFs of 1 kilohertz, would respond at a very low sound level. So then we have a code: which fiber is responding tells you which frequency you're listening to. It's very important. You judge an instrument, like a violin, by its combination of frequencies. A guitar has a different combination of frequencies.
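In code, reading the CF off a measured tuning curve is a one-liner: it is the frequency at which the threshold is lowest. A minimal sketch, with invented threshold numbers shaped like the roughly 10 kilohertz fiber just described:

    # The CF is the frequency at the tip of the tuning curve, i.e., where
    # the threshold is minimal. These threshold numbers are invented.
    import numpy as np

    freqs_khz = np.array([1, 2, 4, 6, 8, 9, 10, 11, 12])
    thresholds_db = np.array([75, 70, 65, 55, 30, 15, 5, 20, 60])  # dB SPL

    cf_khz = freqs_khz[np.argmin(thresholds_db)]
    print("CF =", cf_khz, "kHz, threshold", thresholds_db.min(), "dB SPL")
    # -> CF = 10 kHz: responds at 5 dB SPL there, but needs ~75 dB at
    #    1 kHz -- the tip-versus-tail contrast described above.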
Similarly, male speakers generally have deeper voices than female speakers-- deeper meaning more low frequencies. Female and children's voices are higher in frequency. So frequency is essential for you to identify what sound stimulus you are listening to. Why do we call this a place code for sound frequency? Well, as we talked about before, different parts of the cochlea respond to different frequencies. Here is a beautiful example of the place map for auditory nerve fibers. And in this case, microelectrode recordings are done as we described before. But instead of just a plain old potassium chloride solution in the microelectrode, it's filled with a substance called a neural tracer. What are examples of neural tracers? Has anybody played around with neural tracers before? Give me some examples of chemicals that are neural tracers. Anybody? This one has a funny name, horseradish peroxidase, abbreviated HRP. Another one is biocytin. OK, biotinylated dextran amine, BDA. There are millions of them-- Lucifer yellow. You can tell that I'm a tracer kind of guy. I use tracers all the time in my experiments. So what you do with these neural tracers-- it's convenient if they are charged. For example, horseradish peroxidase-- this has a positive charge. And you can apply positive current to the pipette up here. That's going to tend to force positive charge out the tip of the electrode. You can expel a positive ion out the tip by this technique, which is called iontophoresis. And if it happens that your tip is close to, or ideally inside, an axon, some of that HRP is going to come out the tip of the electrode and go into the axon. And why did we pick HRP? Because it's picked up by chemical transport systems that transport things along axons. And there are several of these. There's fast axonal transport. There's slow. There's medium. There's a whole bunch of systems, because this axon is coming from a cell here. It's connected to the cell body. In the cell body you make things like neurotransmitter, because that's where you can make protein. And that neurotransmitter has to get down to the tip of the axon, which in the case of the auditory nerve is in the cochlear nucleus of the brain. So there are all these transport systems transporting things. And it just turns out that some chemicals are picked up by them. HRP is one of them. When you iontophorese HRP into a nerve fiber, it's transported to all parts of the nerve fiber, including to the cell body and including out to the tip of the nerve fiber on the hair cell. So here is an example of iontophoretically labeled nerve fibers. And there are five or six of them. The recording site was here in the auditory nerve. This is a diagram of the cochlea. This is the so-called Schwann-glial border, which defines the periphery and the brain. So this would be the brain. So these are the nerve fibers going into the brain. They were recorded in the auditory nerve. And you can trace them out into the periphery. Right here is the cell body of the auditory nerve fiber. Every neuron has a cell body. Most neurons have axons. The axon was what was recorded. And the auditory nerve neuron has a cell body. And it also has a peripheral axon that goes out to the periphery and contacts an inner hair cell. As we saw before, these are type one auditory nerve fibers going to inner hair cells. And each contacts usually one inner hair cell. Now, you can know exactly where that auditory nerve fiber started out by tracing it, from the base of the cochlea through the spiral and all the way up to the apex.
So starting at the base to the apex-- that's 100% distance, let's say. And if this were halfway between the base and the apex, that would be the 50% distance place. This guy ending up near the apex might be at 80% distance from the base to the apex. OK, does everybody see how I can make that mapping? These sausages here are the outlines of the ganglion, the spiral ganglion, where the cell bodies of the auditory nerve fibers are. So what good is that mapping? Well, before we put the HRP in, we measured the tuning curve. And we got the CF from the tuning curve. So we measured the CF. We injected the tracer to label the auditory nerve fiber. And we reconstructed where the labeled ending of the auditory nerve fiber contacted its inner hair cell. Why did we do this for five of these? Well, in the ultimate experiment, you just do it for one. But if you're getting good at reconstructing the mapping, you can tell it should be about the 50% place. And you go and find it's 51%. You know that fiber was different than the one up there. Then you make your mapping-- characteristic frequency to position of innervation along the cochlea. And here is the mapping. These are the CFs. And this is the percent distance along the cochlea from the base. So 0% distance from the base would be the extreme base. 100% distance would be the extreme apex. And you can see this beautiful mapping of CF to position, almost a straight line, until you get to the lowest CFs. And, as usual in the auditory system, this frequency axis-- it's the CF axis now-- is on a log scale. So log frequency maps to linear distance along the cochlea. Now, if the brain hears that the 50% distance auditory nerve fiber is responding and no other auditory nerve fiber is responding, it knows it's listening to a 3 kilohertz frequency. Place to frequency mapping is tonotopic. I said that opposite. Frequency to place is tonotopic. So this is a tonotopic mapping-- frequency to place, tonotopic. And why is that important? Well, it happens in the cochlea. It happens in the auditory nerve. It happens in the cochlear nucleus of the brain. It happens in almost all the auditory centers in the entire brain, all the way up to the cortex. You have neurons or fibers responding to low CFs over here in the brain. And if you move your electrode over here, you find they're responding to mid frequencies. And if you move it over here, they're responding to high CFs. So this organization is fundamental. It starts at the receptor level in the cochlea. It's conveyed by the nerve into the cochlear nucleus. And you have these beautiful frequency-- they're actually CF-- organizations in the brain. So the place code for sound frequency presumes that each frequency stimulates a certain place along the cochlea. And I guess, if you generalize this from the auditory system to the visual system-- if you have a particular light source, like that light over there, and my eyes are looking this way, that light is going to stimulate a particular place in my left retina and in my right retina. So you have a coding for where that light is by the place in the retina. In the auditory system, you don't have that kind of a place code. You have a place code for sound frequency. It's very different. The cochlea maps frequency. How can we use this code? We're actually very good at distinguishing closely spaced frequencies. And here is now some psychophysical data from human listeners. We're going to get away from the auditory nerve for a while and talk about listening studies.
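As an aside, that log-frequency-to-linear-place relation has a standard parametric form, the Greenwood function. Here is a minimal sketch using commonly quoted human parameters-- the constants and the 35 millimeter cochlear length are assumptions for illustration, not the cat or chinchilla data on the slide-- together with a preview of a calculation we'll do after the listening demo, namely how far a 2 hertz step at 1,000 hertz moves you along the cochlea:

    # Greenwood-style frequency-to-place map, with commonly quoted human
    # parameters (A, a, k) and a 35 mm cochlear length -- all assumptions
    # for illustration, not this lecture's animal data.
    import numpy as np

    A, a, k = 165.4, 2.1, 0.88
    length_mm = 35.0

    def place_from_freq(f_hz):
        """Fractional distance from the apex (0 = apex, 1 = base)."""
        return np.log10(f_hz / A + k) / a

    def freq_from_place(x):
        """Inverse map: characteristic frequency at fractional distance x."""
        return A * (10 ** (a * x) - k)

    dx = place_from_freq(1002.0) - place_from_freq(1000.0)
    print(f"1,000 -> 1,002 Hz moves about {dx * length_mm * 1000:.0f} micrometers")
    # With ~3,500 inner hair cells along ~35 mm, neighbors sit roughly
    # 10 micrometers apart -- so the best 2 Hz discrimination corresponds
    # to a place shift on the order of one inner hair cell.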
Here is a graph of frequency. And on the y-axis is delta f. What's delta f? Delta f is the just noticeable difference for frequency. Of course, we're talking about sound frequency. And how is the experiment conducted? Well, you have your listener. Your listener is listening to sound. And you give them a 1 kilohertz sound. And then you give them a 2 kilohertz sound. The experimenter says, does it sound the same or different? Ah, completely different. OK, a 1 kilohertz sound and a 1,100 hertz sound, same or different? Ah, completely different. OK, a 1,000 hertz sound and a 1,010 hertz sound. Ah, it's different. 1,000 hertz and a 1,002 hertz sound? I'm not so sure. Give it to me again. OK, a 1,000 hertz sound, 1,002 hertz? Eh, it's just a little bit different. A 1,000 hertz sound and a 1,001 hertz sound? Same. OK, so that's the experiment. So we have the graph here for the just noticeable difference in frequency, as a function of frequency. And at 1,000 hertz-- that's right in the middle of your hearing range-- the delta f-- well, it's hard to read that axis-- the delta f is about 1 or 2 hertz. So 1,000 versus 1,002 hertz is just barely distinguishable for human listeners. You can do that experiment a little bit differently. Instead of giving two tones, you can give one tone and vary its frequency a little. And that's kind of a pleasing sound. Does everybody know what a vibrato is on a stringed instrument? That's a plain A. But if you vibrate it a little-- that's the frequency going back and forth. Everybody could hear that vibrato, right? Even though I'm changing the frequency just a tiny bit. So you could do the experiment by vibrating just one single frequency. Is it vibrating? Or is it not vibrating? And you get about the same result. That's what the second graph is. People who are tone deaf-- not proficient at music, but without any hearing problems-- are almost always able to distinguish frequencies with a little bit of training. The training is just, here's the task-- that type of training. OK, so I have a demonstration here. And we can listen to this and see how good you guys are-- you know, naive, untrained listeners-- and see if we're good at distinguishing frequency. So the demonstration is a little bit complicated. So I'll go through it. It's going to give you 1,000 hertz, the standard, middle-of-your-hearing-range frequency. And it's going to give you a bunch of different groups. I'm going to go through these slowly. And in each group, we have 1,000 hertz and 1,000 hertz plus delta f. OK, delta f for group one is 10 hertz, a big frequency spacing. And what you're going to listen to is A, B, A, A, where A is f-- 1,000 hertz-- and then f plus delta f-- 1,010 hertz. And B will be reversed: first 1,010 hertz and then 1,000 hertz. Then A, 1,000 hertz, 1,010 hertz; and another A, 1,000 hertz, 1,010 hertz, just to give you a bunch of different examples. Then in group two, delta f will be a little bit harder, 9 hertz, OK, so on and so forth, down to group 10, which will be a delta f of 1 hertz-- seeing if we can distinguish 1,000 and 1,001 hertz. OK, so let's listen to this. MAN IN AUDIO: Frequency difference file for JND. You will hear 10 groups of four tone pairs. In each group, there is a small frequency difference between the tones of the pairs, which decreases in each successive group. [BEEPING OF TONE PAIRS] PROFESSOR: That's group one. Here's group two. [BEEPING OF TONE PAIRS] PROFESSOR: OK, could everybody do the big interval, delta f equals 10? Raise your hand if you could do that. Most-- some people can.
OK, how about your limits, for people who could do it? Where did you-- AUDIENCE: I heard eight. PROFESSOR: About eight, OK. And what was your limit going down? AUDIENCE: Nine. PROFESSOR: Group nine, or delta f? OK, so a delta f of 2. I cut out at about between two and three. Well, for those of us who could do it, without any training at all, you get to what the best results are for people who have done this for days and days of practice. And this is not an ideal listening room. There's a lot of fan noise. There are some distractions too. Ideally, you'd be in a completely quiet environment, perhaps wearing headphones. But it works pretty well. I'm not sure what it says about the people who can't do it. And there certainly are some. So I don't know if you should have your hearing tested or whatever. But for those of us who could do it, you get quickly to the best possible results. So you could do a calculation then on these. We know what delta f is. Let's say it's 2 hertz at 1,000 hertz. And let's go back to our mapping experiment. So here's the 1,000 hertz CF. Let's say we're listening at the CF. And we're moving from 1,000 to 1,002 hertz, the best possible psychophysical performance. We can go up along this cochlear frequency map and say, well, what percent distance did we move from the 1,000 hertz point to the 1,002 hertz point? And I don't know where it is. Well, it's about the 70% distance place in this animal. This is a cat, of course. You can't do these kinds of studies in humans. You could map it out in humans. It turns out that if you know how many inner hair cells there are along the base-to-apex spiral-- we had that number before-- and you know the distance you're moving, you can make the calculation for the best possible performance. You're moving from one inner hair cell to its neighbor. So it's a very, very small increment along the cochlear spiral you're moving. That increment is associated with the best possible psychophysical performance in terms of frequency distinction. OK, that's the cochlear frequency map. OK, any questions about that? Now let's go back to the auditory nerve and talk more about coding for sound frequency. So far, we've just been exploring single tone response areas. So now let's make the stimulus a little bit more advanced and talk about coding for two tones. What happens when you have two tones? Well, here is a tuning curve, plotted with open symbols here, of the kind we had before with one tone. So everything within this white area is excitatory. You put a frequency of 7 kilohertz in at 40 dB SPL, and the neuron is going to fire all kinds of action potentials. Now, let's put in a tone right at this triangle, called the probe tone. It's usually right at the CF. And it's above the threshold. In this case, it looks like it's at about 25 dB. And it gets the neuron responding. You put that probe tone in, and the neuron is going to fire some action potentials. Keep that probe tone in, so the neuron is firing action potentials. And put a second tone in. And the second tone is often outside the response area. And it turns out that anywhere in this shaded area, above the CF or below the CF, a second tone, as is illustrated here, will decrease the response to the probe tone in a dramatic fashion. Then, when you turn off this second tone-- sometimes called a suppressing tone-- the original activity comes back. And this phenomenon is called two-tone suppression. At first, it was called two-tone inhibition.
People thought, oh, OK, there's another nearby neighbor nerve fiber that's inhibiting this first one. And they looked in the cochlea and there weren't any inhibitory synapses. OK, so that was a problem. They started calling it suppression. And they actually ended up finding it in the movement of the basilar membrane. So it's just something about the vibration pattern of the cochlea that causes the movement of the membranes to be diminished by a second tone on either side of the first. Now, why do I bring this up? Well, it's kind of interesting in a number of contexts. Two-tone suppression might be a form of gain control. If you just had the excitatory tuning curve, and you started listening in a restaurant where everybody was talking and there was a lot of sound, all your auditory nerve fibers might be discharging at their maximal rates. And you wouldn't be able to follow the interesting conversation your two neighbors were having. You wouldn't be able to eavesdrop. You wouldn't be able to have a conversation yourself. So two-tone suppression is a form of gain control, where the side bands reduce the response to the main band, so that not everything's being driven into saturation. That's one reason. And a second reason is you can actually use this in a sort of tricky psychophysical paradigm to measure the tuning of human listeners. We obviously can't go into a human's auditory nerve with a microelectrode, although it's been done a couple of times in surgery. But it's rare. It's easy to do a so-called two-tone suppression paradigm, where you have the person listen to the probe tone. You say to the listener, here's a tone. I want you to listen to that. I'm going to put in a second tone. Ignore that. Don't worry about it. Just listen to that first probe tone. Tell me if you can hear that original probe tone. Ah, yeah, sure, I can hear it. Then I put in a second tone. Oh, I can't hear the probe tone anymore. OK, the second, or side, tone has suppressed the response to the probe tone. And you can use that as a measure of tuning, because these suppression areas flank the excitatory area. And so here are some results from humans in a so-called psychophysical tuning curve paradigm. And these are a half a dozen or so tuning curves. Each one has associated with it a probe tone, or a test tone. The task is: listen to that test tone and tell me if you still hear it or if it's gone away. The experimenter introduces a second tone, a so-called masker, at those frequencies and levels. And where the line is drawn, the person who's listening to the probe tone says, I can't hear that probe tone anymore. Something happened to it. Well, two-tone suppression happened to it. The masker masked the response to the probe tone, or test tone. And look at the shapes of those tuning curves. They look like good old auditory nerve fibers. They have a CF. The CF is right at the probe. They have a tip region. They have a tail region. If you measure the sharpness-- how wide they are-- they're really sharp at high CFs. And they get a little bit broader as the CF goes down. At high CFs they have a tip and a tail. At low CFs they look more V-shaped. We can go back to the auditory nerve tuning curves with those in mind. And look how similar they are. Here's a high CF, tip and tail. Low CF, just sort of a plain V, and they're wider. Human psychophysical tuning curves have that same general look. Now, remember, this is a very different paradigm. Here there are two tones. The probe tone is one of them.
And the masker, or second, suppressor tone is the other one. Whereas in the good old fashioned auditory nerve fiber tuning curve there was just one, the excitatory tone. OK, so psychophysical tuning curves are obtained from humans in the following paradigm. We went over that. These tuning curves and the neural tuning curves from animals are roughly similar. Now, what would you expect to happen to these tuning curves and the neural tuning curves if you had an outer hair cell problem? And this is kind of the classic-- oh, yeah, you can sort of pass that around-- a classic exam question. Draw a tuning curve. So you label this with frequency. This is the sound pressure level for a response-- I don't know, we can say whatever response you want to-- 10 spikes per second. Label the CF. Here it is. This is a normal one. What's the axis here? Well, the CF might be-- the threshold might be at 0 dB. The tail comes in-- let's go to our animal tuning curve just so we get this right. Oops, pressed the wrong button. OK, so the tip on this one is at about 20. The tail is coming in at about 60. So we are starting down-- well, let's say it's 20. This is going to be 60 dB SPL-- normal. Draw the tuning curve in an animal where the outer hair cells are damaged. Well, you could say there's no response. That wouldn't be quite right. OK, remember-- this is the nerve fiber we're recording from, a type one. This is the inner hair cell. These are the outer hair cells. And we're saying damage them, lesion them. You could have it in a knockout animal where they had lost their prestin. OK, so the cochlear amplifier is lost. What sort of a hearing loss do you have when you lose the cochlear amplifier? 40 to 60 dB, right? Well, what's this interval? 40 dB, right? And it turns out, when you record from a preparation in which the outer hair cells are killed or lesioned, this is the kind of tuning curve you find-- a tip-less tuning curve, at least at these high frequencies that have a tip. At the low frequencies they look more bowl shaped. But there's a 40 to 60 dB hearing loss. You're not deaf, but you have a greatly altered function. How good would this function be for telling the difference between 1,000 hertz and 1,002 hertz? Not so good, right? You need a very sharply tuned function to tell, or discriminate, between two closely spaced frequencies. If you have an outer hair cell problem, not only are you going to be much less sensitive, but you're not going to be so good at distinguishing between frequencies. Another way to think about it is that if there were a whole bunch of frequencies down here and your hearing aid boosted them, you wouldn't be able to listen to your characteristic frequency anymore, because these side frequencies would be getting into your response area. So these are non-selective response areas, whereas the normal, sharply tuned ones are very selective. And what are they selective for? For sound frequency. OK, so the outer hair cells give you this big boost in sensitivity and the sharp tuning of the tip. That's the cochlear amplifier part of the function. OK, now, how could we do this? Well, recently, within the last 10 years, you can have a knockout animal. But in the old days, you could lesion outer hair cells by many means. You could lesion them with loud sounds. Well, loud sounds actually end up affecting inner hair cells a little bit as well. So the preferred method of lesioning outer hair cells was with drugs. For example, kanamycin is a very good antibiotic.
It kills bacteria. Unfortunately, it's ototoxic. It kills hair cells. And if you give it to animals in just the right dose, you can kill the outer hair cells, which for some reason-- it's not known-- are more sensitive to it. If you give a higher dose, it will also kill the inner hair cells. But you can create animal preparations in which the outer hair cells are gone and the inner hair cells are remaining, at least over a particular part of the cochlea. And from that part, you can record these tip-less tuning curves. OK, so that is mostly what I want to say about place coding for sound frequency. And now, I want to get into the second code for sound frequency that we have, which is a temporal code that's based on the finding of temporal synchrony in the auditory nerve. This is the so-called phase-locking. Again, we're doing the same kind of experimental preparation. We stick our recording electrode in the auditory nerve. And we record from one single auditory nerve fiber. And we measure its spikes. Each one of these little blips is a spike. The very top trace is the sound waveform. The next trace is the response of the auditory nerve fiber. And these are superimposed multiple traces. And that trace is with no stimulus. So this auditory nerve fiber is obviously very happy, firing away with spontaneous activity. Then let's turn the sound on. The top trace is on now. This is with the stimulus. And look at how these auditory nerve fiber impulses tend to line up at a particular phase of the sound stimulus. What's phase? Well, it's just the degrees of the sine wave as a function of time. The sound pressure-- this is sound pressure-- is going through 360 degrees of phase here-- 180 degrees here. And it looks like many of the spikes are lining up around the 80 degree point. So a lot of the firing is right here. Not so much firing here. Not so much firing here. And then another waveform comes along and you get some more firing at about the same time. Now, one very common misconception about phase-locking is that every time the sound waveform goes through-- in this case 80 degrees-- the fiber fires an impulse. That's not true at all. Here is a single trace, showing excellent phase-locking. And there's a response to the first waveform. But then the fiber takes a break and doesn't respond during the second. And it looks like it responds on the third and the fourth. But then it takes a longer break and doesn't respond at the fifth or sixth, but it responds at the seventh, and not at the eighth or ninth, then on the 10th and 11th. So it doesn't matter. You don't have to respond on every single waveform. You can respond on one waveform and take a break for 100 waveforms, as long as when you respond, the next time it's at the same point-- in the same phase-- of the sound wave. So typically, to get these data, you average over many hundreds or even thousands of stimulus cycles, where one complete cycle is 0 to 360 degrees. These are plots of auditory nerve firing. So this is a firing rate axis, percent of total impulses. This is now a time axis. So we're just saying when it fires along in time. And the stimulus, I believe, here is 1,000 hertz, so it's the middle of the hearing range. And this is excellent phase-locking. If you were to quantify this-- there are many ways to quantify this-- you could fit, for example, a Fourier series to it. And you could plot just the fundamental of the Fourier series. And that's what's known as the synchronization coefficient.
And you plot it as a function of frequency. You could make your measurements at 1,000 hertz, which is this point on the graph. You could make them at 5,000 hertz. You could make them at 500 hertz. This synchronization coefficient ends up being between 0.8 and 0.9 for low frequencies. And then it rolls off, to essentially random firing, at around 3,000 or 4,000-- certainly by 5,000 hertz. So this behavior, this phase-locking, goes away toward the high end of our hearing range. It just means that the auditory nerve can no longer synchronize at very high frequencies. So what's going on here? The auditory nerve fiber is getting its messages from the hair cell, right? Here's the auditory nerve fiber. And it's hooked up to an inner hair cell. And it's sending messages. What are the messages? Neurotransmitter. When the waveform goes like this, the auditory nerve fiber is responding. Ah, it's getting lots of neurotransmitter. Well, that was when the stereocilia were bent in one direction. Ions flowed in. The inner hair cell was depolarized. It released lots of neurotransmitter. Let's go a little bit longer in time, to this bottom part of the phase curve. The stereocilia were bent the opposite direction. The ion channels closed off. The inner hair cell went back to its rest-- minus 80 millivolts, let's say. And it said, I'm not excited anymore. I'm going to shut off the flow of neurotransmitter. The auditory nerve fiber goes, oh, we're quiet. We don't need to respond. Go back the other direction, and the stereocilia bend back the other way. Ah, I'm depolarized. I'm going to go to minus 30 millivolts. Ah, well, let's release neurotransmitter. Oh, wow, there's something going on. I'm going to fire. I'm going to fire all these action potentials. It's going back and forth, back and forth. At some point, though, this is going back and forth so fast that it just gets to be a blur. There is a sound there. It's depolarizing the hair cell. But it can't do this push-pull kind of thing. It's not fast enough. Even though there's a nice synaptic ribbon there to coordinate the release of the vesicles, it gets overwhelmed. Remember, at 1,000 hertz, this is going back and forth in 1 millisecond. At 5,000 hertz, it's going back and forth five times in 1 millisecond. That's pretty fast. And it gets overwhelmed. There's a response-- there are more action potentials with the stimulus than without-- but they're no longer synchronized. It gets overwhelmed. And phase-locking goes away. We can distinguish 5,000 from 6,000 hertz very nicely when we listen. We're just not using this code, because there's no temporal synchrony in the auditory nerve at very high frequencies. This is kind of an interesting code for sound frequency, because the timing is going to be different for different frequencies. Imagine at low frequencies-- and imagine just for the sake of argument-- that the auditory nerve fiber is going to respond on every single stimulus peak. Let's say this is 1,000 hertz. And now let's say we dial in 2,000 hertz, which is going to end up going twice as fast. I'm not a very good artist here. But you can imagine that the firing is going to be twice as often, if for the sake of argument we're firing on every stimulus cycle, which may not happen. But this is kind of an interesting code. Because if you're sitting in the brain and you're getting firing very far apart, you're going to say, OK, that's a low frequency. But if you're getting firing very close together, you're going to say, oh, that's a higher frequency.
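The synchronization coefficient mentioned a moment ago is usually computed as the vector strength: each spike time becomes a phase angle on the stimulus cycle, and the length of the mean resultant vector-- near 1 for tight locking, near 0 for random firing-- is the coefficient. A minimal sketch with simulated spike times (the jitter and spike counts are invented):

    # Vector strength, the standard measure of phase locking. Spike times
    # here are simulated, not recordings; jitter and counts are invented.
    import numpy as np

    def vector_strength(spike_times_s, freq_hz):
        phases = 2 * np.pi * freq_hz * np.asarray(spike_times_s)
        return np.abs(np.mean(np.exp(1j * phases)))

    rng = np.random.default_rng(1)
    f = 1000.0                                   # 1 kHz tone
    cycles = rng.integers(0, 1000, 300)          # the fiber skips cycles freely
    locked = cycles / f + rng.normal(0, 50e-6, 300)   # but fires near one phase
    unlocked = rng.uniform(0, 1.0, 300)          # random firing, for contrast

    print("locked:  ", round(vector_strength(locked, f), 2))    # ~0.95
    print("unlocked:", round(vector_strength(unlocked, f), 2))  # ~0.05
    # Note that skipping cycles does not hurt the measure -- exactly the
    # point made above about phase locking without firing on every cycle.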
So is there some little detector in the brain that's detecting these intervals, how fast the firing is? Well, we don't know that. But we certainly know that such a code is available in the auditory nerve at low frequencies, but not at high frequencies like 5 kilohertz. So what's the evidence that we're using one code or the other? Clearly, the place code has to provide us with frequency information at the high frequencies. There is no temporal code at those high frequencies. Down low, which code do we use? Well, we probably use both. That's another way of saying, I'm not really sure. But let me give you some data from musical intervals that might suggest that this time code is used. What are the data? We have to talk a little bit about perception of musical intervals. And we might as well start out with the most important musical interval, which is the octave. Does everybody know what an octave is? Yeah, what is an octave? I can't explain it, but I know it. What about on a piano? You go down and hit middle C. Where is the octave? AUDIENCE: The next C. PROFESSOR: The next C, right. You've even called it the same letter, because it sounds so similar. But in precise physical terms, an octave is a doubling of frequency. Whatever frequency middle C was, if you double that frequency, you get an octave above middle C. So we have some data here for two frequencies, one 440 hertz and another an octave above, 880. So double it. And why did we pick 440 hertz? So that corresponds to a note-- just-- yeah, right, it corresponds to A. And I was trying to think if that A is below or above middle C. I think it's above. So what's important-- you guys knew that right away. What's important about that? AUDIENCE: Orchestras tune to it. PROFESSOR: Orchestras tune to it. So can you give it to me? OK, I'll give it to you. [WHISTLES] Sorry, that's A 440. And so here's A 440 on the violin. [PLUCKS A NOTE] OK, now, how do I know that? Because orchestras tune to it. So for about 20 years, I sat in an orchestra. And the first thing you did-- [LAUGHS] OK, tune, you guys. And what instrument gives the tuning note? If you're in junior high, it's this little electronic thing. But if you're in the BSO, what instrument gives the tuning note? AUDIENCE: The violin. PROFESSOR: No, violins go out of tune like crazy. AUDIENCE: Oboe. PROFESSOR: Oboe, right, because the oboe's a very stable instrument. And if the barometric pressure goes up and the humidity goes down, the oboe's still going to give you A 440. So A 440 is a very important musical note. And all these instruments, of course, have a whole bunch of harmonics. This string is vibrating in a whole bunch of different modes. But the fundamental-- where the whole string vibrates-- is A 440. OK, so here's A 440, or approximately. Now, an octave above that is a very nice sound. It's another A. That's A 880, the fundamental. And if I sound them together, they sound very beautiful. And in any musical culture, the octave is a very predominant interval, because it sounds so wonderful to your ear. And violinists, I can tell you from experience, spend a lot of time practicing to tune their octaves perfectly. And if you've ever listened to a professional go like this, every time they go up and down, the octave is just beautiful. But if you've been to a middle school or elementary school concert, it's a little different. Because sometimes when those students play an octave, it doesn't really hit exactly an octave.
And now I'm going to give you a demonstration that's 440 and not quite 880. OK. And it's not going to sound exactly the same. So here it is. And that's an interval I've listened to many times. But it's not a desired interval. It's a very dissonant interval. So what is terribly displeasing about something that's not quite an octave, versus an octave? That is a question that the place code has a lot of problems with. Because, for example, along the cochlea there is a place-- it's quite near the apex-- for the 440. And then if you go more basally, there's another place for the 880. And there's a place for 879 and 878. And those would be very dissonant. But there's no reason that those two places have any link to one another in the place code. There's a place for 1,000 and a place for 2,000. Why do they sound so wonderful together? The timing code, though, has an answer for that. And here are some data to show you why those two intervals meld so well together. If you look at the spike pattern in response to either one of these frequencies, and compute what are called the intervals between the spikes-- so-called interspike intervals-- every time you get a spike, you start your clock ticking. And that interval is timed until the next spike fires. That's an interspike interval. And obviously, if this is phase-locked, these intervals are going to have a close relationship to the stimulus period. So here's a spike, and here's an approximately two-cycle interspike interval. Here's a short interval-- it's one complete cycle. Here's a long interval-- it's now three complete cycles. Here's another three-cycle interval. Here's a one-cycle interval. You could make a very nice plot of the interspike interval in milliseconds, the time between the spikes. And these are the number of occurrences on the y-axis. So for 440 hertz it's the dashed curve here. And you get a big peak here at a multiple of the period. So at 440 hertz, the sound waveform is taking about 2 and 1/4 milliseconds to go through one complete cycle. And these intervals would come from firing on successive periods, which, obviously, the nerve fiber can do. But sometimes they take a break, and they fire only every other period. And that's a double of this interval. And so you have a lot of firing at about 4 and 1/2 milliseconds, a lot of firing at about 7 milliseconds, and so on and so forth. So this is an interspike interval histogram from auditory nerve firing in response to this low frequency. Now, let's double the frequency. Now, the sound waveform is going back and forth twice as fast. And you have-- no surprise-- firing, in some cases, at intervals twice as short. So here's an interval for the 880 hertz that's about 1 and 1/4 milliseconds. But here we have a firing pattern that's exactly-- within the limits of experimental error-- exactly the same as for the 440 hertz. Here we have an interval that's representative of skipping a stimulus waveform, or two stimulus waveforms, for 880. But here we have a peak at exactly the same place as for 440 hertz, because these intervals are lining up-- every other one-- for the presentation of the octave. When you put those two sounds together, you're going to get the combination pattern. And many of the intervals are going to be precisely on one another. And that is a very pleasing sensation for you to listen to. If you look at other very common musical intervals, like the fifth or the fourth, you will have many overlapping periods of interspike intervals in auditory nerve firing.
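That interval argument can be sketched directly: the candidate interspike-interval peaks are just multiples of each tone's period, and for a true octave every peak of the 440 hertz tone coincides with every second peak of the 880 hertz tone, while a mistuned 870 hertz shares none. A minimal sketch (computed peak positions, not recorded data):

    # Interspike-interval peaks are multiples of the stimulus period. For a
    # true octave (440 and 880 Hz) the peaks interleave exactly; for a
    # mistuned pair (440 and 870 Hz) they never coincide. Computed values,
    # not recordings.
    import numpy as np

    def isi_peaks_ms(freq_hz, n):
        return 1000.0 * np.arange(1, n + 1) / freq_hz

    p440 = isi_peaks_ms(440, 6)
    p880 = isi_peaks_ms(880, 12)
    p870 = isi_peaks_ms(870, 12)

    octave = [round(x, 2) for x in p440 if np.isclose(x, p880).any()]
    mistuned = [round(x, 2) for x in p440 if np.isclose(x, p870).any()]
    print("440 peaks shared with 880:", octave)    # all six of them
    print("440 peaks shared with 870:", mistuned)  # none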
Fifths and fourths are very common musical intervals. If you look at a dissonant interval, like 440 and 870, there will be no overlap between those two frequencies in the auditory nerve firing. Now, let's go back to psychophysics, and I'll give you one more interesting piece of the puzzle for why temporal codes might be important: octave matches become more difficult-- and actually impossible-- above 5 kilohertz. OK, well, how does that fit in? Well, we just said that phase-locking-- because the hair cell and the auditory nerve can't keep up with one another-- diminishes at these high frequencies. And there it becomes impossible to match octaves, because you don't have this timing code in the auditory nerve. So most musical sounds are confined to the spectrum below 3 kilohertz. If you look even at the upper limit of the piano keyboard, you're sort of right at 3 kilohertz or so. And that's probably the reason. It's a very likely reason. Now, we have a research paper for today. And I'll give you the bottom line, or the take-home message, for that. And this is an interesting neurophysiological study based on a psychophysical phenomenon called the octave enlargement effect. So I wasn't quite truthful when I told you that octaves are the most perfect interval to listen to. Because it turns out, if you give people a low tone and give them an oscillator and say, dial in an octave above that, they'll dial it in and say, ah, that sounds so great. But if you look really carefully, it's not exactly an octave. There's a small deviation. What they dial in-- especially at high frequencies-- remember, you can't do this at really high frequencies, but at 2,500 or 1,500 hertz, toward the high end of where you can do it-- is actually a little bit more than an octave. And they say, ah, that sounds great. But it's not exactly an octave. So this paper looked and said, what about auditory nerve fiber firing can explain this octave enlargement effect-- the fact that people dial in a little bit more than an octave for the upper tone? These are the psychophysical measurements. And those just come from previous studies. What they did was they recorded from the auditory nerve. And they looked at interspike interval histograms, like we've just been talking about. And they saw something really interesting at the very high frequencies. So they're going to especially concentrate in here. They found that the very first interval didn't match exactly what was predicted. So this is a stimulus of 1,750 hertz, toward the right end of the graph here. And where you'd predict the intervals to happen is shown by the vertical dashed lines. And they fired right on those predictions, except for the very shortest intervals. And they said, what's going on here? Well, when you get to very high frequencies, what are we talking about for these intervals? Even at 1,000 hertz, what's the time scale here? This is 1 millisecond. What problems do you get when you ask a nerve fiber to fire, and then you ask it to fire again a millisecond later? That's a very brief interval. Anybody know? What is the limit of firing? Can nerve fibers fire closely spaced action potentials, you know, less than a millisecond apart? What's the problem that they have? AUDIENCE: Is it that they are polarized? PROFESSOR: They're hyperpolarized, right. What else? AUDIENCE: Because of the refractory period. PROFESSOR: That's right. There's something called the refractory period, which can cause them to hyperpolarize.
So what's happening-- if this were a nerve cell membrane, what happens is that sodium channels open up to allow sodium to come in, depolarize the neuron, and fire an action potential. And then those channels are turned off. And potassium channels open up and allow potassium to flow out and even hyperpolarize the cell. But these channels take a little bit of time to recover. It takes a little bit of time for the sodium channels to turn off and get ready to fire another action potential. It takes a lot longer time for the potassium channels to close and get ready to fire another action potential. And the refractory period is the time it takes for everything to recover fully, so we're ready to fire again. And in the limit, that's supposed to be about 1 millisecond. That's the absolute refractory period. So when I was drawing things here at a millisecond and less, I wasn't really being truthful. There's also something called the relative refractory period, which is a couple of milliseconds. And the nerve fiber can respond, but it's not going to respond quite as quickly as before. All these channels aren't completely reset. It's going to take a little bit longer time to respond. That's what's going on in this very first peak. Remember, this first peak indicates firing at successive cycles of the sound waveform at 1,750 hertz. It's a very brief interval. And what happened is you fired, and then you fired the next action potential, but delayed a little bit. So the interval is a little bit longer-- pushed up toward the next peak-- so that the predicted interval was actually too short. What the brain is getting is an interval pattern whose shortest interval is, in effect, standing for a frequency that's a little too low. When it hears that and wants to recreate the matching higher tone, it's going to dial in a little bit too high in frequency, because that frequency sounds better with this lengthened interval. OK, so go ahead and read that paper. It's a very interesting study of how neuronal firing can give you a psychophysical phenomenon. That's quite interesting. And it happens especially at the high frequencies. Octave matches at the low frequencies are sort of as you would predict, as is the auditory nerve firing. OK, any questions? If not, we'll meet back on Wednesday. And we'll talk about the cochlear nucleus, which is the beginning of the central auditory pathway.
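And as a hedged sketch of the paper's mechanism-- with invented constants, since the actual model is more careful-- you can caricature the refractory effect as an extra delay added to the shortest intervals, which makes the one-period interval come out longer than a true period:

    # Caricature of the octave-enlargement mechanism: the one-period
    # interspike interval at a high frequency collides with the relative
    # refractory period and comes out slightly long. Constants are invented.
    period_ms = 1000.0 / 1750.0                  # ~0.57 ms at 1,750 Hz

    def observed_interval(n_periods, refractory_ms=2.0, max_delay_ms=0.08):
        """Nominal n-period interval plus a delay that shrinks as the
        interval grows clear of the relative refractory period."""
        nominal = n_periods * period_ms
        delay = max_delay_ms * max(0.0, 1.0 - nominal / refractory_ms)
        return nominal + delay

    for n in (1, 2, 3, 4):
        print(f"{n} period(s): nominal {n * period_ms:.3f} ms,"
              f" observed ~{observed_interval(n):.3f} ms")
    # The shortest interval reads as slightly too long, i.e., as slightly
    # too low a frequency -- so a listener matching interval patterns dials
    # the upper tone slightly sharp: an enlarged octave.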
MIT_904_Sensory_Systems_Fall_2013
7_Depth_perception.txt
PROFESSOR: All right, so today our topic is going to be depth perception, which, as I have mentioned to you before, is certainly one of the most intriguing achievements in vision. Because the impression onto the retinal surface is essentially two dimensional. And from that, somehow the brain needs to reconstruct the third dimension. And what is also interesting about this is that even in the most primitive animals, this is a must. And so animals with tiny brains also have mechanisms to be able to calculate depth from the information that comes in through their eyes. And to demonstrate that, I have here a frog, which has a tiny little brain like that, has big eyes, and this frog, for its existence, needs to know exactly where things are in depth. Because if he doesn't, he would starve to death. And so what a frog does looks something like this, very crudely. He will stick out his tongue, grab a flying insect, and consume it. And because of this incredible capability, it is a well-adjusted, healthy animal in most parts of the world. Now the big question then comes up: how do we carry out these computations? What kinds of mechanisms are involved in being able to compute where things are in space, either in an absolute sense-- where something is relative to you-- or in a relative sense-- where one object is relative to another? Now it turns out that this became such a serious problem, in the course of evolution, that actually several different mechanisms have evolved to make possible our ability to see things in depth. And so when one looks at this as a list, a fairly brief list, we can make a distinction between so-called oculomotor cues and visual cues. The oculomotor cues are accommodation and vergence. So if various objects are at various distances from you, your eyes converge or diverge. And your lens gets thicker and thinner. And that information can be utilized, in a rather crude way, to tell you where things are relative to you. Now as far as visual cues are concerned, the very significant one we are going to talk about quite a bit is a binocular cue, which is called stereopsis, as you all know. And then we have a whole bunch of monocular cues: motion parallax, shading, interposition, size, and perspective. And so we will talk about many of these, to give you a sense of what it is like, and to give you a sense of what various brain structures do with this, as a result of extensive research that has been done in this area. So now, first of all, let's talk about stereopsis. And when we talk about stereopsis, we're going to talk about the basic facts of it, and then we are going to have some demonstrations. First of all, the so-called stereoscope-- a modern version of which has been handed out to you-- was invented in the 19th century. And when that was done, the initial approach was to present to each eye separately an image that was taken by a camera that has two lenses, which are apart about as much as your two eyes are apart. And each of those created a separate image of what's out there. And, of course, each eye gets a very slightly different perspective of what's there.
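To put a rough number on "slightly different perspective": for small angles, the binocular disparity of a point displaced in depth from the fixation point is approximately the interocular distance times the depth difference, divided by the square of the viewing distance. A quick numeric sketch-- the 6.3 centimeter eye separation is a typical adult value, assumed here:

    # Small-angle geometry of binocular disparity:
    #   disparity (radians) ~ interocular_dist * depth_difference / distance**2
    # The 6.3 cm eye separation is a typical adult value, assumed here.
    import math

    a = 0.063          # interocular distance, meters
    d = 0.50           # fixation distance, meters
    delta = 0.01       # target 1 cm beyond the fixation point

    disparity_rad = a * delta / d**2
    disparity_arcmin = math.degrees(disparity_rad) * 60
    print(f"{disparity_arcmin:.1f} arcmin of disparity")   # about 8.7 arcmin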
And then when you present these two images, that you had collected separately, to each eye, you get a very strong sense of real depth, as you will see in just a minute. Now another way to do it, which nowadays is easier-- because you can barely ever find one of these two-lens cameras, even in stores that sell ancient materials, antique stores-- so sometimes what you do instead, if you only want to take a picture of a static image, is you take a camera, put it on a track, and have it take two pictures in succession. And then you can do the same thing as you do with a stereo camera. You can present one to each eye. OK, so, what we are going to do now, we are going to have a series of demos. And so we have a handout for each of you, the paper. And that you can keep and take home. But the stereoscope that I have for each of you, that you're going to have to leave behind, because I need to use it in other classes. So what I want you to do then-- there are two pictures on the first page. You put the stereoscope down onto the page so that the vertical line cuts it in half, so that one picture goes into each eye. And then you put your head right down to it to look into it, all right? And if you do that, if you have it properly sectioned, you're going to have a sense that that image is actually three dimensional. It's an ancient, ancient old picture, on purpose. But you should still be able to see it in depth. So that's the initial thing. This became quite a parlor game, and stayed that way for many, many decades. Whenever you went to a party, they would hand out to you a stereoscope, a handheld one, and they would show you all kinds of images. And you can even do this today, when you get on the internet and find such displays. Now, then a very important discovery was made. I shouldn't say discovery, really-- I should say an invention was made, by Bela Julesz, who came up with the so-called random dot stereograms. By the way, don't look at the bottom one; that just tells you what it's going to look like. There's nothing to look at in the bottom set. Now if you look at the middle set, that looks like a random dot stereogram. And the idea here was that the only cue you provide is the stereo cue, nothing else. It's pure. And so what can be done here, you can take a section here-- the same on each side-- and simply move those pixels, as a unit, over by a few steps. And when you do that, they're going to stick out in depth. So now take your stereoscope and look at the middle display. And if you look through it, you should see something sticking out in depth. And the first question I'm going to ask you is how many of you can see something stick out in depth? What do you see? AUDIENCE: A square. PROFESSOR: A little square sticking out? All right, so now, don't try to look at the bottom one. That simply tells you what the procedure was in that center section where you see the square sticking out. The pixels were moved a few steps inward, from both the left and the right, creating what is called a disparity. And that's what the brain then can calculate for depth. So now, to provide you with the acid test, go to the second page. Now you look at the second page-- everybody see the letter on top? You don't even have to look through the stereoscope; obviously you see the letter E, right? That's because that section is made darker. But now if you do the same thing at the bottom, the only cue you have is the disparity cue. And the question comes up, what letter do you see there?
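While you decide, the construction behind these figures-- copy a patch of the random image and shift it a few pixels inward in each eye's copy-- is easy to write down. A minimal sketch, with illustrative sizes and a 2-pixel disparity, and no attempt to draw an actual letter:

    # Minimal random dot stereogram: two copies of the same random image,
    # with a central square shifted a few pixels inward in each eye's copy.
    # Viewed one image per eye, the square floats in depth. The sizes and
    # the 2-pixel disparity are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    size, d = 100, 2                            # image size and disparity
    base = rng.integers(0, 2, (size, size))     # random black/white dots

    left, right = base.copy(), base.copy()
    patch = base[35:65, 35:65]                  # region that will stick out
    left[35:65, 35 + d:65 + d] = patch          # shifted one way for the left eye
    right[35:65, 35 - d:65 - d] = patch         # and the other way for the right
    # Neither image alone reveals the square -- the dots inside the patch
    # are statistically identical to the surround -- only the disparity does.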
And let me just add, this letter version can be used as a quick, general test. You can present these to subjects-- you can present a whole bunch of different letters-- and if they can see the letters, that means they can see stereo. If they cannot see the letters, then it looks like they may not see stereo. Now let me add one other fact here. As you move these sections progressively closer to each other, like this, you increase the disparity. And that causes the image to be seen at increasing depths. Our sensitivity is so great that when you look at this on a computer-- a standard computer-- if you move those images just one pixel, from the left to the right, or from the right to the left, you will see it in depth. And even monkeys can see as small a step as one pixel. So now, how many of you can tell me what was the letter at the bottom there? AUDIENCE: H. PROFESSOR: H, H, good. Anybody not being able to see the letter? Everybody sees it. Well, you guys are lucky. Because there are a significant number of people in the world who lack stereopsis-- something like 5% to 10% of the population lacks stereopsis, for a variety of reasons. We'll talk about that a bit more later. But one is that sometimes you're born amblyopic in one eye. Sometimes you are strabismic, which means that your two eyes are not aligned, which in commonplace language is often called being cross-eyed or wall-eyed. Those types of people very seldom will have stereoscopic depth perception, even after the misalignment is corrected-- especially if the correction is made only by the time you're 8 or 10 years old, it won't help. It has to be done much, much earlier. All right, so that then is the very, very basics of the stereo procedures. And now another procedure that has been developed more recently is one which is called the autostereogram. So then, go to the next page. And what you want to do is to look at this horizontally, like that, with the T on top. And then just look at it at, sort of, I don't know, maybe about 20 inches from you, normal reading length. And what you want to do is to look beyond it. So stare beyond it. And if you keep doing that for a while, you will suddenly see an image, a three-dimensional image. This comes actually from a book called The Magic Eye. There are several Magic Eye books, in which all kinds of displays are done using these autostereograms. Does everybody see-- who can see what's sticking out? OK, what do you see? AUDIENCE: Was it a shark? PROFESSOR: You see a shark? OK. Now let me see if any of you don't see it. Keep staring at it. Look beyond it. Another thing that helps: bring it a little closer to you, so you can look beyond it more easily, and gradually move it back and forth. And if you're patient, eventually you may be able to do this. The reason this is difficult is because you have to uncouple the vergence in your two eyes. You have to look beyond it slightly. And, in fact, that is one of the reasons why, for testing people for stereopsis, an autostereogram is not a very good procedure, whereas virtually everybody can use a stereoscope without any trouble. AUDIENCE: That's so cool. PROFESSOR: Did you get it finally? AUDIENCE: Yeah, that's so cool. PROFESSOR: Yeah, all right, so, if anybody is really interested in these autostereograms, I say go to the bookstore and get one of those Magic Eye books. They're just a lot of fun. And you can just leaf through it. You don't even have to buy the book-- just look through it at the store.
All right, so that's the stereoscope. And now let me explain, as I think I've mentioned briefly before, the principles behind stereoscopic depth perception. What I mentioned to you before was that if you have the two eyes fixating at a particular distance, and you then draw a circle through that point, which is called the Vieth-Muller circle, or sometimes the horopter, then any target on it, like this one here, will hit equivalent points on the retinal surfaces of the left and right eyes. However, if you do the same thing and you put a target either beyond or closer than the Vieth-Muller circle, then it's going to hit nonequivalent points on the retinal surfaces. So from this nonequivalence, one can calculate where the image falls relative to the central fixation spot in the foveola. And so then, when these nonequivalent points are hit, somehow the brain can measure this nonequivalence. And that is then converted into an estimate of where things are in depth. Now the idea behind this was that these nonequivalent points that you have on the retinal surface can connect in the cortex to single cells. So you have a cell in the cortex that is binocular by virtue of the fact that it receives inputs from the left and right eyes. But those inputs don't necessarily have to come from equivalent points. They can come from nonequivalent points. And that may then be the mechanism whereby it can tell you the degree of nonequivalence and, therefore, convert that into depth. And, therefore, there could be single neurons in the brain that are selective to certain depths. And so people began to do all kinds of experiments with this. And the way these experiments were done is you presented images separately to the left eye and the right eye. And you could then present them to both eyes at the same time and vary the amount of disparity systematically to see what kind of tuning function you would get in the cortex. Some of the most beautiful work of this kind was done by a person called Gian Poggio. And I will tell you briefly about some of his experiments. So here we go then. We're going to look at the neural responses in V1, as initially done in the monkey. So here's an example of a cell. We have here different degrees of disparity. And we have the neuron responding; each time there are four repeated trials. And you can see the action potentials by these dark lines here. And what you do is you move the stimuli back and forth across the two eyes. The way it's actually done, you have a mirror, and then in this experiment you have two monitors, one to the left and one to the right. And then you can set it up almost exactly the same way as what you would do with a stereoscope. So this particular cell, as you can readily see, responds best when there's zero disparity. Now by contrast, here is a cell that responds vigorously at the far disparity and not to the close one. So then, when you do this, you can study hundreds of cells to see what kinds of distributions you have in the cortex for different degrees of disparity selectivity.
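To put a number on the nonequivalence just described: for a midline target, the angular disparity relative to fixation is approximately the interocular separation times the difference of the inverse distances. Here is a small worked sketch in Python; the function name and the 6.5 cm interocular separation are illustrative assumptions.

```python
import math

def disparity_deg(target_m, fixation_m, iod_m=0.065):
    """Small-angle approximation of the binocular disparity (in degrees) of a
    midline target relative to a midline fixation point.
    Positive values = crossed disparity (target nearer than fixation)."""
    return math.degrees(iod_m * (1.0 / target_m - 1.0 / fixation_m))

# A target 0.9 m away while fixating at 1.0 m yields roughly 0.4 degrees of
# crossed disparity, far above the stereo threshold of a few arc seconds.
print(disparity_deg(0.9, 1.0))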
Now again, this hearkens back to what I talked about with respect to color vision, where the question came up: if you want to see color, how many receptors would you need that peak at different wavelengths, right? And one idea was that maybe you need as many photoreceptors as there are colors. But in the end, it turned out that we have only three of them. And on the basis of that, we can recreate all the colors out there. Now the same thing applies to stereopsis. So when this was done systematically, here's an example of the tuning functions. This one here is very much the same as the first figure I showed you. And so we have a bunch of different ones. And if you then study, as I've said, hundreds of cells, you can come up with a distribution of these. And what Gian Poggio came up with, he thought that there were four major classes. And the relative amount of activity of these four major classes is then used to compute all the very fine differences in depth. So there's a tuned-zero one; this cell is tuned right at the fixation spot. And then you have near and far cells. And you have some in-between cells. Initially he thought there were four classes. Some people argue that there may be as many as six. But at any rate, there's a limited number, on the basis of which you can calculate an almost unlimited number of depths, which is quite remarkable. All right, so now what you can turn to next is the question of to what degree various extrastriate areas contribute to stereoscopic depth perception. Some people thought that this is a unique function of area MT; some people argued that maybe it's area V4. And so experiments were done in which it was examined to what degree stereoscopic depth perception is altered when you eliminate, say, area MT or area V4. So that is what's been done. And you can think about it for a minute and say, well, what do you think? What do you think would happen in a monkey once it no longer had area V4? What do you think would happen if the monkey no longer had area MT? Well, the results were actually quite surprising. And they're shown here, same experiment as before, the Bela Julesz random-dot stereograms. You present, in one of four locations, a little square that sticks out in depth, and you vary the amount of depth by varying the number of pixels by which you displace the images. And when that was done systematically, this is what was found. It was found that neither a V4 lesion nor an MT lesion caused a significant deficit in depth discrimination. The only deficit that was significant had to do with response latency. And as I should have mentioned earlier, like when we talked about the frog, one of the very important things about processing depth, again, is to be able to do it quickly. So when you have that frog and the fly is flying along, he has to be very quick to compute it so that he can catch it, right, as you had seen. So in this case, what you see here is that there is about a 20 millisecond increase in latency after a V4 lesion, and quite a bit more, 30 to 40 milliseconds, after an MT lesion. So each area contributes to some aspect of depth processing in terms of being able to do it quickly. But neither MT nor V4 is unique in processing stereopsis. It looks like it is processed in several different areas of the brain, and it is by conjoint computation that you arrive at the actual depth. And it's by virtue of that joint computation that you can do this very quickly.
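Going back to those tuning functions for a moment, here is a toy model of a disparity-tuned cell, assuming Python with numpy. A symmetric Gaussian is a deliberate simplification; the near and far cells in these studies were actually broadly and asymmetrically tuned, so the function and its parameters are illustrative only.

```python
import numpy as np

def gaussian_tuning(disparity, preferred, sigma=0.1, base=5.0, gain=40.0):
    """Firing rate (spikes/s) of an idealized disparity-tuned cell."""
    return base + gain * np.exp(-0.5 * ((disparity - preferred) / sigma) ** 2)

disparities = np.linspace(-0.4, 0.4, 9)  # degrees; negative = near, positive = far
for name, pref in [("tuned zero", 0.0), ("tuned near", -0.2), ("tuned far", 0.2)]:
    print(name, np.round(gaussian_tuning(disparities, pref)))
```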
So now you come to the next important depth cue, which is called motion parallax. This is a capacity that we acquired in the course of evolution, which is extremely potent and powerful. And it's based on a very simple physical fact. And the physical fact is that when either you are in motion or something in the environment is in motion, the rate at which images travel across the retinal surface is heavily distance dependent. And so let me demonstrate this concretely for you. Here we have an eye that's fixed. It's always looking straight ahead. And we have two rods here, sorry, just one rod, that we are going to move into position gradually from here to here, and then back, as shown by the arrows here. And we examine the range over which the near, middle, and far objects move across the retinal surface when you engage in this motion and the eye is stable. And you can see that the far object moves over a much shorter distance than the near object. You can readily try this yourself: stick out your thumb and move your head back and forth, and you will see that your thumb moves a lot more than the object that you're looking at. Now the same thing also applies when you actually engage in eye movements, which of course you do all the time. In this case, the eye is set up so that it's fixated initially on this object and then tracks it to here and then tracks it back. And when you do that, you get the same kind of effect, namely that the distances over which a far and a near object move are quite different. The near object moves over a much, much greater distance than the far object, even though the eyes are tracking. So this, then, being a basic physical fact, was used in the course of evolution to create mechanisms that are sensitive to this differential motion. And, of course, because the rate of motion also varies with distance, it became possible to create mechanisms that make that computation to tell you where things are in depth. So here I'm going to show you an actual demo of this to make it clear to you. In this case, again, we have a bunch of random dots, much like in the Bela Julesz random-dot stereograms, but just a single one. And everybody agrees there's no depth here. Is there any depth? Do you see any depth? So now what I'm going to do is I'm going to set this image into rocking motion. And when I do this, almost instantly, you're going to see something in depth. Are you ready? So what you see here are three levels, right, very clearly. In milliseconds, in 20 milliseconds, you can see this. And let me explain to you why you see this. OK, let me go back and do it again. If I keep this stable, you can see that the dots move over a great distance here, a lesser distance here, and practically not at all here. So there is a differential motion. And the greater the motion, the closer the surface is in your analysis. So that's called motion parallax. And then what you can do, actually, is play all kinds of games, do experiments in which you present this kind of image. You can put this into each of the eyes separately. And you can present this image alone, or you can present it paired with disparity for stereopsis. And you can do each separately, or you can do the two together. So let's now first summarize the essence of motion parallax. To derive depth information from motion parallax, neurons are needed that provide information about velocity and direction of motion and perhaps also about differential motion. Secondly, the majority of V1 cells are direction and velocity selective, as we had discussed before, and some appear also to be selective for differential motion, which I did not mention before. But, indeed, there are such cells in the visual cortex. Now, the third important point is that such cells that are motion selective, direction selective, and selective for differential motion are very, very common in area MT.
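The distance dependence at the heart of motion parallax is easy to compute. Here is a minimal sketch, assuming Python, of the angular image speed of a point during sideways head motion when the eye keeps pointing straight ahead; the function name and the numbers are illustrative.

```python
import math

def image_speed_deg_per_s(head_speed_m_s, distance_m):
    """Angular speed of a point's retinal image during sideways head motion,
    assuming the eye keeps pointing straight ahead (small-angle approximation)."""
    return math.degrees(head_speed_m_s / distance_m)

# A modest 10 cm/s head sway: near objects sweep across the retina far faster.
for d in (0.5, 2.0, 10.0):                     # near, middle, far objects
    print(f"{d:>4} m: {image_speed_deg_per_s(0.1, d):6.2f} deg/s")
```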
So those are some of the very, very basic facts. And now we can move on and ask what kind of brain activation occurs with stereopsis and motion parallax in normal and stereoblind subjects, using a more recently developed technique, functional magnetic resonance imaging. So how do you do this kind of stuff? Well, here's an example: you have a very large stereoscope with a mirror at the end. And you have a subject who is lying down. And this whole unit, except of course that part, is put into the magnet. And we have a magnet down here at MIT. Most of you probably have seen it. It's on the ground floor. So you can do this, and then you can present those images here. The stereoscope will present two images, and then you can vary the display, rocking it back and forth, to present only motion parallax, or to present only stereopsis, or to present both. And so now the question is, and this is a very primitive question at this stage, where in the brain are these processes analyzed? And so you can find out what brain areas are active by doing this repeatedly, collecting the fMRI data, and then printing them out and looking at them to see what happens. So I'm going to show you a couple of examples of that. Here is the basic figure that the person sees, but drawn in such a way that you can see it. Of course, he doesn't see anything like this. He just sees different depths. There are one, two, three, four, five, six, seven different depths here. And this rocks back and forth. And then you can, as I say, present this only with differential motion, or you can present it only with disparity, or you can present it with both. And then finally, as a control, you can do the same thing but without any depth of any sort. You just have a flat surface rocking back and forth. And then when you do the data analysis, you actually subtract the last one from the rest of the data, so that you're not looking at the activation produced just by the spots, but at the activation that's specific to stereopsis or motion parallax. So now if you do this experiment, here's an example of a normal subject and a stereoblind subject. And we have here a sagittal cut, adding up the images from the side. And what you see here, this is posterior cortex, of course. Here in the normal subject, when you present only motion parallax, so you only analyze motion parallax, you have a huge amount of activation in the visual areas. And then if you do the same thing with binocular stereopsis only, you also get a great deal of activation, in a quite similar set of areas. And then the big, crucial test comes up. What happens if you present the stereo display under monocular conditions, when you don't see stereo? And if you do that, using the same calculation procedures, there is no brain activation here. And, therefore, what we see here is due, indeed, to the analysis that we do for stereopsis. Now if you do the same experiment in a stereoblind subject, who has been tested on tests similar to the ones I had shown you, when that person looks at it even under binocular conditions, there is no brain activation, meaning that this person doesn't have the mechanisms in the brain to analyze stereopsis.
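As a rough illustration of the subtraction logic just described, and nothing more, here is a sketch in Python with numpy. Real fMRI analyses involve motion correction, general linear models, and statistical thresholding, so the function below, its name, and the toy numbers are purely illustrative assumptions.

```python
import numpy as np

def condition_contrasts(mean_response, control="flat rocking"):
    """Voxelwise condition-minus-control maps; mean_response maps each
    condition name to an array of mean voxel activations."""
    baseline = mean_response[control]
    return {name: resp - baseline
            for name, resp in mean_response.items() if name != control}

# Hypothetical toy data: 5 voxels per condition, flat-rocking control subtracted.
maps = condition_contrasts({
    "parallax only": np.array([2.0, 3.1, 0.4, 1.2, 0.1]),
    "stereo only":   np.array([1.8, 2.9, 1.0, 0.2, 0.1]),
    "flat rocking":  np.array([0.5, 0.6, 0.3, 0.2, 0.1]),
})
print(maps)
```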
Now the fortunate thing is that we have these several different mechanisms for depth perception. And so people who are stereoblind and have no analysis of disparity can still see depth reasonably well. And, indeed, they can get a driver's license and all that, because we have all these other mechanisms, including, as we have talked about, motion parallax. So that then is one way of looking at it. Now the other way to look at it, especially when we ask whether the same brain areas are doing both, is that instead of doing a sagittal section, you can take sections coronally, like bang, bang, bang, bang like that, and see what that looks like. And here's an example in which I've isolated the stereo. And here are a bunch of sections. And this shows the activation for stereopsis. You can see there are all kinds of areas that are being activated. And then if you do the same thing and just look at the parallax alone, here we have the activation for that. And then lastly here, we do both of them. Now we can go back and ask the question: how do these two types of depth-perception analyses differentially activate the brain? And so I'm going to go back and forth between stereo and parallax. You can see the difference. Some regions have a perfect overlap, and some regions are quite separate. Notable here are these areas, which are activated by stereo but not by parallax. So this can provide you with an initial idea that there are some brain areas in which both of these are analyzed together, and there are some brain areas that uniquely analyze either stereopsis or motion parallax alone. Now this tells you where it takes place in the brain. But how it takes place requires a totally different approach, namely, most commonly, to record from individual neurons in various areas, just like that nice work done by Gian Poggio recording from V1, demonstrating that there are tuned, disparity-selective neurons that provide the hardware, if you will, for being able to analyze stereoscopic depth. So that then summarizes what I wanted to tell you about motion parallax. And now we are going to go on and talk about yet another important depth cue that is utilized by the brain, which is called shading. Now remember that our ability to use artificial light to illuminate things is something that was practically nonexistent for endless millions of years. And so because of that, both animals and we ourselves came to rely heavily on information based on light coming from the sun, coming from above. And shading is based on those millions of years of evolution utilizing the fact that most of the light that illuminates things comes from above. So there are all kinds of nice examples of this. And here is one of them. What you can do here is take a bunch of disks and set them up, say on a computer, so that the upper part is light and the lower part dark, or the other way around, the upper part dark and the lower part light. And all of you can readily see that the disks in the first and third rows seem to be protruding towards you, and the disks in the second and fourth rows seem to be receding. Now that is because the brain interprets them on the basis of the fact that light, at least historically, came predominantly from above. So that is the basic arrangement for seeing depth.
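Here is a minimal sketch, assuming Python with numpy, of how such a shaded disk can be generated; flipping the light_on_top flag switches between the protruding-looking and receding-looking versions. The function name and the linear luminance ramp are illustrative choices.

```python
import numpy as np

def shaded_disk(radius=50, light_on_top=True, background=0.5):
    """Disk with a vertical luminance ramp on a uniform gray background.
    Light-on-top disks tend to look convex, dark-on-top disks concave."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    inside = x**2 + y**2 <= radius**2
    ramp = (-y if light_on_top else y) / (2.0 * radius) + 0.5  # luminance 0..1
    img = np.full(inside.shape, background)
    img[inside] = ramp[inside]
    return img

convex = shaded_disk(light_on_top=True)    # appears to protrude
concave = shaded_disk(light_on_top=False)  # appears to recede
```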
And now I'm going to give you some demonstrations to indicate that this cue is actually quite powerful, even when you would not necessarily expect it to be. These shading cues have also been extensively used in artwork to provide an impression of depth. And I will show you some examples that will give you a sense of how that is done. So let me make one more point before I proceed, namely that it is, indeed, the degree of illumination that's crucial here. Here we have the same kind of change, but in hue, from some greenish to yellowish, and you have no sense of depth whatsoever. In other words, you do need the shading information, meaning the amount of light that's being reflected from the various surfaces; that is what's crucial for perceiving depth. So now what we are going to do is present a series of slides that will highlight the power that shading has for the perception of depth. So here is an example of how we do this. And the reason I'm showing this in some detail is because, if you really are interested in stuff like this, you can do all of this on your own computer. You can play games, endless games, with it. You can spend hours and hours having a lot of fun thinking about how depth works on the basis of shading. So what you have here are a whole bunch of disks. And each of these can be shaded differently by many different computer programs. So that's what you can do. That's the basics. So now what we can do is play a game. We present just two different objects here, but we present them repeatedly on a big display. And then we can shade these differently as we please. So here is a whole bunch of them. The first, third, and so on rows are one shape, and the second, fourth, and so on rows are the other shape. So we only have two shapes here, juxtaposed. Now what we can say is, well, this is a peculiar sensation. I have a vague sense that there's something maybe in the third dimension. But it's not too well defined, because this is not in accordance with the rules and laws of shading from light coming from above. So now what we can do instead is selectively shade these to be in accordance with the rules of light coming from above, to create shading and depth. And here's an example of that. What you can see here is a very compelling image of these protruding elements, sort of protruding to the left, right? Everybody see, have a strong sense of depth here? So now what you can do is play with it and decide, well, can we keep the very, very same shapes, shade them differently, and see what it does to our perception of depth? And so what we are going to do next is take each of these elements here, the same ones, and reverse the contrast. You see the contrast here is white on top and black on the bottom. So we're going to reverse that contrast. And when we do so, the question is what are you going to see. And if you do that, lo and behold, you still have a strong sense of depth. But it's a very confusing sense. You may see these objects sometimes pointing to the left and sometimes to the right. It's unstable, because you're confusing those computations that have evolved over millions of years for interpreting depth in terms of shading. Now you can also play some additional games. You can make this even more complicated, make more changes, and here is another one. You still have a feeling of depth.
But it's totally confusing. It's very hard; you can't organize it in any way, because it is not in accordance with the law of light coming from above onto a real object. And lastly, you can also arrange this so that it is in accordance with the laws, but change it around so that you get a completely different perception, still a strong sense of depth. These are still the very, very same elements that you had seen before. But now the shading is done, again, differently. And that now gives you, again, a unified sense of a display that is not conflicting, because in this case it's in accordance with some of the basic principles of shading. So now, having talked about stereo and having talked about shading, what we can do next is look at some more of the demos. So let's go back to the stereoscope. And let's go back to the handouts. Turn now to the next page, which has a heading called stereo and shading. Again, take the stereoscope; we are going to look at these in steps. So let's start by looking at the top display first, which is called stereo only. First of all, if you just look at it without the stereoscope, you see pretty much a flat display of truncated pyramids. Then if you put the stereoscope there and look through it, you should see, if you look at it for a little while, that one of those sticks out towards you and the others seem to recede. Does everybody see that? So let's stop there for a minute, because I want to add one more fact, which I should have mentioned earlier. So you have these two displays like that, one image here and another image here. When the two images of an element are moved toward each other, and this drawing is greatly exaggerated, that means it's going to stick out towards you as you look at it. But if you do the opposite, so that they are further apart than the rest of them, then you actually see it receding. And that's why, if you now take away the stereoscope and look at it, you can see that in the top left pair the images are shifted towards each other, whereas in the other ones they are shifted away from each other. And that's what the brain interprets as protruding versus receding when you use the stereoscope. So that is very similar, in a way, to what happens with shading. So now look at the second display there, first without the stereoscope. What you see here, again, is one element that sticks out, just like in the original display, and the rest of them are receding. That's because of the shading: the one that sticks out is light on top and dark on the bottom, and it's the reverse for the other ones. Now if you do the same thing looking through the stereoscope, what you will see is still some degree of depth, but it's not very pronounced, because there is no corresponding disparity information. But now if you look at the third display, where stereo and shading are in harmony, then what you see is an extremely compelling, dramatic sense of depth, with the top left one sticking out towards you and the other three receding. So shading adds to the compelling nature of the depth that you see through the stereoscope. Now in the last image, we put stereo and shading in conflict with each other. And when you do that, you can look at it first with just one eye, then with the other eye. When you look at it with both of them, for a while you see something unstable.
And when you see it well, eventually, you realize that there is a conflict there, because the shading and the stereo are in opposition to each other. Now then, for this kind of effect, we're going to go now to pages five, six, and seven. Again, what you need to do here is look at it sideways, with the F's on top. And when you look at this, those of you who can diverge your eyes and look beyond it, this is very much like, or actually the same as, an autostereogram. So if you look at this for a while, and you look beyond it, eventually it's going to gel. And when it gels, what you should see is that where the F's are, the images are protruding towards you, and the others are receding. Now it may take you a while. It's much more difficult than what we just did with the stereoscope. But you should be able to see that. How many of you are able to actually see those images? Move it back and forth a little bit, slowly. And maybe eventually you will manage it. So as I say, where the F's are, you see these truncated pyramids protruding towards you. And the rest of them are receding. If you have difficulty seeing this, I'm not surprised, because it takes a lot of practice. But once you get a sense of it, I think that you will enjoy doing this and actually showing it to some of your friends. So then if you go to the next page, there we have added shading. The shading is the same everywhere, but the stereo cues are not. Once again, what happens is that there are stereo cues where the F's are, so those stick out towards you much more than the others, which stick out a lot less, because of the added stereo. And then, in the last demo, on the last page, just like in that figure with the stereoscope, we are putting them in opposition with each other. And so when you look at these, it will be very difficult to see for a while, because there's a tendency to see it differently for the stereo and for the shading. And so it's going to be an unstable percept. So what you can do then is play around with this at your leisure, and especially once you become more proficient looking at all the stereograms, if you go and get one of these Magic Eye books to look at, you will be able to see these displays as well. So now, this is the first one that I showed you. As I said, the ones with the F's are the ones that should stick out closest to you. Once you see that, then you can go on to the next ones, which add the shading or subtract the shading from it. So now, an interesting question that arises is to what degree are we, or animals, able to integrate these different kinds of depth cues. In particular, in this case, we're going to ask the question, what about integrating stereopsis, parallax, and shading? So the experiment is one done on monkeys, in which you can present these cues either singly or in combination. And we can ask the question, well, does the monkey do better with one or the other, or does he really integrate them and do much better when you provide all three cues? And so here is the procedure. Again, you have a rocking display like this, and you can present it either with shading, as shown here, or with motion parallax, where it rocks back and forth, and lastly also with stereopsis. If you do that, the results you get are quite dramatic. What happens is shown here as percent correct performance, and here is the latency in milliseconds.
And it shows that the monkey does extremely well when you present-- this is percent correct; this is degrees of disparity. The monkey does extremely well when you present all three cues and does worse when you present each of those cues alone. Even more dramatic is the fact, and I keep coming back to this, that the ability for us to respond quickly to things is very important for survival. And here what we can see is that when you present all three cues, performance is much, much faster than when you present each of those alone. And, of course, as you might expect, when you present parallax only, because that's motion over time, that takes the longest to do. So even though motion parallax cues are great, it became important in the course of evolution to create mechanisms that can detect these things more quickly and more efficiently. So now we come to yet another cue, one that we know very little about at the level of the brain or single units because it's so complicated, which is called perspective. But I want you to just be aware of it and have a sense of it. And here is one of those cartoon examples that gives you a very strong sense of depth. And you almost cringe. If you were there, you would worry that you would be falling down. This is done strictly by virtue of perspective. It's very similar to what you encounter all the time when you're driving down a road and the road seems to converge, even though you're not aware of it. But that's what's happening on the retinal surface, because things further away are smaller than things that are close by. And when you look down a railroad track, the same thing happens: even though you know that the railroad track is not converging, that it's running parallel, because of the distances involved, that's what falls on the retina. And you're smart enough to know that even though that's what falls on the retina, you can make the right kind of interpretation. Conversely, you can also compute depth on the basis of that kind of convergence. Now here's another example of that, a much simpler one that people can use in experiments. Here we have a bunch of dots. And we have two basic cues that have to do with perspective. One of them is the gradually decreasing size of these dots, I should say elongated disks, if you will, and the other is that they are converging, much like a railroad track converges. And so we have a very strong sense of a third dimension here. Now the fact that this is so strong can be mitigated by changing a few things. If you add some more dots, it's not quite as dramatic. And then if you start mixing up the sizes, you begin to lose it. And then if you totally mix it up, you have no sense of depth left at all. So it is that progression of steps and sizes and whatnot that gives you the sense of depth in the images that you're looking at. Now here's a converse example of this, an illusory effect. What you see here are three barrels, if you will. And this barrel is a lot bigger than this barrel, right? Or is it? Well, what we have here is an inducing element, this hallway, if you will, with a door at the end. And we're going to remove this hallway, keeping the barrels exactly as they are. And if you do that, lo and behold, those barrels are all the same size. The effect is induced by the surround, which gives you a false sense of depth.
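The two perspective cues in that dot display, convergence and size scaling, both fall out of simple pinhole projection. Here is a minimal sketch in Python; the function name, the viewing geometry, and all the numbers are illustrative assumptions.

```python
def perspective_dots(n_rows=8, focal=100.0, eye_height=2.0, dot_radius=0.3):
    """Pinhole projection of rows of equally spaced ground-plane dots:
    x' = f*x/z, y' = f*y/z, and projected size also shrinks as f*r/z."""
    projected = []                      # (x, y, radius) in image coordinates
    for i in range(1, n_rows + 1):
        z = 2.0 * i                     # each row one step farther from the viewer
        for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
            projected.append((focal * x / z,             # columns converge
                              focal * -eye_height / z,   # rows climb to the horizon
                              focal * dot_radius / z))   # dots shrink with depth
    return projected

for x, y, r in perspective_dots(n_rows=3):
    print(f"x={x:7.2f}  y={y:7.2f}  radius={r:5.2f}")
```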
So now let me show you another picture; there is a purpose behind this. This is a picture that's in a museum in Worcester, Massachusetts. And it was created by a fellow called Edward Savage. And it's a pretty unpleasant picture. But the main reason I'm showing this to you is that there seems to be a very poor sense of depth in this picture. Now the reason this is interesting is because when artists began to draw things centuries ago, in the 12th and 13th centuries, they did not have a concept or an understanding of how to create depth, a third dimension, in their drawings. So what they did eventually, they came up with a so-called vanishing point, and they drew very much like what we had here: lines that converge at a point, and then they scaled the images accordingly rather than keeping them the same size. And that way you got a good sense of depth. Now there are a number of interesting stories about that, which we are going to discuss next time, when we talk about the perception of shapes, OK, patterns. But I will leave that discussion until then. What I'm going to do next, however, is try to give you a sense of how important stereopsis can be for the perception of fine depths. And to do that, I'm going to show you a film. Here what we have is a so-called needle test. What you have here is a fine needle protruding. And here we have a bunch of different-size circular openings, a little bit like the eye of a needle, but round. And the task is to take these one at a time and hang them up. And one can time how quickly you can do that, or you can make a film to see how well you can do it. And then what we can do is test the subject under binocular conditions, and test them under monocular conditions. So I'm going to show you a film of this, actually two films. It will take just a few seconds. OK, be ready, it's going to come up in a second. OK, here's the subject under binocular viewing conditions. So that's the condition under binocular viewing. And now I'm going to show it to you, same subject, same time, but with one eye closed off. So even just looking at it, without taking any careful measurements, it's obvious that it's much, much more difficult to thread a needle under monocular than under binocular viewing conditions. And so what you can do when you go home, next time you want to sew something up, is try threading the needle with one eye closed and then with the two eyes open. And you will see immediately what a huge difference it is. And that difference, therefore, is due to your having the mechanism of stereopsis. Just a few seconds here. Another test that has been used in a similar fashion, which allows you to actually calculate exactly what your error is in reaching: you can have a subject sit in front of one of these touch panels, and then do this experiment either binocularly or monocularly. After he presses this, a dot comes up, and then the person has to touch it. And you have about 30 or 40 trials like that. And you have recorded where the person touched. And, therefore, you can calculate the error between where he touched and where the dot is. And then, again, you get a huge effect between monocular and binocular viewing conditions.
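For the touch-panel test, the error measure can be as simple as the average distance between the dot and the touch. Here is a minimal sketch in Python, with hypothetical coordinates purely for illustration.

```python
import math

def mean_touch_error(targets, touches):
    """Average Euclidean distance between target dots and touch positions
    (both given in the same screen units, e.g., millimeters)."""
    return sum(math.dist(t, p) for t, p in zip(targets, touches)) / len(targets)

# Hypothetical trials: binocular errors are typically much smaller than monocular.
binocular = mean_touch_error([(10, 20), (40, 5)], [(11, 21), (38, 7)])
monocular = mean_touch_error([(10, 20), (40, 5)], [(16, 27), (33, 13)])
print(binocular, monocular)
```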
Now when it comes to monocular and binocular viewing conditions, another important thing to test is to what degree a person who does or does not have stereopsis is capable of integrating information between the two eyes. So to do that, we have here examples of what is called binocular integration. For this, again, you can use a stereoscope. You look at a monitor. And this represents the left eye; this is the right eye. And you flash these on. If you integrate the two, this is what you see. This is actually what you would show in the control part of the experiment. So you see the Star of David. And if a subject is shown this and they don't see the Star of David, you worry that their ability to integrate the information between the two eyes is deficient. And I would say in 90% of the cases, those people who are deficient on this also show a major deficiency in stereoscopic viewing. Now another way to do this is an experiment in which you present two words, sud and try, so the two are separate. And when you present them simultaneously, you actually see the word sturdy. So you ask the subject, please tell us, what is the word you see? If the subject says sud, he sees mostly with his left eye; if he says try, he sees mostly with his right eye and prefers it, and doesn't see too well with his left eye. If he says sturdy, then he integrates the two. And, therefore, you can safely say that this guy has very good integration between the two eyes. So what I would like to do next is give you a chance to ask any questions that you have; this was a complicated topic. And then we are going to summarize. Does anybody have a question about motion parallax, stereopsis, and so on? Let me maybe add one more important factor. Your eyes are separated by only so many centimeters. Now, can you think of an animal where there's a much larger separation? AUDIENCE: Hammerheads. PROFESSOR: The hammerhead shark. Yeah, that has a separation of over a foot between the two eyes. And so you could ask the question, why on earth did that animal evolve such a huge separation between the two eyes? Well, that brings one to yet another interesting point. This, I think, may have started during the Second World War. It was realized that when you're flying over some territory where there are all kinds of weapons and whatnot, which are well camouflaged, just looking down at them, you can't see them. But obviously if you have a tank, or a gun, or some other weapon, maybe more like a cannon, it sticks out of the ground. So it was discovered that if you had in your airplane two lenses that are far apart, that would greatly magnify the depth. You could defeat the camouflage, and you could find those weapons down there by virtue of the fact that they're sticking out of the ground. So the fact, then, is that the more you separate the images from the two eyes, if you will, or your two cameras, the better you can calculate the disparity between the two images. So that is probably one of the reasons, not the sole reason, but maybe one of the reasons why, in some animals, there is such a large separation between the two eyes. And that brings me to yet another point, which is that stereopsis actually works best at relatively short distances, like threading a needle. It doesn't work too well beyond, I don't know, 10 feet or so. It becomes progressively less effective. But at short distances, it's very effective. And so I presume that many animals that have to hunt for food are able to utilize the mechanism of stereopsis, because everything is at a close distance when they hunt for food on the ground. And by contrast, when you talk about motion parallax, that works extremely well over very long distances.
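The camouflage story comes down to the fact that disparity scales linearly with the separation of the two viewpoints. Reusing the small-angle approximation sketched earlier, here is an illustrative Python example; the baselines and distances are made-up numbers.

```python
import math

def disparity_deg(baseline_m, target_m, fixation_m):
    """Disparity of a target relative to fixation for a given viewpoint separation."""
    return math.degrees(baseline_m * (1.0 / target_m - 1.0 / fixation_m))

# A gun barrel 5 m nearer than the 100 m ground plane it sticks out of:
print(disparity_deg(0.065, 95.0, 100.0))  # human eye separation: ~0.002 deg, invisible
print(disparity_deg(1.50, 95.0, 100.0))   # widely spaced aerial lenses: ~23x larger
```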
So does anybody have any questions about motion parallax or about stereopsis? Oh, once again, I'm crystal clear, huh? So, therefore, I think it's time for us to summarize what we have covered today. First of all, there are numerous mechanisms that have emerged for analyzing depth. They include the oculomotor cues, which are vergence and accommodation; then the binocular cue of stereopsis; and then the monocular cues of parallax, shading, and perspective. Then, several cortical structures process stereopsis; you don't have one specific brain area that uniquely does this. The number of disparities that are represented in the brain, as studied in area V1 by Gian Poggio, is limited: maybe four, maybe as many as six, but certainly not a large number of them. And so it's analogous to the way things had been resolved for us to be able to process color. Utilizing motion parallax for depth processing necessitates neurons specific for direction, velocity, and differential velocity. Several areas, including V1 and MT, process motion parallax. I did not say this before, but indeed, if you make a lesion in area MT, you do get a deficit in motion parallax, even though you don't get a major deficit in stereopsis. Now, area MT combines the analysis of motion parallax, depth, and flicker. However, these analyses are also carried out by several other structures, as I've already said. And lastly, little is known at present about the manner in which information about shading and perspective is analyzed in the brain. And hopefully, that will be one of the future tasks for neuroscientists. So if any of you ever get involved in neuroscience, this certainly is a big open area that we hope people will start to analyze. So that then is the essence of what I wanted to cover today. And once again, if any of you has a question, please, please don't hesitate to ask. I'll be very happy to answer it. OK, lastly then, did everybody sign the attendance sheet? If not, please come up after the class and sign your name to it. Very good. So next time, we are going to talk about pattern perception. And hopefully you will find that also interesting.
MIT_904_Sensory_Systems_Fall_2013
5_The_Midget_and_Parasol_systems.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So today, we are going to talk about the midget and parasol channels. I will begin by reminding you of what these two channels are, which I very briefly talked about before. Namely, if you look at Golgi stains of cross sections of the retina, looking at just the ganglion cells, it was discovered initially by the pioneers, if you will, of the Golgi stain, namely Golgi himself and Cajal, and subsequently by a fellow whose name is down here, Polyak, who used the same technique to study the retina. And what was discovered, by Cajal initially, and then verified by Polyak using these anatomical techniques, namely the Golgi stain, was that there's one class of cells that have very small dendritic arbors, as you can see here, and another set that has much, much larger arbors. Now, once this was discovered, and subsequently when it became possible to record from individual cells and to study their receptive fields and their basic organization, and also to look at them from a different point of view, it was, first of all, shown that these are two very distinct classes of cells in terms of the dendritic arbors they create. And as I mentioned to you before, these cells are called parasol cells because the dendritic arbors look like an umbrella. And at comparable eccentricities, the midget cells are three times smaller in diameter than are the so-called parasol cells. And then I showed you some data last time indicating that these two types of cells conduct at different velocities to the central nervous system, with the bigger cells having bigger axons that conduct significantly faster than those of the midget cells, and that when you record from all the axons in the retinogeniculate pathway, you get a distribution showing that they form separate populations. Initially, the argument was that maybe this is just one continuous population of cells, but it turns out these are indeed two very distinct classes at the anatomical level as well as at the conduction-velocity level. Then, when it became possible to record from single neurons and to study their receptive fields, it was shown, and I showed you this picture before as well, that you have, first of all, center-surround organization, as discovered by Kuffler; secondly, that the midget and parasol cells are very different in size as far as the receptive fields are concerned, as well as the dendritic arbors; and that the midget cells in central retina receive an input from just a single cone, whereas the parasol cells get a multiple input. And another distinction that was made at that time is that the neuronal responses are such that midget cells respond in a much more sustained fashion when you activate the center mechanism than do the parasol cells, which respond much more transiently. So these, then, were initial cues as to why we have these two systems. And there were all kinds of hypotheses. And then, eventually, in the '60s and '70s and especially in the '80s and '90s, all kinds of experiments were carried out to try to establish just why these two separate systems have evolved.
So now, as the anatomical work progressed further, it was found, as I also mentioned last time, that each cone in the retina-- certainly true for the red and the green cones, which form the midget system-- gives rise to an ON and an OFF bipolar cell. Those are the ones we discussed last time. What I did not discuss last time, and I'm not going to discuss today either, but will the next time when we talk about color perception, is the blue cones. How many of you remember what the frequency of blue cones is in the retina? AUDIENCE: 1 in 8. PROFESSOR: 1 in 8. Very good. Excellent. So that being the case, that already makes them different, because their numerosity is so much lower. And people have done all kinds of recordings, and to this day, it's not quite clear what the connection pattern is. But as I said, next time we will talk about it in some detail when we talk about color. So here, don't get frightened; we will discuss this next time. But at any rate, we have the blue cones. And when we talk about color opponency, which I will explain next time, we think of the opponent color to blue as being yellow, and the opponent color to red as being green. And so the assumption was made that the connections in this case, somehow, must take place so as to create color opponency. And since we have only three kinds of cones in the primate retina-- red, green, and blue-- the opponency here must somehow involve both the red and green cones. So that's the complication that we will discuss next time. And so the assumption was made that we have some so-called yellow/blue cells and blue/yellow cells. And so I just want to keep you puzzled as to what this means, especially what it means with respect to how we can see colors. And that's what we will talk about next time. So now, here we have, by contrast, and we go back here, the midget system. And I showed you the two extremes, which are very clear, and in between, which would somehow be creating the blue system. So that's the midget system, with very small receptive fields and very small cells. Now, by contrast, when you look at the parasol system, what you see, as indicated by the picture I showed you just a minute ago, is that the receptive fields are bigger. The ganglion cell is a lot bigger. And the dendritic arbors are much more extensive. Now, that means, by the way-- I shouldn't even say "by the way"-- that means, very importantly, that the ON and OFF bipolar cells that connect with the ON and OFF cells of the parasol system are also much bigger and have much more extensive dendritic arbors, because they sample many cones instead of just one. But overall, what this means is that when you look at the bipolar cells in the retina, there are about three times as many bipolar cells as there are cone photoreceptors. Now, if you think about this-- somebody remembered last time that in the retina we have about 50 million cones, and we have about 120 to 150 million rods-- and if you consider that, at least for the cone system, we have three times as many bipolar cells (this only applies fully to the cone system), that still means that we have 150 million bipolar cells in the retina. So it's absolutely incredible: in this tiny little thing that you have in your head, which is less than an inch in diameter, you have these millions and millions and millions of cells. Amazing. So this is, then, the arrangement for the parasol system.
And now we can progress and make another point about how clever the wiring became in the course of evolution. The cleverness here is that, as you know, we only have a single layer of photoreceptors, which, outside the fovea, is a mix of rods and cones. But then if you look at the receptive fields of ganglion cells, what you find is that they have overlapping receptive fields for many of the attributes. So you have overlap for ON and OFF, of course; that's obvious from the wiring. But you also have overlap for the midget and the parasol cells. And lastly, there's an overlap even if you just look at the rod input, which is reflected in the fact that the receptive fields become larger during dark adaptation, by virtue of the connections of the rod photoreceptors to the rod bipolars and then to the AII amacrine cells, which then connect with the same ganglion cells as do the cones. So this is an incredibly clever arrangement that enables you to see things extremely well. And it's an incredible feat of wiring to accomplish this arrangement. So now, another important thing to consider, in addition to what I've said so far about the midget and the parasol cells, is their distribution over space. If you look at it going from the center of the eye, meaning the fovea, out to the periphery, the ratio of midget and parasol cells changes dramatically. In the center here, in the fovea, you have a huge difference in the number of cells. But as you get to the periphery, these two types of cells eventually become equally numerous. Now, to show this in relationship to the lateral geniculate nucleus: again, to repeat, in the fovea you have an 8 to 1 ratio; in the periphery, a 1 to 1 ratio. And this is directly reflected in the lateral geniculate nucleus. Everything I've shown you so far about the geniculate was a six-layered structure. But those six layers exist only out to about an 18-degree eccentricity from the fovea. After that, the lateral geniculate nucleus becomes a four-layered structure. And as shown here in schematic fashion, the four layers for the left and right eyes are pretty much equal in numerosity. And that reflects the 1 to 1 ratio. So that makes one think, why do we have this shift? There must be something very important in central vision that the midget system is good for. And in the peripheral retina, the parasol system becomes more important. So we are going to pay attention to that as we examine what the functions are of these two systems. So how do we go about finding out the functions of these two systems? Well, there are several methods. The first one I'm going to tell you about is to record from individual neurons, say, in the cortex, and determine what kinds of inputs they get from the midget and the parasol cells. To do that, you first want to look at the exact projections of the retinal ganglion cells again. Just to remind you, here's the lateral geniculate nucleus, again, in the region where there are six layers. I already showed you this picture before. And I told you that the parvocellular layers project to 4C beta, the magnocellular layers to 4C alpha, and those other cells, the ones in the interlaminar layers, which we'll talk about eventually, project to the upper parts of the cortex. So now we have a good idea that at the input to the visual cortex, meaning in this case, of course, V1, the two systems are separate in the input layers, in 4C alpha and beta.
And then the question comes up, well, what happens when you look at cells above and below the input layers? Do the two systems converge, or what's going on? So that's the kind of question we're going to ask. And we are going to look at this more carefully here to find out just how you do an experiment like that. Well, one thing you can do here, you can use a reversible blocking agent that you inject either into the parvocellular or the magnocellular layers of the geniculate. Xylocaine is something that's used frequently; there are several other agents. And so if you inject that substance into, say, the parvocellular layers here, you render those cells in the geniculate unresponsive. It serves the same idea as the APB we talked about last time, except this is not quite as neat, because it's not as selective a chemical treatment as APB. Secondly, you can do the same in the magnocellular layers and block that region. Now, if you do this kind of experiment, let's first talk about what happens in area V1. Now, this is a very difficult and complicated experiment. Sometimes you can spend days recording from a single animal, because, first of all, you have to put electrodes into the lateral geniculate nucleus, which is way down, preferably into both sets of layers, though many experiments target just one or the other. And then you put an electrode into V1. Now, the next important task, of course, is that you've got to record from V1 in the region to which these cells project. So to find the overlap in the receptive fields that you need when you record here, here, and here, you have to make many, many electrode penetrations until, finally, you have an overlap. And once you have that overlap, you can be sure that when you inactivate this region by injecting the Xylocaine, you can assess what the responses are in V1 before, during, and after the injection. And most importantly, before and during, naturally. So let's look at that. This has been done studying many cells to get an overview of what's going on. And I'm going to show you, initially, just one example of a cell to give you a feel for what that's like. So here we have the cell responding. What you do is you take a bar of light and move it across the receptive field-- brrp, brrp-- for each edge. So those are the two cumulative responses. [INAUDIBLE] That's the normal condition. Now, you inject into the parvocellular portions of the geniculate. And lo and behold, the cell keeps responding. Then you inject into the magno portions. And lo and behold, the cell still keeps responding. But when you inject into both-- bango! There's no response. So that means that this particular cell gets a convergent input from the midget and the parasol cells as they pass through the lateral geniculate nucleus. Now, if one then does a systematic study and records from many cells, what has been found is that some cells do get an exclusive input from the midget cells, some get an exclusive input from the parasol cells, and some are just like the one I've shown you here-- namely, they get a convergent input. So that tells us the very, very basic nature of the input to the visual cortex: to some degree, the two systems are kept separate, and to some degree, they are united. So that's what happens in area V1. So now, the next thing you are going to ask is, well, what happens in other cortical areas?
So here, then, I'm showing you the same schematic of the method. But now you are recording in V4 and MT, doing the kinds of injections in the lateral geniculate nucleus that I just described. When one does this, some interesting results have been obtained. And as always, before this kind of experiment actually tested the question, there were, of course, hypotheses. Some people hypothesized that area V4, which had at one time been proclaimed to be a color area-- and we'll come back to that shortly-- gets input only from the midget system, and that area MT-- since it has motion-selective cells, if you remember-- only gets input from the parasol cells. So, therefore, we can now have the acid test, thanks to some remarkable work that has been done, much of it by John Maunsell over at Harvard, who had actually worked here at MIT before then. And he did an experiment like that. And here's an example of a single-cell recording from V4. This is the magno block, and this is the parvo block; this is before the block, and this is after the block. What you can see here, this cell, again, responds to a bar moving across-- brrp, brrp-- like that. Here are the two responses, shown here as the bar comes across the receptive field. And you can see a vigorous response before you inject the Xylocaine into the geniculate to block, in this case, the magno system, and in this case, the parvo system. This is backwards from the way I usually present it; I usually like to put parvo first. Preference-- it's a bias. At any rate, what happens is dramatic for this particular cell after you blocked the magno system. The cell-- which, we said, has spontaneous activity-- no longer responds to the edges. By contrast, when you do a parvo block, you only get a small effect. There is a reduction for this particular cell, but it's still responding. So that is an example of a recording in area V4. Now, let us go and ask the question, what about MT? Here's an example, same arrangement-- magno first, parvo second, before the block, after the block. In this case, in MT, you get a cell that totally stopped responding to this moving stimulus after the magno block. And here, what you see is that after the parvo block, the response continues. So now, this is just two cells. To be sure that these two cells are generally representative, what you need to do is collect this kind of information from many cells, and then come up with a quantitative statement. So what you can do, for each cell in this experiment, is determine how much the cell fired to the stimulus here and how much it fired here. And then you can get a ratio of those responses. You can turn it into, like, a percentage, or maybe just score it as a 0 to 1 number. And then if you do that for a whole bunch of cells, what you find is shown here. This is a bunch of cells in V4. This is a bunch of cells in MT. And what you can see here is that in V4, first of all, you have a medium degree of blockage, nothing major, but some blockage. But most importantly, you get blockage both for parvocellular and for magnocellular inactivation. By contrast, in MT, what you find is that when you block the magnocellular geniculate, most of the cells are dramatically affected. Only a few cells are blocked as a result of a parvocellular block.
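The 0-to-1 scoring just mentioned can be captured in a very small function. Here is a minimal sketch in Python; the function name and the example firing rates are illustrative assumptions.

```python
def block_index(rate_intact, rate_blocked):
    """Simple 0-to-1 score: 0 = response abolished by the block, 1 = unaffected."""
    if rate_intact <= 0:
        return float("nan")         # cell did not respond even before the block
    return max(0.0, min(1.0, rate_blocked / rate_intact))

# e.g., an MT cell firing 80 spikes/s before and 4 spikes/s during a magno block:
print(block_index(80.0, 4.0))       # 0.05, i.e., nearly complete blockage
```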
So the outcome of this is that the hypothesis pertaining to MT-- that it gets mostly an input from the parasol system-- is correct. But the idea that V4 gets an input only from the midget system is obviously incorrect; it gets an input from both. So to summarize the wiring diagram: what you have here is the eye, of course. And then you have the midget and parasol cells that project, respectively, to the parvocellular and magnocellular layers of the lateral geniculate nucleus, and then they go off to the cortex. And if you remember, in the cortex they terminate in 4C alpha and beta. And then, as I told you, up here you already have many cells above and below the input layers that get a convergent input from the midget and parasol cells. So what happens, in terms of the projections to higher cortical areas-- to talk about V2, for example-- is that you get some cells which are purely driven by the midget system, some cells purely driven by the parasol system, and many cells that are driven by both. So V2 becomes very complicated, because it has different subdivisions-- we talked about that a little bit-- and those subdivisions may receive different inputs from area V1 in terms of whether they are driven by the midget or the parasol cells. Now, most notably, when you look at the projections from V2 and from V1 to area MT, the middle temporal area, what you find is that this is heavily dominated by the input from the parasol cells. Then if you go beyond that, this continues from MT to the parietal lobe. But when you go from V2 to V4, then to the temporal lobe, then the frontal lobe, what you find is that there's a mix of inputs from both of these systems. So it highlights the fact that, indeed, these things are quite complicated. And so we need to turn now to a different method to try to ferret out, beyond just establishing the connections, what on earth these two systems are for. And needless to say, hypotheses were rampant about this. So what I'm going to do now is tell you how one can go about-- and how many investigators have gone about-- trying to determine what the functions of these two systems are in processing visual information. So how do you do that? Well, the way you do that is to use what are called lesion studies. What you can do is selectively block either the parvocellular or magnocellular systems at the level of the lateral geniculate nucleus because, by lucky happenstance, the parvocellular layers get their input from the midget cells, and the magnocellular layers get theirs from the parasol cells. So you can make lesions, then. How do you make lesions? There are a variety of ways of making lesions. That's a huge field. It applies not only to vision but to many, many other areas, to try to make carefully selective lesions in various parts of the brain. Now, it's not easy to do that in the geniculate. So what you have to do is, again, go into the lateral geniculate nucleus with a microelectrode. Once you do that-- you already know the layout of the geniculate-- you can find out where the receptive fields are located, and you can determine whether you are recording from the parvocellular or magnocellular layers. You adjust the depth of the electrode to be in either the parvocellular or the magnocellular layers. And once you have established the receptive field location and are certain about what part of the geniculate you are recording from, you can then proceed to make a lesion. Now, there are a number of ways of making lesions.
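Before moving on, it may help to restate the wiring just described as a simple adjacency map-- purely a mnemonic for the connections named in the lecture, not a complete anatomy.

    # Connections named in the lecture, as a Python dict.
    pathway = {
        "midget ganglion cells": ["LGN parvocellular layers"],
        "parasol ganglion cells": ["LGN magnocellular layers"],
        "LGN parvocellular layers": ["V1 4C-beta"],
        "LGN magnocellular layers": ["V1 4C-alpha"],
        "V1 4C-beta": ["V1 above/below input layers"],
        "V1 4C-alpha": ["V1 above/below input layers"],
        "V1 above/below input layers": ["V2", "MT"],  # mixed inputs
        "V2": ["V4", "MT"],
        "MT": ["parietal lobe"],                      # parasol-dominated
        "V4": ["temporal lobe", "frontal lobe"],      # mixed input
    }

    for area, targets in pathway.items():
        print(area, "->", ", ".join(targets))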
One of them is the heat lesion. You take a metal microelectrode down there, and then you pass some current to make the tip of the electrode hot. You can affect, as you know, maybe a millimeter-sized area or something like that in the lateral geniculate nucleus-- I'm talking about small, small areas. Another alternative is to inject a chemical. One of those commonly used is called ibotenic acid, which has a very nice attribute: when it causes a lesion, the borders are clearly defined. So no matter how you do this, once it is done in a monkey, you can then study the monkey for months on end to see how the vision is affected. And after you've done that, you process the brain and look at the lateral geniculate nucleus to see what the size and location of the lesion was. Now, in some cases, it's "Oh, my god, you did both. Nyah, nyah, nyah." You screwed up. So months and months of work goes down the toilet. But in some cases, you get a good effect. And that, eventually, if you do it several times on several monkeys, can result in a solid publication. So that is the basic process for making the lesions. Now, let me move on and broach the next important topic-- namely, what are you going to study? Since you want to have an open mind, and you don't want to say "oh, yeah, everything in the brain does color" or something like that, instead of just studying color, you have to study many other aspects. So I'm going to tell you about several kinds of tests that have been used. And I will explain each of those to you. But first, I will list them. In the behavioral task that is used-- very important-- you've got to be able to confine the crucial stimulus either to the area that you had blocked or to an area that is intact. So one way to do that, which is the easiest way, is to use a detection task. We already talked about that briefly. The monkey first fixates. That confines his looking to that location. And then you can present a stimulus like this. And if he makes a saccade to it, he gets a drop of apple juice for a reward. Now, that is one procedure. And then, of course, on the next trial, it appears someplace else. On each trial, it's in a different location. And on some trials, you can present the stimulus in the area that had been blocked by the magnocellular or parvocellular lesions or infusions, or in an intact area. And that enables you to compare performance in regions where the monkey's performance is normal, because he's intact, and in those regions where he lacks an input from either the magnocellular or the parvocellular layers of the lateral geniculate nucleus-- meaning from the midget and parasol cells. Now, another task which is used to carry out a more thorough examination of visual capacities is discrimination. In this case, after fixation, you have a whole bunch of stimuli coming on. This is very effective for studying color. And this discrimination task is often called the oddity task. That's an easy way to remember it: "oddity" because this one is odd. All the others, the distractors, if you will, are the same. Only one stimulus is different. And, of course, the monkey has to make a direct saccade to that stimulus to get a drop of apple juice for a reward. So that would be the so-called discrimination task. So these are the two basic tasks that can be used.
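As a sketch of how such an oddity trial might be generated in software-- the position count, the colors, and the function name are all hypothetical, not taken from the experiments:

    import random

    def make_oddity_trial(n_positions=8, colors=("red", "green")):
        """Return the stimulus list and the index of the odd target."""
        odd, same = random.sample(colors, 2)
        target = random.randrange(n_positions)
        return [odd if i == target else same
                for i in range(n_positions)], target

    stimuli, target = make_oddity_trial()
    print(stimuli, "-> correct saccade is to position", target)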
And now we can proceed and ask what kinds of visual capacities should be studied. But let me just say one more thing first, by the way. The way this is done in a laboratory is that you have a so-called performance monitor, where the computer puts a square around each of the dots that will have appeared-- in this case, this would be discrimination-- or where they will appear singly in the detection task. And if the monkey makes a saccade that lands the saccadic eye movement, which you are recording, inside the square area there, then a drop of apple juice is automatically discharged. But if he were to make a similar saccade to here, he will not get rewarded, because this is the only correct position. And on each trial, that's going to be someplace else. Everybody understands the method? It's fairly straightforward. So now we can move on and look at an example of the kinds of lesions this can create. Here is a lateral geniculate nucleus again, the six-layered portion. And you can see this area here: this is a lesion. There are no functional cells here, and it affects only the parvocellular layers. You see here that the two magnocellular layers are normal. So this is the lesion area, which affects a few degrees of visual angle-- in this case, in the lower part of the visual field. Now, here we have a magno lesion. If you just look at this quickly, you say, what lesion are you talking about? But if you look closely, can you see this here? There are no cells here. So this is the region where both magnocellular layers, the ones that get the input from the parasol cells, had been blocked. So that is the block there. This is, then, an example of a successful lesion. And once you do these kinds of experiments-- some of these experiments can take a couple of years, maybe longer-- you end up having some monkeys who have neat lesions like this. Then there is a control. Let me just add this, though I'm not going to show any pictures of it: you can also make, on purpose, a lesion where you block both of them-- both the magno and the parvocellular layers. You just block out a portion of the lateral geniculate completely. Now, that's an important control. For all kinds of experiments, controls are essential. Now, let me tell you-- this is available in the published material that you have been asked to read-- that when you block both magno and parvo, maybe a quarter of the lateral geniculate nucleus, whenever you present stimuli in the area that has a topographic correspondence to the lesioned region, the monkey cannot see a thing. He cannot perform any of those tasks at all. So clearly, whatever effect one gets with these selective lesions-- if you see a deficit after this one or a deficit after that one-- can be ascribed to what the midget and the parasol cell systems do. So that, then, is the procedure. And now we can move on and list the perceptual functions that one wants to test. Think about it for a minute, just quickly. What kinds of functions would I want to test if I'm running this experiment? How can we break down the multitude of things that we have to process into some basic functions? Well, first of all, a very important function, even though it may have now [INAUDIBLE], is called contrast sensitivity. Because light reflects from just about everything that you look at, the contrast of stimuli-- on a white sheet of paper or a gray sheet of paper, or when you look at photographs-- is highly varied.
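A minimal sketch of that reward logic, assuming a square window of hypothetical size around the correct stimulus:

    def in_window(saccade_xy, target_xy, half_width=1.5):
        """True if the saccade endpoint (in degrees) lands inside the
        square window centered on the target; 1.5 deg is made up."""
        dx = abs(saccade_xy[0] - target_xy[0])
        dy = abs(saccade_xy[1] - target_xy[1])
        return dx <= half_width and dy <= half_width

    # One trial: target at (5, -3) degrees; saccade landed at (4.6, -2.8).
    if in_window((4.6, -2.8), (5.0, -3.0)):
        print("correct -> discharge a drop of apple juice")
    else:
        print("incorrect -> no reward")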
And the question is, how well can you see the different levels of contrast? Another one, of course, that's obvious is color. How well can you process color information? And then another one is pattern. We often talk about basic patterns, and the ones I'm going to tell you about mostly will be checkerboards or something like that. Then we talk about texture. Most of the things that we encounter in the world are textured, and so it's important for us to be able to see texture. Then, of course, shape-- that's obvious. Then stereopsis. We will talk about stereopsis in much more detail in a bit. What stereopsis involves, as I've mentioned to you already, is the difference between the inputs to the two eyes. That difference is called disparity, which the brain then interprets as depth. Another one, very important because things often appear only for very brief times, is flicker. And yet another one is motion, because that's so central to our existence. And then brightness. And one thing I haven't mentioned yet is scotopic vision. How well can you see under photopic and under scotopic conditions? So those, then, are the capacities to test. And so let us now look at the first one of these-- contrast sensitivity. This has been extensively studied-- hundreds and hundreds of papers, many of them in humans. And most commonly in these papers, what they did was use sinusoidal gratings. The way you do that, in an experiment similar to the detection experiment, is to present sinusoidal gratings whose spatial frequency and contrast you systematically vary. And when you do that, you get what is called a contrast sensitivity function. The most common plot that you see in many papers looks this way. This axis is spatial frequency-- low, high. And this is contrast-- low, high. And if you systematically study this in humans, interestingly enough, you get a function like that-- meaning that at intermediate levels of spatial frequency, you see best, and at the extreme levels, you don't see quite as well. So one could use sinusoidal gratings in animals. But sinusoidal gratings aren't essential. What you can do instead is present a checkerboard. And so here's an example, same procedure as before. The monkey sees this, makes a saccade to it, and gets his reward. And then you vary the spatial frequency and the contrast of it. So here is one. You can hardly see that, because the contrast is low. So to see this overall, here's an example. Here, we vary the contrast. And here, we vary the spatial frequency. And if you look at that, you can see that in this region-- depending on how far back you are-- you can see the best. Here, it will be less so. And here, of course, it drops off dramatically, just like that curve I have drawn there. So this enables one to generate a so-called contrast sensitivity function in a monkey, in those regions of the visual field that are intact and in those regions of the visual field in which you have either a magnocellular or a parvocellular lesion that selectively blocks the parasol or the midget system. So that's the experiment for studying contrast sensitivity. And if you do that, this is the kind of result you get. This is the monkey's normal performance-- in this case, at four spatial frequency levels. And you go up and down with contrast, just like that curve I drew there. So this is your contrast sensitivity function.
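For concreteness, here is a sketch of the grating stimulus itself, using numpy; the sizes and parameter values are illustrative. Michelson contrast is (Lmax - Lmin) / (Lmax + Lmin), which for a sinusoid about a mean luminance equals the modulation amplitude divided by the mean.

    import numpy as np

    def grating(size_px=256, cycles_per_image=8, contrast=0.5, mean_lum=0.5):
        """Horizontal sinusoidal grating; luminance values in [0, 1]."""
        x = np.arange(size_px)
        wave = np.sin(2 * np.pi * cycles_per_image * x / size_px)
        row = mean_lum * (1 + contrast * wave)   # one row of pixels
        return np.tile(row, (size_px, 1))        # replicate down the image

    faint = grating(contrast=0.05)    # hard to see
    vivid = grating(contrast=0.90)    # easy to see
    print(faint.min(), faint.max())   # 0.475 ... 0.525 for 5% contrast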
And it shows that under normal conditions and after a magnocellular lesion that blocks the parasol system, there is no effect, meaning that the parasol system doesn't seem to be too important for contrast sensitivity. By contrast, there's a huge effect, especially at high spatial frequencies, after a parvocellular lesion that blocks the midget system. So that, in essence, is what happens with contrast sensitivity. Now, let us move on and ask, well, what about color vision? How do you test color vision? I already told you that before. What you do is present, in this case, eight stimuli, one of which is different from the others-- just red and green ones. This is the odd one. The monkey makes a saccade to it, and he gets a drop of apple juice for a reward. Now, if you want to be systematic about it, you can vary the degree of color contrast. But the effect is so dramatic that it was not necessary to do that. So let me show you what the effect was. Here we have a monkey's normal performance when the test element is blue, red, and green. This is what happens after a parvocellular lesion: when you block the midget system, the monkey cannot see colors at all. Just a total loss. Whereas after a magnocellular lesion, his performance is indistinguishable from normal. So this establishes the fact that color vision is handled by the midget system. Now, the fact that this is the case is perhaps not that surprising, because I told you that when you look at the cells in the midget system in central vision, most of them-- at least the red and green ones-- get an input from just a single cone. So just looking at the receptive field organization tells you that that system is likely to be very important for color processing. And that is confirmed by this kind of behavioral experiment. So now, let us look at brightness perception. Now, how is that different from contrast sensitivity? It's different because, in this case, what you do is use a discrimination task like this. And I think most of you can tell that this one is brighter than the others. I purposely made that a small difference, so you can appreciate the fact that on each trial, we can vary the difference between the distractors and the target. And you can generate a curve showing how much brightness difference you need to be able to perceive a brightness difference. So if you do that kind of experiment, one is in for a big surprise. This shows what happens after a parvocellular lesion. And this shows what happens after a magnocellular lesion. When you block the midget system, the performance is unaffected, as is also the case with a magno lesion-- meaning, obviously, that both the midget system and the parasol system process information about brightness, at least at these low spatial frequencies that we used, which I just showed you. Now, the other surprise came when the question became, well, what if you do this not under photopic conditions but under scotopic conditions? And again, there's no effect, meaning that the inputs from the rods and the inputs from the cones must go into both of the systems. Now, the reason that was surprising is that a couple of papers had been published, maybe about 15 to 20 years ago, that claimed that the rods feed selectively into the parasol system, not into the midget system. So this totally disproved that.
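A sketch of how a brightness-discrimination threshold could be read off such data-- the smallest luminance difference at which performance exceeds a criterion. All numbers here are fabricated for illustration.

    # luminance difference (%) -> proportion correct (hypothetical)
    performance = {2: 0.31, 5: 0.48, 10: 0.74, 20: 0.92, 40: 0.99}
    criterion = 0.75

    threshold = min((diff for diff, p in performance.items()
                     if p >= criterion), default=None)
    print("threshold ~", threshold, "% luminance difference")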
And then subsequently, careful anatomical experiments also established that both the small and the large ganglion cells-- the midget and the parasol cells-- receive convergent input from the rods and the cones, as I had diagrammed for you in the previous session. So then, let's go on and look at pattern and texture perception. Let me show you the kind of experiment that's done. This is when you look at patterns. One way to do it is to go back to those same checkerboards, but make them high contrast and have one at a different spatial frequency than the others. And then you can systematically vary the degree of spatial frequency difference between the target and the distractors. So that would be the method used to study this kind of pattern perception. The other one is to look at textures. In this case-- those of you in the back probably can't see this, but those of you up front, can you see this little area here, where the diagonal lines are reversed? So what happens is, first, you just present this whole thing. And then you present the reversed patch within the surround. And when you do that, the monkey has to make a saccade to that location and will again get a drop of apple juice for that performance. And on each trial, that patch appears somewhere else, in one of the four to eight locations in the display. So that's the procedure. And if you do this, you again, luckily, get a very dramatic effect. Here it is. We have here normal performance. Here's the parvocellular lesion and the magnocellular lesion. This is texture, and this is pattern. The pattern task was, overall, much more difficult than the texture one in this case. What you see here that is really dramatic is that when you block the midget system, there's a tremendous loss in your ability to see patterns and in your ability to see textures. So vision for fine detail seems to be central to the processing of the midget system. That, then, is the effect one gets with texture and with pattern. So now, let's move on and talk about stereoscopic depth perception. It's a topic we are going to look at in more detail in a later session. Let me introduce this, first of all, by telling you that stereoscopic depth perception arose predominantly by virtue of the fact that the eyes moved to the front of the head, so that there is a major binocular overlap. And if you talk about a monkey, for example-- or any of the many, many animals that do have stereoscopic vision-- here's an example of looking at a tree, where just about the only cue you have would be based on stereopsis, because all the branches are equally dark. Which branch is in front of which one? It's very hard to tell. If a monkey is going to jump from this branch to that branch and he can't tell where they are relative to each other, the monkey is going to fall down and drop dead. So it is very important for monkeys to have a highly functional stereoscopic system, which they do. And so, how do we study this? Well, the way we study this is to use what is called a random-dot stereogram. Random-dot stereograms were created, once computers became a reality many, many years back, by Bela Julesz. And he came up with the idea that if you use these kinds of random-dot stereograms, there's no other depth cue. So, in other words, you can study just the stereoscopic depth perception aspect of it. Now, why is this? Because what you do here is you look at these two displays: you present this one to the left eye and this one to the right eye.
The way this is done is that you use what is called a stereoscope. I bet most of you have seen a stereoscope. Everybody seen a stereoscope? No? So a stereoscope used to be something that was extremely popular, starting in the 19th century. They created these handheld devices with two lenses that you look through. And then they created a camera that had two lenses in it, at roughly the same distance apart as your two eyes. And so it took a picture, thereby creating two photographs, one from each lens. And then, since the two have a slightly different perspective on what you're looking at, the photographs are put into the stereoscope to look at. The two images are then fused-- it looks like a single image. And due to their disparities, you see real depth. It's really dramatic. And when we talk about stereoscopic depth perception, I'm going to actually bring some stereoscopes here, and some examples, so you can see exactly what that is like. But at any rate, in this case, what you can do is take a little area here, just like before. You take the dots in this area and move them a little bit this way, a few pixels, and you move these a bit the other way. And if it's a square area, you're going to see a square sticking out. And the monkey will then make a saccade to that. But with one eye, he cannot do anything. He can't see a thing, because these random-dot stereograms only give you information about disparity. So you're going to have the monkey make a saccade to these, and you can vary the degree of disparity. And when you do that, what you find, quite dramatically, is that after a parvo lesion, when you block the midget system, there's a tremendous loss. The monkey, especially at smaller disparities, has essentially totally lost his ability to use stereoscopic depth information. There is no deficit, however, after a magnocellular lesion. So therefore, you can safely say that, especially at smaller disparities, it is the midget system that performs this remarkable task of seeing things in depth by virtue of the disparity in the two eyes produced by stereoscopic vision. So that's really a very dramatic effect. Then the next thing we can look at is motion perception. Again, this can be done in various ways, but I will just show you an example of it. So the monkey first fixates. Again, we use random-dot displays here. And then you set a small square area in motion. Ready? I hope. There we go. Or you can do it at high spatial frequency, like that. And then, of course, on each trial, that appears someplace else. And you can vary the velocity, or you can vary the contrast of the display, and see how the monkey performs in the intact portions of the visual field and how he performs in those portions of the visual field that are blocked either for the midget system or for the parasol system. Clear? So let's think about it for a minute. What do you think you're going to get? Here we go. Here we are-- motion detection. This is the parvo lesion. This is the magno lesion. And what you get is a dramatic deficit-- not an all-or-none deficit, but a dramatic deficit-- in seeing motion after the magno lesion, but not after the parvo lesion. So this says that, indeed, the parasol system plays a very important role in motion perception-- not an exclusive role. There is still some performance there at very, very low contrast here. And there's a big effect at very high contrast, but the monkey does better.
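Here is a sketch of that construction in numpy: start with one random-dot field, copy it, and shift a central square sideways by a few pixels (the disparity), refilling the uncovered strip with fresh dots. The sizes and the disparity value are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_dot_stereogram(size=128, square=40, disparity=4):
        """Return (left, right) dot fields; fused, the central square
        should appear at a different depth than the surround."""
        left = rng.integers(0, 2, size=(size, size))
        right = left.copy()
        lo = (size - square) // 2
        hi = lo + square
        # Shift the central square sideways in the right-eye image.
        right[lo:hi, lo:hi - disparity] = left[lo:hi, lo + disparity:hi]
        # Refill the vacated strip with fresh random dots.
        right[lo:hi, hi - disparity:hi] = rng.integers(
            0, 2, size=(square, disparity))
        return left, right

    left_img, right_img = random_dot_stereogram()

Viewed with one eye, either image is pure noise; only the disparity between the two carries the square, which is exactly why these displays isolate stereopsis.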
So we can conclude that the parasol system is very important for motion perception. Now, the next thing we can look at is flicker. And in this case, when you study flicker, instead of using a monitor, people used LEDs. Now, why do you think that is? Why don't they just use monitors? Well, the reason for that is very straightforward, actually. When you use a regular monitor, what is the refresh frequency of the monitor? It's the same as the frequency of the alternating current that shines the lights up here. But what is that in the United States? 60 hertz. And so that's what happens on a regular monitor: every 1/60 of a second, you shift the image. In Europe, the frequency is actually 50 hertz. At any rate, because of that, there's a rather limited range over which you can vary the ON and OFF activity of a flickering spot. If you use an LED, however, you can vary it over a huge, huge range in small steps. So this is what the display looks like that people use to study this. First, the monkey fixates here. And then one of these LEDs will start flickering. And I'll show you the flickering, which is not perfect here on this monitor, but it will give you a sense. Everybody see that flicker? Now, the mean luminance of that flickering location is the same as the yellow lights that you see there. So if it flickers at a high rate, you can't tell that it's flickering, because it's beyond the ability of the eye to resolve it, and then you can't make a saccade to that location. So if you look at that, what you find, after a parvo lesion and after a magno lesion, is that you get a gigantic deficit after the magno lesion. And that fits with what I told you in the beginning-- namely, that the midget cells respond in a fairly sustained fashion when the stimulus comes on, whereas the parasol cells respond transiently, which makes them much better suited for motion and for flicker. So those, then, are the major findings from these various experiments. And so we can summarize what happens after parvocellular and magnocellular lesions. This column here is when you block the midget system; this one is when you block the parasol system. And here are all the various tests that have been used. You can see that for the parasol system, the major deficits arise in motion perception and flicker perception. So those two are very important in the processing done by the parasol system. Whereas for the midget system, you get lots of deficits-- in color vision, texture perception, fine pattern perception, fine shape perception, contrast sensitivity, and stereopsis. So this gives you a sense of what these two systems are important for in processing visual information. And what you can do next is summarize what I just told you-- namely, that the midget system is important for color, texture, fine form, and fine stereo. The parasol system is important for fast flicker and fast, low contrast motion. Both systems are capable of doing brightness; coarse form; coarse stereo; slow flicker; slow, high contrast motion; and scotopic vision. So it is not a simple arrangement. It's a complex arrangement. There is overlap in what both systems can do. So now, the big question comes up: why was it so important to create both of these systems? Well, a number of schemes have been proposed; certainly, the motion part is obvious.
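The monitor limitation is easy to make quantitative. A display locked to a 60 hertz refresh can at best show a flicker of one frame on, one frame off-- 30 hertz-- and only in coarse steps; this Nyquist-style bound is the arithmetic implied above, while an LED driven directly has no such constraint.

    refresh_hz = 60                  # US mains-locked monitor
    max_flicker_hz = refresh_hz / 2  # one frame on, one frame off
    print(f"fastest flicker on a {refresh_hz} Hz monitor: "
          f"{max_flicker_hz:.0f} Hz")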
But one thing one can come up with is a scheme of this sort that has been proposed-- namely, that as a result of these two systems emerging, your ability to process information has been extended. With the midget system, you expanded the ability to see up to high spatial frequencies-- a very, very important attribute. And I should add, the midget system also made it possible for you to see color. Whereas when you come to the parasol system, it expanded the range of your ability to process information in the temporal domain-- your ability to see very fast motion, to see flicker. Now, let me add that this is a very important attribute for animals as well as humans, because one of the common things that has emerged in the course of evolution is what is called camouflage. I think I may have mentioned that once before. That is, when the coloring of an animal-- as has happened in thousands of species-- is made similar to the background, it's very difficult to see that particular animal. It has excellent camouflage. However, as soon as this animal begins to move, its ability to camouflage itself disappears-- and animals have to move. And that's because the parasol system is extremely sensitive to motion. So whenever an animal begins to move, even though it has excellent camouflage, it is predominantly the parasol system that will destroy the camouflage effect. I'm not saying this is necessarily the only hypothesis, but this is one hypothesis that has been advanced to try to account for how sensible it is to have these two systems. Now, of course, you could ask, how come all of this couldn't be done in just one system? Well, there are several reasons for that. To combine a sustained response and a transient response in one cell is next to impossible. And to have cells that are highly sensitive, like the parasol cells, you need a convergent input from several photoreceptors. But as soon as you have that, you lose your ability to see fine detail. So these two requirements are antagonistic to each other, and it's not possible to satisfy both in a single cell at the retinal level. So it was decided-- inasmuch as evolution decides anything-- that it would be best to create two separate systems, however complicated that might be. So that brings me to the overall summary of what I have covered today. First of all, I told you-- that's the obvious part-- that the two major channels originating in the retina are the midget and the parasol. I should add that there are many other channels in the retina, but the cells involved-- and we'll talk about some of those later-- are much less numerous. The overwhelming majority of cells in the retina are either midget or parasol. Then, in the central retina, the receptive field center of the midget cells and the parvocellular cells is comprised of just a single cone. The midget cells and the parvocellular cells go together-- meaning that the midget cells project to the parvocellular layers. And I told you before that the midget cells of the retina have receptive fields which are quite similar-- almost identical-- to those that you see in the parvocellular layers of the lateral geniculate nucleus. Then, the parasol cells have much larger receptive fields than do the midget cells. Their cone input is mixed, both in the center and the surround.
And that is the reason why this system, the parasol system, cannot tell you anything about color, but is very sensitive and can tell you about any change that occurs out there over time, irrespective of its color. Now, the midget-to-parasol cell ratio changes from 8 to 1 in the center to 1 to 1 in the periphery. I should reiterate that it is very important for us to be sensitive to motion in the periphery: if an animal is threatened by some predator, it is highly desirable for it to be very sensitive to any motion. And because of that, you have a proportionally higher number of parasol cells in the periphery than in the center, so the ratio becomes equal in the periphery. Now, the midget and parasol systems converge on some cells in V1. The prime example I showed you, when the two were separately blocked at the level of the geniculate, was a cell that did receive a convergent input. But many other cells receive a single input. V4 receives inputs from both the midget and the parasol cells. So it's not an area that only deals with color, obviously. And what it does deal with, we'll describe in more detail later. The major input to MT is from the parasol cells. And for that reason, MT and MST have often been called the motion areas. But they also play an important role, by the way, in depth perception, as we shall see. And then, the midget system extends the range of vision in the wavelength and high spatial frequency domains. The parasol system extends the range of vision in the high temporal frequency domain. And the scheme that I showed you is what leads to this particular conclusion. So that, then, is the essence of what I wanted to cover today. And I am now certainly open to any questions that you might have about these two fascinating systems that have evolved in the retina over the millions and millions of years of evolution. Well, I was so clear that there are no questions, huh? Well, I hope that you've gotten a sense of what it is like to do these kinds of experiments and how, luckily, at least in some cases, these experiments can lead to nice discoveries about the workings of the visual system. Now, last time, when we talked about the APB, we did have a "magic bullet." This approach here is not as neat, really, because you have to make lesions rather than specifically affecting certain neurotransmitters. And that could not be done, because the neurotransmitters for the midget and the parasol cells are similar. So you can't use the kind of procedure that we were able to use miraculously for the ON and OFF channels. So that, in essence, is what we covered today. Did everybody sign? Any one of you not sign the attendance sheet? If not, please come up here and sign it. And then let me just say again that next time, we are going to talk about color-- another fascinating topic. And we are also going to talk about visual adaptation. And if you get a chance to do so, please try to read the preparatory material, which should make it easier for you not only to comprehend but, especially, to be able to memorize these facts. And memorization is a very important part of learning things in any course. It will, of course, be essential when you get to the stage of having to take the exams. Well, thank you so much.
PROFESSOR: Maybe I should go over real quickly what we did on Monday. We covered the various parts of the auditory periphery-- the external, middle, and inner ears. And we had a paper on the function of the external ear and how it can help you localize sounds: when you distort it by using clay molds that you put in subjects' pinnae, their sense of localization of sounds, especially those at different elevations, is all screwed up. And you can relearn, because you still have some bends and quirky geometry in your pinnae. Even with a little clay in there, you can relearn how to localize sounds in elevation using new cues, but it takes a little bit of time. So the subjects in that paper relearned how to localize sounds. Now, there are other cues that we'll be dealing with for localization of sounds in azimuth. And that's where you use two separate ears. For example, if a sound's off to my right, it's going to hit my right ear first. And then, because sound travels at a finite velocity in air, it's going to hit my left ear second. So there's a delay between the two ears, and that's a primary cue for localizing in azimuth, in the horizontal plane. So one comment I had after class was that that was a good paper on relearning sound localization with new ears. Someone came up and said they had read it in another discussion group. And I think it is a very good paper, because it deals with the function of the external ear, and it deals with how you can relearn new cues-- say, as you grow up and your pinnae get bigger from infancy to adulthood, you have to relearn those cues over time. We talked about the function of the middle ear: securing efficient transmission of sound energy from air into the fluid of the inner ear. And we talked about, of course, the physical characteristics of sound-- what sound frequency is, what sound level is. We talked about simple sounds-- that is, pure tones, or single frequencies-- and more complex sounds, like musical sounds that have a bunch of frequencies, and even speech sounds that have a whole bunch of different frequencies that change a little bit over time. And the change is perceptually very obvious in the form of changing from one vowel to another vowel. And so another comment I got by email-- someone said, well, can you tell me where you're from? Because you have a Midwestern accent. OK, speaking of speech sounds. And that's a very interesting comment, because I've been in the Boston area since 1983. That's 30 years. And I am from the Midwest. In fact, I brought my Midwestern clothes today. I got my hat and my Michigan sweatshirt, which I got out of the rag pile. Or is it my Harvard sweatshirt? Let's see. It says, "Harvard, Michigan of the East." So I guess it's my Michigan sweatshirt. It's got the Michigan colors, maize and blue. And so I grew up in Ann Arbor, Michigan. It depends on who you are-- you have to sort of listen for it. According to my wife, I still have it, and maybe some of you can hear my Midwestern twang. So it's evidence that you have very good hearing, that you can hear not only these different vowels but accents. So it's kind of interesting that I was reading a book just last week. It's called Our Boston.
You can get it at the MIT COOP. And one of the chapters is on regional accents. And I'll do a little reading, maybe just the very beginning of this chapter, which talks about regional accents and linguists. I'll just read you the first sentence. It says, "As most linguists might tell you, regional accents are a lot like underpants. Everybody has them, and usually no one notices his or her own. But the world would be a very different place in their absence." So that's what regional accents are. How many of you guys are linguists, or interested in linguistics? Just a few, OK. Well, of course, it's a fascinating science, and it intersects with the auditory world. So today we're going to launch into the next aspect of the auditory periphery, which is the inner ear. And the scientific name for the inner ear, of course, is the cochlea, from the Greek for snail shell. We're going to talk about the anatomy of the cochlea. This is the cochlea here. We're going to talk about the vibration patterns in the cochlea, because, of course, the cochlear structures are vibrating in response to sound. And the original pioneer in that area was, of course, George von Bekesy, who was a Hungarian physicist who worked for the telephone company, later came to Harvard, and won the Nobel Prize in 1961. So we always put him up on the pedestal, because he is the only Nobel Prize winner from the hearing field. Those of you in the vision field can pull out all these people who've won the Nobel Prize. In the auditory field, there's one winner: George von Bekesy. So we'll talk about his work on cochlear vibration patterns. Then we'll talk about the receptor cells for hearing, which are the inner hair cells and outer hair cells. There are two types, a little bit like the rods and cones you had in the retina. But these have very different and complementary roles, and we'll talk about the separate functions of those two types of hair cells. We'll talk about outer hair cell electromotility-- we'll go into what that is. And finally, if we have time, we'll end up with otoacoustic emissions, which are a very interesting research and clinical tool for testing hearing. So, cochlear anatomy. Here is an early drawing of the cochlea by DeVerny. Back in the 1600s, they already knew that the cochlea was like a snail shell. And in the middle of it, it had a membrane, and the membrane went all the way from the base to the apex. This very basalmost part is called the hook. And then the membrane spirals around and goes to the very apex. And so I have a very simple wire model that I can hold up that does the same thing. I can bend it any way I want to. Here's the hook. You can see this in three dimensions here. OK, so the cochlea is a spiral-shaped structure which has a membrane in it. The cochlea actually has several membranes; the name of the big one is the basilar membrane, which of course comes from "base." And we'll see that the hair cells and the other cells associated with them sit right on top of the basilar membrane. We can get to that in just a second. Anatomists more recently like to cut sections through structures. And that's because the light microscope and the electron microscope can resolve very small things, like cells and parts of cells, very well in thin sections. So if you cut right down the middle of that snail shell and took out a thin section, you'd get this view. Of course, all this gray shading on the outside, and even parts of the inside, is bone. So this is a bony structure.
It's very different from the eyeball, which is all soft tissue. This is a bony structure. In the middle, there is a nerve coming in. That's the auditory nerve. The auditory nerve starts in the cochlea and sends messages to the brain, which is down here-- it's been cut away. You can see the tube of the snail shell, or cochlea, is subdivided by this big membrane, the basilar membrane. And there's another membrane in here too. The name of that membrane is Reissner's membrane, but it's not that important. The picture you get here sort of looks like steps on a ladder. And the Latin name for steps is "scala"-- or "scalae," if it's plural. This means steps. For instance, a musical scale is whole steps, and every now and then a half step. So these, to the early anatomists, looked like steps on a ladder. But actually what they are, are fluid-filled compartments separated by membranes. And there are three fluid-filled compartments here. And this is another cross section of just one turn of the cochlea. This compartment is called scala tympani. This compartment is called scala vestibuli. And this one, which is in the middle, is called scala media. So there are three scalae in the cochlea: scala tympani, vestibuli, and media, or middle. And the hair cells sit on this membrane that separates scala tympani from scala media. And the hair cells here, the inner hair cells and the outer hair cells, are surrounded by a whole bunch of other cells. And that whole receptor organ is called the organ of Corti. So it says that down here, the organ of Corti. And Corti was one of the first Italian anatomists to really draw the structure correctly, although he got it a little bit wrong. These round circles are the outer hair cells. If you cut this in cross section and look at it in a magnified version, there's only one inner hair cell. He put two in his drawing, so it's a little bit incorrect. But his name is now attached to the organ. The organ of Corti is where the hair cells are, and there's a whole bunch of other supporting cells that keep the hair cells in their place. OK, so the receptor organ sits on the basilar membrane, and the organ of Corti is there. There's this other membrane that separates scala media from scala vestibuli. It's called Reissner's membrane, but it's not important. OK, so that's the structure of the organ of Corti. And during the 1930s and '40s, people started thinking about, well, how does this thing move when you stimulate the ear with sound? And George von Bekesy, as I've said before, was the first to make really good measurements of the motion of the basilar membrane in response to sound. And this is a cutaway diagram of his experimental setup. This is the cochlea in cross section. Of course, he had the whole cochlea there. And he used cochleas from cadavers. He would go to the mortuary, get a temporal bone, drill away all the bone, and be left with the cochlea, which he put in his Petri dish. He had a huge sound source that he would apply right to the oval window. He would take out the stapes and drive the cochlear fluids directly. And as we said last time, if you have sound in air and are trying to get it into fluid, you really have to crank it up. And so it's a huge sound source. And he stimulated way up at 140 dB, way at the top end of the curve we had last time. He needed to drive it that high not only to get sound into the fluid but also so that his measurement system could pick up these movements. The movements tend to be very, very small.
In response to sound, the basilar membrane may be moving only a few nanometers. OK, but it's going to move a lot more nanometers if you blast the heck out of it. And his observation device was simply a microscope-- a water immersion lens that he brought down and focused on the basilar membrane to see the thing move. So he didn't have very good resolution. Maybe he could only see micrometers of movement, so he had to turn up the sound very high to get the thing to vibrate visibly. So we can criticize his experiments now for several reasons. Number one, we don't listen way up at 140 dB. In fact, those kinds of sounds are actually damaging to the hair cells. But we can also say there weren't any hair cells. It's a dead preparation-- a cochlea from a cadaver, OK? And what we know now is that the movements and vibration patterns of the cochlea are very different in a dead preparation than they are in a living preparation. We'll get to that in just a minute. In spite of those problems-- I mean, von Bekesy was working in the '30s; he didn't have very good measurement devices, and he had to use what he had-- he discovered some very important things. One of the things concerned the basilar membrane. Here's a diagram of the basilar membrane. It's this horizontal line here, if there's no sound. And it's stretched out now, as if you took the snail shell and unwound the coil and made it a long, straight basilar membrane. So the base is over here, and the apex is up here. And we think that that doesn't change the vibration pattern at all. The only reason the cochlea is coiled is to save space in your head. Otherwise, it would be pretty long. So unwinding this is just a convenient way to look at the vibration pattern. He measured in the intact, coiled cochlea, of course. The pattern of vibration that von Bekesy observed was called a traveling wave. OK-- traveling waves in the cochlea. And basically what that means is that these are snapshots-- one, two, three, and four. At one instant, the wave pattern looks like number one. At the next instant, it moves, or travels, and looks like number two, and so on and so forth, three and four. So the traveling wave starts in the base and travels up to the apex. And remember, the base of the cochlea is where the input comes in. If we go back to our first diagram here-- remember, the stapes is pushing in and out at the oval window. And that's way down at the base of the cochlea. And what we're seeing is these vibrations traveling from the base all the way up to the apex, taking a little bit of time to do so. So that's what von Bekesy discovered: there's a traveling wave. The second thing he discovered concerns the peak of the traveling wave. You notice these dashed curves draw the envelope, or the maximal point of movement, of this basilar membrane. There's a peak of this traveling wave, where the basilar membrane is vibrating the most. That peak changes as a function of the frequency of the sound that you use to stimulate your preparation. And this diagram shows that, but I have a little better movie, or demonstration, that shows you a traveling wave in a way that's a little bit easier to understand. So I actually have two of them. These were made by Chris Shera at Mass Eye and Ear Infirmary. And again, this is the basilar membrane stretched out in a long line from the base of the cochlea up to the apex. And this is the situation with no sound. So now we're going to turn on a sound that is a low frequency tone.
So tone is synonymous with pure tone, or a single frequency. So it's just going to be one frequency. And the input comes in here at the stapes. And you'll see the pattern of movement of the basilar membrane in response to the low frequency tone. Whoops, pressed the wrong button. It takes a little bit of time for this to develop. And then you can see the complex pattern, or traveling wave. It seems to start here and go up to here. It goes up to a peak and then quickly comes down. There's almost no movement right at the apex there. So one thing about that movement, as I said before, is that there's a peak of vibration at a certain place along the basilar membrane. The other thing is that any point along the basilar membrane that's moving is moving as a function of time. And actually, if you knew the frequency of that tone and looked at the frequency of movement here, it would be the same. So the basilar membrane is moving up and down at the same frequency as the sound stimulus. Maybe that'll be a little more clear when I go to the next one, because the next movie shows the basilar membrane vibration for three tones-- the low one that we just saw, a middle frequency one, and a high frequency one. And as you can probably tell from my hand movements, the low frequency tone is going to vibrate maximally here, toward the apex; the middle frequency is going to vibrate maximally here; and the high frequency is going to vibrate maximally over here. OK, so there's going to be sort of a frequency analysis along the length of this basilar membrane. The other thing I want you to notice when the movie is playing is how fast these three places along the basilar membrane are moving-- see if they're moving at the same speed or at different speeds. So this is the one we had before. This is the middle frequency, and this is the high frequency. See how much faster this is going back and forth than the slow one, and see that it's peaking in a different place? OK. So this is, we think, due to the physical characteristics of the way this membrane is set up. For example, down here the membrane is fairly tense, and it's very short in width. It's like those little tiny strings way up at the right end of the piano, for the high notes. And it's naturally going to vibrate at high frequencies. When you get up toward the apex, everything gets real wide in terms of the membranes. The cells get bigger. Everything's heavier, and it's naturally going to vibrate slower. And it's going to vibrate best for low frequencies. OK, so there's this analysis along the length of the cochlea: high frequencies are mapped here, mid frequencies are mapped here, and low frequencies are mapped here. There is, if you will, a place code for sound frequency. So sound frequencies are broken up into a place code-- low frequencies apically, and higher and higher frequencies basally. Secondly, if you buy into our timing of the movement, there's also a timing code, in that high frequencies are vibrating much quicker and low frequencies are vibrating much slower. And as we'll learn in a week or two, the nervous system keeps track of both of those things. There are hair cells down here in the base of your cochlea which are connected to nerve fibers that innervate only them and are only active when there's a high frequency sound stimulus. They send that message to the brain. The brain says, ah, here are some high frequency, or high pitched, sounds.
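The lecture gives the frequency-to-place map only qualitatively. For a hedged numerical illustration, a standard formula from the human literature is Greenwood's function, f = 165.4 * (10^(2.1x) - 0.88) Hz, where x is the fractional distance from apex (0) to base (1); the constants are an outside assumption, not course material.

    def greenwood_hz(x_from_apex):
        """Greenwood's human cochlear map (constants from the literature)."""
        return 165.4 * (10 ** (2.1 * x_from_apex) - 0.88)

    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"x = {x:.2f} from apex -> about {greenwood_hz(x):7.0f} Hz")
    # Runs from roughly 20 Hz at the apex to roughly 20,000 Hz at the base,
    # matching the low-apical / high-basal place code just described.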
Conversely, there are hair cells up here that respond only when the basilar membrane is stimulated with a low frequency. And their nerve fibers only send action potentials to the brain when there's a low frequency sound stimulus. So what are examples? Everybody knows what a low pitched and a high pitched sound is. But for example, one of the ways you tell a male voice from a female voice is that female voices have higher pitches to them-- higher frequencies, maybe an octave higher. So usually, even over the telephone, you can immediately tell you're talking to a female speaker, because the frequencies are higher and they're mapped more toward the basal end of your cochlea. Male voices that have lower frequencies are mapped to more apical positions. Any questions about that? Now, how do we, in modern times, well beyond von Bekesy, measure these motions? You're sort of taking it for granted from me that we can make these measurements. Well, von Bekesy used an ordinary light microscope, and he really had to crank things way up to see the structures moving. And we want to measure them down near where we usually hear-- 20 dB SPL, 40 dB SPL. So what's used is, onto the basilar membrane, you put some sort of source. In the 1980s and '90s, people were using radioactive sources and the Mossbauer technique. They knew that the radioactive particles being emitted had a certain frequency, and when the basilar membrane was moving toward you, the frequency would be shifted one way versus when it was moving away from you. Nowadays, in the '90s and 2000s, people are using spots that they put down-- little reflective disks placed onto the basilar membrane that reflect light. You shine a light down onto the basilar membrane, and that spot reflects the light back to you. You can compare the light you sent down with the light that's reflected back. If the membrane is moving away from you, that light will be shifted to a longer wavelength; if it's coming toward you, to a shorter wavelength. And this process is called interferometry. And if the source is moving, like the basilar membrane is moving when you stimulate it with sound, you can use a laser to shine the light, and you can use a Doppler interferometer-- which is a device that measures the shift in light when there's a moving light source or a moving light reflection. You can calibrate this in terms of the displacement or the velocity of the object doing the reflecting-- it's shifting the light, basically. So that's how most modern experiments measure the movement of structures, even though they're very, very small movements, in terms of nanometers. So let's see-- these are data from the 1980s using the Mossbauer technique, a radioactive source. So instead of reflecting light, you're measuring the emitted frequencies of particles from a radioactive source at one point on the basilar membrane. And you're stimulating the basilar membrane with sound of different frequencies, and you're measuring how much the basilar membrane moves. I think this says displacement. Is that what it says? It says amplitude in dB. This is a displacement scale-- how much movement you're getting as a function of sound frequency for one particular point on the basilar membrane. So you change your sound frequency. Start with low frequencies at 20 dB-- a pretty low level sound. You don't find much displacement. Go up to 6 kilohertz, you find a little bit more movement.
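As a rough worked example of the laser-Doppler idea-- with an assumed laser wavelength and a made-up frequency shift-- light reflected from a surface moving at velocity v is shifted by about delta_f = 2v / lambda, so a measured shift can be converted back into a velocity and, for a tone, a peak displacement:

    import math

    wavelength_m = 633e-9      # helium-neon laser line (an assumption)
    delta_f_hz = 3.2e3         # hypothetical measured Doppler shift

    v = wavelength_m * delta_f_hz / 2
    print(f"surface velocity ~ {v * 1e6:.0f} micrometers/s")

    # For sinusoidal vibration at f0, peak displacement = v_peak / (2*pi*f0):
    # nanometer motions at kilohertz rates still give measurable shifts.
    f0 = 16e3                  # a 16 kHz tone, as in the data coming up
    print(f"peak displacement ~ {v / (2 * math.pi * f0) * 1e9:.1f} nm")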
Go up to 10 kilohertz, it's getting bigger. And then all of a sudden, at about 16 kilohertz, you get a huge amount of movement. You're right at the peak of the traveling wave. Go up to 18 kilohertz, and it goes way back down again. And at 20 kilohertz there's no point plotted, because you can't get the thing to move at all. It's a very sharply tuned function of movement in terms of sound frequency. This place is only moving-- or mostly moving-- for a sound frequency of 16 kilohertz, given where the source was. Change the sound level to 40 or 60 or even 80 dB-- 80 dB is a sound that is certainly within conversational speech; it's a high level sound, but it's not painful or damaging at all. In that case, the function is much broader. If you just looked at that function, you'd say, well, this point on the basilar membrane is responding to many sound frequencies. It's not responding to 20 kilohertz, but it's responding to everything below that. And if you were to go up to the levels that von Bekesy used, 140 dB-- they didn't here, because they wanted to take care of their preparation and not damage it-- it would be completely flat. And von Bekesy found that the tuning of the basilar membrane was very flat, or just broadly tuned. But these modern measurements show extremely sharp functions for a single point of movement on the basilar membrane. This is a plot of the same data, but plotted a little bit differently. This axis is still sound frequency, and now we're going to plot it on the x-axis. It's still a dB scale, but now on the y-axis it's threshold in dB SPL. So what's threshold? Well, in this case, it doesn't really make sense. I think it would've been better to label this "criterion displacement," because the criterion displacement used for this lowest curve is 0.35 nanometers. And what the plot now shows is how much sound level we have to crank into the system to get this point on the basilar membrane to vibrate 0.35 nanometers. OK, in the case of 16 kilohertz right here, at the lowest point, we only had to put in 10 dB-- not very much sound at all. At 14 kilohertz and at 19 kilohertz, we had to turn the sound up a little bit. At 20 kilohertz, we turned up the sound as far as we could but could never get it to vibrate 0.35 nanometers, so there's no point plotted there. At 5 kilohertz, we had to crank up the sound to about 60 dB to get this point to vibrate 0.35 nanometers. So you're asking the point to give you some specified amount of vibration-- a criterion of vibration-- and you're plotting this function here. And this is a very important curve. We'll be seeing it many, many times the rest of the semester. It's called a tuning curve. So you should all be familiar with how this tuning curve is generated. You can make a tuning curve for a recording from a hair cell. You could make a tuning curve for a recording from a nerve fiber. Let's say we put an electrode in the auditory nerve. You guys have talked about this technique called single unit recording, where you put an electrode in a nerve-- say, the optic nerve-- and you measure the spikes coming from the single axon that you're recording from. And you might say, well, I'm going to turn the sound up until I get 10 spikes per second. That's my criterion. And I'm going to change the frequency. How much sound level do I have to stimulate the ear with to get the auditory nerve fiber to fire 10 spikes per second? Well, at 16 kilohertz, I hardly have to put in any sound at all.
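A sketch of how such a tuning curve is assembled: for each frequency, find the lowest sound level whose measured displacement reaches the 0.35 nanometer criterion. The displacement table below is fabricated for illustration.

    CRITERION_NM = 0.35
    LEVELS_DB = [0, 10, 20, 30, 40, 50, 60, 70, 80]

    # displacement[freq_kHz][level_dB] -> nanometers (hypothetical)
    displacement = {
        5:  {60: 0.4, 70: 1.0, 80: 2.5},
        14: {20: 0.4, 30: 0.9},
        16: {10: 0.4, 20: 1.2},
        19: {30: 0.5},
        20: {},                     # never reaches criterion
    }

    def threshold_db(freq):
        for level in LEVELS_DB:
            if displacement[freq].get(level, 0.0) >= CRITERION_NM:
                return level
        return None                 # no point plotted at this frequency

    for f in sorted(displacement):
        print(f"{f:2d} kHz -> threshold {threshold_db(f)} dB SPL")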
But at 2 kilohertz, I have to really blast the thing. So this auditory nerve fiber is very tuned to the sound frequency. It's sharply tuned, OK, to a frequency near 16 kilohertz. You could make a tuning curve from a receptor cell by saying how much receptor potential do I want? Let's say one microvolt of response, or receptor potential, from the hair cell. What sort of sound level do I need to dial into the ear at these different frequencies to get that criterion receptor potential? Is that clear? So these tuning curves are omnipresent throughout the auditory pathway. And they're a measure of the sharpness of tuning. They're a measure of whether one place on the basilar membrane or one auditory nerve fiber can listen to 16 kilohertz and ignore 20 kilohertz. It's very important: if you're trying to know, in a pattern of harmonics, that harmonic number two is missing, you'd better listen to number two with a very sensitive and sharply-tuned function. And you have that in the auditory system starting with the vibration of the basilar membrane. What are these receptor cells for hearing? So the receptor cells for hearing are called hair cells. That's kind of a funny name, right? But they get their name from the fact that they have hairs coming out the top of them-- at least, it looked that way to the early neuroanatomists. So these hairs are more properly called stereocilia. And let me write that down here, stereocilia. And even that more proper term is kind of a misnomer, because it has as part of it "cilium." That's really a misnomer. What is a cilium on a cell? Does anybody know? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah. And they can propel single cells through a medium, right? And they're wavy and floppy, and if you look at them in cross-section in the electron microscope, they have these sort of 9-plus-2 tubular arrangements. The stereocilia do not look like that at all. They should be called stereomicrovilli, because when you look at them in cross section you see all these filaments, but there's no such organization to them. And you can see the filaments right here in longitudinal section going up into the stereocilia. And these keep the stereocilia very stiff. So when sound comes and moves a stereocilium, it's sort of like a telephone pole in a hurricane. It pivots around its base, but it doesn't flop around. And the bases of these stereocilia are anchored right where they insert into the top of the cell. They're very stiff structures. Furthermore, they're all attached one to another. So all the stereocilia on a hair cell tend to move together back and forth in response to sound. And especially, they're attached, of course, at the base, but they're also attached at the very tips in something called tip links. And I think I have a picture or a diagram. So here are some tip links that connect the very tallest part of one stereocilium to its next tallest neighbor stereocilium. So the whole bundle of stereocilia moves together in a rigid, pivoting way. And the whole bundle is usually just called, in jargon terms, the "hair bundle" at the top of a hair cell. And I have some numbers for you. How many stereocilia are there in the hair bundle of a given cell? So there are 40 stereocilia for one inner hair cell. We haven't talked about what these are, but these are in your cochlea. You have inner hair cells and outer hair cells. And there are about 140 stereocilia in the hair bundle of a given outer hair cell. OK, there are many stereocilia per cell.
In this cross section, you're just seeing three rows, but there are many per row. And here's a whole big hair bundle all together. Now, why are these exact numbers important? Well, they aren't that important, except we think that the channel that opens up when this hair bundle moves in response to sound is located right at the tips of the stereocilia. And most people believe that there is one channel at the tip of each stereocilium, which would suggest that each outer hair cell has 140 channels and each inner hair cell has 40 channels. Now, what are these channels? Sometimes they're called the transduction channels. What's transduction? Anybody? What's a transducer in engineering terms? Anybody? What does it mean to transduce something? Well, you can have-- yeah? AUDIENCE: [INAUDIBLE]? PROFESSOR: Exactly right. So you can have an accelerometer, which converts acceleration to an electrical signal. In this case, the physical energy of sound is mechanical energy. It's movement, movement of the tympanic membrane, movement of the ossicles, movement of the basilar membrane up and down. These cells are sitting on the basilar membrane. And the hair bundle is moving back and forth. But the code of the nervous system-- we've been talking about spikes-- somehow you have to get that mechanical energy into the electrical energy of, in the case of hair cells, the receptor potentials, and in the case of the nerve fibers, the action potentials that are going to send messages to the brain. So the transduction channel is the channel that responds to mechanical energy and allows ions to flow into the hair cell-- that is, it opens up. And in the case of these transduction channels, it allows in positive ions, mostly potassium, because there is a high concentration of potassium in the fluid of scala media. Remember, these cells are sitting in between scala tympani and scala media. So scala media is up here. The transduction channel opens, and it's relatively non-selective among positive ions, or cations. The big concentration here is potassium, so potassium is probably the ion that flows into the hair cells when the transduction channels open up. And the evidence for that here is in part from Hudspeth's lab. He probed around the hair cells while the hair bundle was moved back and forth. And he found big potentials up near the tips of the stereocilia and very small potentials in the rest of the cell. So there's very good evidence that the transduction channels are at the tips of the stereocilia. We don't know what the transduction channels are. People are actively working on that. So there's a guy at Harvard in the Department of Neurobiology, Jeff Holt, who thinks the transduction channel is this so-called transmembrane channel, TMC, of which there are two varieties, TMC1 and TMC2. It's not clear if that's actually the channel or something associated with the channel. But when you knock it out, the hair cells don't respond anymore. The animal is deaf. OK, so these transmembrane channels may be the transduction channel, but the jury is still out. OK, so let me go back to our generic hair cell. We've talked about the stereocilia here and their movement. That's the input end of the hair cell. The middle part of the hair cell has a nucleus. It has a membrane. It has mitochondria. And down here, you might think of this part as the output end of the hair cell. This is where the hair cell is giving its information to the associated nerve fibers.
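As a recap of the input end in code form: a common textbook simplification-- not something stated in this lecture-- treats each tip-link-gated channel as a two-state (Boltzmann) element, so the fraction of open channels rises sigmoidally with hair-bundle deflection. All the parameter values below are invented; only the channel counts come from the numbers above.

import math

def p_open(deflection_nm, x0_nm=20.0, s_nm=10.0):
    # Fraction of transduction channels open at a given bundle deflection.
    # x0_nm and s_nm are made-up midpoint and slope parameters.
    return 1.0 / (1.0 + math.exp(-(deflection_nm - x0_nm) / s_nm))

CHANNELS_PER_OUTER = 140   # one channel per stereocilium, as suggested above
CHANNELS_PER_INNER = 40

for x_nm in (-40, 0, 20, 60):
    p = p_open(x_nm)
    print(f"deflection {x_nm:+4d} nm: p(open) = {p:.2f}; "
          f"~{p * CHANNELS_PER_OUTER:.0f} of 140 channels open on an outer hair cell")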
And there are, interestingly, two types of nerve fibers. The one you would instantly think of is the afferent nerve fiber, the auditory nerve fiber, where the hair cell is sending messages to the nerve fiber. And how does one cell send a message to another in the nervous system? Well, there's a synapse, right? So there's a hair cell to nerve fiber synapse. And that's right here. And this hair cell then releases transmitter, because this is a chemical synapse. And it uses a neurotransmitter. Right, everybody knows what neurotransmitters are. The neurotransmitter here is probably glutamate. And so that's an excitatory neurotransmitter. OK, so the hair cell then releases the glutamate. It goes through the synaptic cleft. There's a glutamate receptor that it binds to on the auditory nerve fiber. The auditory nerve fiber is depolarized or excited. It starts to fire action potentials. And then these action potentials are sent down the auditory nerve and into the brain. The brain says, aha, there's a sound. Now interestingly, there's another kind of nerve fiber associated with many hair cells, and it's called an efferent nerve fiber. So what's the difference between an afferent and an efferent? Anybody? AUDIENCE: Directionality of the cell? PROFESSOR: That's right-- which one goes which way. We've said this one is going-- the afferent is going that way. The efferent nerve ending is going the opposite way. And most of this nomenclature comes from the reference being the brain. The brain is where the action is, right? The cochlea's way out here. So anything that's going out from the brain is efflux or efferent, so signals that are going from the brain out to peripheral structures-- like the hair cells-- travel by way of efferent nerve fibers, right? Another efferent type of nerve fiber would be a motor neuron, a motor neuron sending messages from the brain out to the periphery to contract the muscles. That'd be another type of efferent nerve fiber. So interestingly, the hair cells have efferent nerve fibers attached to them, and they form synapses on the hair cells. And you do not see this in the visual system. You do not see efferent innervation of the receptor cells or of the retina at all in vision, at least in mammalian systems. You do see it in lower vertebrates. But in mammalian systems, the auditory hair cells get an efferent innervation. There are hair cells in another part of the inner ear, too. We talked about the vestibular system and the semicircular canals. That is a hair cell organ, and those get efferent nerve endings from the brain. And in the sides of the body in amphibians and fish, there's a lateral line system of hair cells that allows the fish to detect currents and motion of the water. And those are hair cell-based receptor organs, and they get an efferent as well as an afferent innervation. So it seems like wherever there are hair cell-based systems, the brain sends messages out to the hair cells as well as getting information from the hair cells. So we'll have a lecture later on what these efferent nerves are doing. I mean, why would the brain want to control the auditory periphery? It's an interesting question, and maybe there are several answers to that. Certainly, the brain wants to know what's happening to the hair cells. So these afferent nerve fibers are sending messages when the hair cell is stimulated. So that's the main pathway going into the brain. OK, we've talked about the transduction channels.
We haven't talked about these little things called tip links. One idea is that these links between the tips of the stereocilia are sort of like a rope on a trap door. And when the stereocilium moves, it opens up the trap door, and that's what opens the ion channel. These little tip links are something you see in the electron microscope. They're very, very fine. When you use some chemical treatment to dissolve those tip links, the hair cells don't work anymore. They don't respond to sound anymore. So there are some other proteins up there associated with the transduction channels called tip links. That's a very active area of research now-- what are the transduction channels? How do the tip links work? How are they mechanosensitive? OK, now let's talk. We've been alluding to the two types of hair cells, right? Outer and inner hair cells. And all mammals have these two types of hair cells, outer and inner hair cells. Birds have hair cells that are also of several classes. They have what are called tall and short hair cells. They're a little bit different than outer and inner hair cells. Reptiles generally have one type of hair cell. So the inner and outer hair cell distinction is true for mammals mainly. So how do they get their names? Well, if you look down on the top of the organ of Corti-- so this is my wire model. So this is like the cochlea we've been looking at. Now, turn it so you're looking down from the apex down. On top of this membrane, what you would see if you looked at one little tiny piece is a row of inner hair cells going from the extreme base to the apex, which would be this row right here. And three rows of outer hair cells, going from the base to the apex, and it turns out that the inner hair cells get their name because they're on the inner side of the spiral. And the outers are on the outer part of the spiral. So the inners are toward the center, or toward the axis of the cochlea. The outers are away from it. And this view, looking down on them, would be looking down onto their stereocilia. And these white structures are the hair bundles-- the tips, if you will-- of the stereocilia. And there's one, two, three, four, five, six, seven, eight and a half inner hair cells here. And in the first row of outer hair cells, there's one, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve. There's a dozen outer hair cells. And their stereocilia are lined up differently. They're sort of like inverted V's on the outer hair cells. And there's the first row, the second row, and the outermost row, the third row of outer hair cells. And this is a stereotyped pattern. Almost all mammals have this. In the human, if you go up into the apex of the cochlea, sometimes a fourth row of outer hair cells starts to form, but it's not very well organized. It's sort of patchy. But you can have a fourth row of outer hair cells. If one of these were missing, the supporting cells nearby form a little scar in its place. And we'll talk about things that kill hair cells in a few weeks. Loud sounds can kill your hair cells. Infectious agents-- meningitis, for example-- can kill your hair cells. Certain drugs, aminoglycoside antibiotics like kanamycin can kill your hair cells. And the supporting cells thereby just fill in. And you can go and look at a cochlea that has some hair cell damage. You can count, you can see how regular the array is of hair cells. And you can count how many are present there and how many are damaged. 
And because this regularity is so beautiful, you can say, for example, that there's a 50% loss of outer hair cells in row one. Unfortunately, once they're killed in a mammal, they never come back. You cannot regrow your hair cells once they've been killed. In birds, they grow right back. Takes about three or four weeks. So you can kill all the hair cells in a bird cochlea and come back three or four weeks later, and they're all grown back. In mammals, they don't grow back. And so there's a lot of interest-- because you have these agents that kill hair cells-- there's a lot of interest in, how can we make our hair cells grow back after they've been killed? And people are thinking of stem cell approaches or neurotrophic drugs that might be good. So why don't they grow back? We don't know that at all. We do know some related things. For example, I'm in a department called ENT-- Ear, Nose, and Throat. Lots of the surgeons in our department deal with cancers of the head and neck, right? Because there are cancers that can come up, and surgeons can take care of that by taking cancers out. There are no known cancers that grow in the inner ear. It never, ever happens. So maybe these cells are so far differentiated-- they've become this classic inner and outer hair cell distinction-- they're so specialized that they can't grow back. They can't multiply. They can't form a cancer. But unfortunately, once they're destroyed by some agent like a drug, they can't grow back. So a way to put new cells in there that could become new hair cells, or a way to encourage these supporting cells on the sides to grow and become hair cells, is a very interesting idea, but one that's just in the research phase now. OK, so that's the difference between inner and outer hair cells. If you cut them in the other dimension and look at them in the longitudinal plane, inner and outer hair cells look completely different. An inner hair cell is a big fat cell. It comes up to a neck and bulges out a little bit where the hair bundle is, way up at the top. At the bottom of the inner hair cell, there's lots of afferent nerve endings. Maybe as many as 20 per hair cell. There's a few efferent nerve endings, but they're usually not on the hair cell itself. They're on these afferent terminals. The efferents come in and end on the afferent terminals. The outer hair cells, by contrast, are completely different. They're long, test-tube-shaped cells. Down at the bottom, they also have nerve terminals. Most of the nerve terminals in this case are efferent nerve terminals. How do we know whether they're efferent or afferent? Well, if you look at them in the electron microscope-- I'll just draw a quick diagram. Here's the hair cell. Here's a nerve terminal coming up. And here is a whole bunch of synaptic vesicles in the hair cell, ready to be released when the hair cell is stimulated with sound. Obviously, the transmission is going that way. Here's a nerve terminal. Here's a whole bunch of synaptic vesicles. None over here in the hair cell-- they're all in the nerve terminal. When the message comes down from the brain, a whole bunch of these vesicles are released onto the hair cell. OK, so by just looking at the nerve terminals in the electron microscope, you can get an idea of which direction the transmission is going, and so label them as either afferent or efferent. On the outer hair cells, there are many, many efferent nerve terminals.
In fact, in the 1970s, it became clear that almost all afferent nerve fibers, the ones that were sending messages to the brain, were associated with the inner hair cells, and only about 5% of them were associated with the outer hair cells. This was a big mystery for a while, and people didn't believe it at first. They said, well, all the information going into the brain, or 95% of it, is coming from the inner hair cells. Huh, that's kind of funny. There are actually more outer hair cells than inner hair cells. So what's going on? Somebody screwed up the counts, right? So it was done over and over again, and the counts came back correctly. So how do we interpret that now? Well, about the same time, in the early 1980s, a very interesting property of outer hair cells was noticed. It was noticed that outer hair cells are actually motile. They can move. And this was first discovered by Joe Santos-Sacchi and others in the early 1980s. And this discovery was made when they put a hair cell in a fluid of high potassium, lots of potassium here. There are some potassium channels in the sides of the cell. Potassium went in. When you have positive ions coming into a cell, the cell depolarizes. It might have started out at minus 80 millivolts. In the high potassium solution, it may have gone to minus 50 millivolts or maybe even 0 millivolts. Bill Brownell and Joe Santos-Sacchi, when they saw this happen, they saw the cell shrink. It was, let's say, five micrometers in length before. They put it in the potassium solution, it became four. Take it out of the high potassium solution, put it in a regular solution, it lengthened again. OK, here is a graph of those data. In this case it's a more elegant experiment where they actually measured the potential inside the cell by putting an electrode into it. So you can run this out to your amplifier and measure the electrical potential, in millivolts, of the inside of the cell. And by passing current into or out of the cell, you can move this inside potential wherever you want, whichever way you want. And in this case, this x-axis in millivolts is the potential inside the hair cell. The ordinary potential is about minus 80 millivolts, about right here. Minus 180 is a huge hyperpolarization of the cell. And 0 up to plus 40 millivolts is a depolarization of the cell. Does everybody understand what we're doing here? We're changing the intracellular potential of the cell in terms of its millivolts. And then we're looking at the change in length. As you depolarize the cell, the length change goes from 0 to negative values. That's a shortening, in micrometers, of the length of the outer hair cell. This is the basal end, where the nerve terminals are, and this is the apical end, where the stereocilia would be. OK, so these hair cells can actually move. You can do this experiment with a muscle fiber and get the same result. It's a very different process, but when you depolarize a muscle cell, it contracts by actin and myosin means. This was very surprising, though, to see this in a sensory cell. Sensory cells aren't supposed to contract, right? But it turns out that they do. They can contract. Now, that means this outer hair cell-- and I should say outer hair cell; these are outer hair cells, because when you do the same experiment with an inner hair cell or any other cell in the body, you don't get a contraction. So it's peculiar to outer hair cells. Make sure you know that. Outer hair cells are the ones that are motile.
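Here is a sketch of the kind of voltage-to-length curve just described, using a saturating (Boltzmann-like) function. The qualitative shape-- depolarization shortens the cell, strong hyperpolarization brings it back near resting length-- is the point; the particular numbers are invented for illustration.

import math

def length_change_um(v_mv, max_shortening_um=1.0, v_half_mv=-40.0, s_mv=25.0):
    # Saturating voltage-to-length relation (invented parameters):
    # less negative V (depolarization) -> negative length change (shortening);
    # strong hyperpolarization -> back near resting length.
    return -max_shortening_um / (1.0 + math.exp(-(v_mv - v_half_mv) / s_mv))

for v in (-180, -80, -40, 0, 40):
    print(f"V = {v:+4d} mV -> length change = {length_change_um(v):+.3f} micrometers")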
This process became known as electromotility. OK, so when the inside voltage is changed, the length of the outer hair cell changes. So let me give you a demonstration of this. I have a nice demonstration from Joe Santos-Sacchi, who is now at Yale School of Medicine. And he made this demonstration. And it's kind of a clever demonstration, because you can see the hair cell, and it will be moving because Joe has patched onto the hair cell with his electrode. And the electrode-- usually, in these kinds of experiments-- is put on a micromanipulator, and it's bolted to the table. And because that patch pipette has impaled the cell, that part of the cell is just pegged, so it's not going to move at all. That's the very bottom of the cell. And you'll see a little bit of a ghost-like image of that. The rest of the cell is the long part of the hair cell, and then the stereocilia, or hair bundle, right up at the top. And that part is free to move. Now, what Joe has done is he's depolarized and hyperpolarized the cell using his amplifier. But he has made that signal synced to a musical signal, OK? So you'll hear music on the soundtrack and you'll see the hair cell moving. The music is not directly stimulating the hair cell by moving its stereocilia, as it normally would be when we listen to it. Instead, that music is just in sync with the electrical signal manipulating the inside of the hair cell. So I'll play that now. Once again. OK, so here's the hair cell. This is the shadow of the electrode holding this end still. This is the main part of the cell. These are the stereocilia right up there. OK. So apparently Joe, when he lectures to medical students, says there's a reflex that goes directly from your hair cells to your dancing feet. And apparently, the medical students believe him. Anyway, that is clearly a demonstration that this electromotility is much faster than, for example, the contraction of muscle cells. Muscle cell contraction-- for a muscle cell in a dish, or even a hair cell in a dish-- can be very slow. This electromotility is happening at audio frequencies, right? Some of those sounds were oscillating thousands of times per second. So in the early debate about electromotility, there were some questions about how fast it is and whether it really can drive movement of the hair cells at audio frequencies. And I think that kind of demonstration clearly shows that it does. So what good is this? How does this help us in the sense of hearing? And what good would it be to have a motile sensory cell? Well, the idea is that these hair cells-- here again are the inner hair cells and the three rows of outer hair cells. Sound comes into the ear. It moves these membranes. It moves the hair bundles. Motion of the hair bundles opens up these transducer channels, which allow ions to come in and depolarize the outer hair cells, let's say. When the outer hair cells are depolarized, they shorten. OK, when the sound phase reverses, the hair bundle moves the other way, the channels close. The hair cells go back to their normal length. And this goes back and forth, and the outer hair cells are getting longer and shorter. Somehow, that motion, that mechanical energy, adds to the vibration that was initiated by the sound in a sort of an amplification mechanism. So you then have more vibration. You then have more bending of the stereocilia. You have more depolarization of the hair cells. You have even more electromotility-- sort of a positive feedback loop here. The outer hair cells then are sometimes called the cochlear amplifier.
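One crude way to see why a positive feedback loop like this amplifies: if each trip around the loop multiplies the vibration by some gain g less than 1, the contributions sum to 1 + g + g^2 + ... = 1/(1 - g). This is only a caricature of the real cochlear mechanics, but it shows how a loop gain just under 1 buys a lot of amplification.

import math

def closed_loop_amplification(loop_gain):
    # Summing successive trips around the loop: 1 + g + g^2 + ... = 1/(1 - g).
    assert 0.0 <= loop_gain < 1.0, "g >= 1 would make the loop unstable"
    return 1.0 / (1.0 - loop_gain)

for g in (0.5, 0.9, 0.99, 0.999):
    a = closed_loop_amplification(g)
    print(f"loop gain {g:5.3f} -> amplification x{a:7.1f} ({20 * math.log10(a):5.1f} dB)")

Note that loop gains of 0.99 to 0.999 in this cartoon would correspond to 40 to 60 dB of amplification-- numbers we will meet again shortly.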
The outer hair cells amplify the vibration patterns set up by sound in the cochlea. So the outer hair cells are sometimes nicknamed the cochlear amplifier. So what good is a cochlear amplifier? Well, you have this ordinary receptor cell over here. It doesn't change its length at all, but it has all the auditory nerve fibers linked to it. Now, instead of its stereocilia moving just a little bit, it has an amplifier sitting right next to it that amplifies the motion of all these membranes. And the inner hair cell stereocilia are really now waving back and forth. They then send their messages to the auditory nerve fibers, which send their axons and messages into the brain. Your brain says, I hear sound. Even when it's a very soft sound, like a pin dropping-- which, without the cochlear amplifier, would be inaudible-- that pin dropping, that very small mechanical motion, is amplified. And the inner hair cells then say, oh, yes, I do hear the sound. So the function of these receptor cells is very different than what you have in vision, where you had rods and cones, right? Rods and cones mediate different types of vision. In the cochlea, the hair cells work together. The outer hair cells are the cochlear amplifier, making the vibration bigger and the whole system more sensitive. The inner hair cells then are the major receptor cells that are sending their messages to the brain, OK? So it's really a different kind of two-receptor sense, if you will, compared to vision. And that's what this diagram is supposed to be. This is a very fanciful diagram. And there's a lot of hand waving here associated with how the cochlear amplifier really works. This is an unraveled cochlea from the base to the apex, and this is how much displacement you have-- von Bekesy's traveling wave envelope, if you will. This dashed line is what would happen if you just put sound in and there was no amplifier. And this enhanced solid line is when you have the outer hair cells working their cochlear amplifier magic. And apparently, the active region where the outer hair cells are most active is just basal to the peak of this traveling wave. And how do we know that? How do we know that the outer hair cell cochlear amplifier is very important? Well, there are actually situations when you can lose your outer hair cells and still have a pretty intact inner hair cell population. So in an animal treated with the aminoglycoside kanamycin, it turns out that the outer hair cells are a little bit more sensitive to the kanamycin than the inner hair cells are. So if you treat with just the right dose, you can find a place in the cochlea where there are intact inner hair cells and where the outer hair cells have all been lesioned. Basal to that, for example, all the hair cells are gone. And apical to that, none of the hair cells are gone. So there's a certain region of the cochlea, with just the right dose of kanamycin, where you just have inner hair cells without the outer hair cells. And in those areas, you have a huge hearing loss. You're not deaf. The inner hair cells are still there and they're sending their messages to the brain by their auditory nerve fibers. But there's a big hearing loss, because you have lost the function of the cochlear amplifier. And those experiments were done in the 1970s and '80s. And they were criticized by saying, well, anytime you do some lesion treatment, you say you have normal inner hair cells and lesioned outers. Well, you don't really know the inner hair cells are normal. Maybe the drug affected them. They're still there, but they're screwed up.
So recently, a much more elegant way of doing that same sort of experiment has come up. And this is the paper, the research paper, that is assigned reading for today's lecture. It turns out you can knock out the cochlear amplifier by knocking out a particular protein in the outer hair cells and have the outer hair cells still there. And this work started out with the discovery of a protein that's in the membrane of the outer hair cells. So if you look at the outer hair cells, there's a lot of protein in the membrane. And the protein is found pretty much nowhere else in the body and was given the name prestin. Now, you amateur musicians out there, when you play a piece-- right?-- at the beginning of the piece, at least for classical music, the composer gives you an Italian word that says how fast you should play it, right? And if it says largo, you're supposed to play it really slowly, right? Or adagio, slow, right? What's the marking or Italian word for "fast"? AUDIENCE: Presto. PROFESSOR: Presto, right. And this protein was named prestin because, at least at the time it was discovered, they had the idea that, oh, OK. Maybe it's the cochlear amplifier protein and it makes these outer hair cells shorten and contract really quickly, very fast. So we'll call it prestin. It turned out that that was true, and here's some of the evidence in support of that. You can knock out the gene for prestin. So a knock-out is an animal in which a particular gene is either removed or made so it doesn't make the protein. Part of it's deleted, and so the protein is not made. You can make a knock-out mouse where the prestin is knocked out. And that's what was done in this paper. OK, and in that knock-out mouse, they looked at a whole bunch of things. They looked at the electromotility of the outer hair cells. So they took outer hair cells and put them in a dish and looked at the kind of movements we saw in that video. And in this trace, this is minus minus, which is the geneticists' lingo for a knock-out. Both copies of the gene for prestin are gone. This trace is the plus plus, the wild type, or normal. And there are big changes in outer hair cell length when you depolarize and hyperpolarize them. And these big changes are on the order of half a micrometer. So this is a length axis. In the knock-out, the outer hair cells don't change length at all when they're depolarized and hyperpolarized. In the heterozygote, which is the case when you have one copy of the prestin gene intact and the other copy knocked out, it's an intermediate result. OK, so the outer hair cell motility is knocked out by knocking out this one protein. So that's pretty good evidence that it's involved in the cochlear amplifier. What else was measured? Well, they wanted to measure hearing sensitivity. And in humans, what you might do for that is put a subject in a soundproof chamber and say, raise your hand when you hear the sound. You could do that in mice, but it takes a long time to train mice or other experimental animals to do those behavioral tests. So they did an electrophysiological test. They measured what's called the auditory brain stem response, or ABR. OK. And that top right graph gives you the ABR threshold in dB. So something that has a really low threshold is a really good hearing animal. This is the wild type and the heterozygote.
And something that has a very high threshold means you really had to crank up the sound to get any kind of response at all, as you see in the open symbols for the knock-out. So how do they measure that ABR? You could do it in animals or humans as a clinical test. Put electrodes on the surface of the skin. You turn on a click or a tone burst. In this case, they used tone bursts of different frequencies. And you can imagine that the auditory brain stem is way down in the head, and you're measuring on the surface. So you've got a click, click, click. And you measure thousands of responses. So there's a lot of noise. There's noise in the room. There's noise from other neurons in the brain. But eventually, after thousands of averages, that little tiny signal comes out of the noise, and you get a response if the brain stem is responding-- that is, if the hair cells are responding, the nerve fibers send messages into the brain stem, and the brain stem finally responds. It's a very good test of auditory sensitivity. And what does it show? It shows that without prestin, in the knock-out animal, you have a huge hearing loss. How big is the hearing loss? Well, it looks like it's about 40 to 60 dB. So when you have prestin knocked out, you have a hearing loss of 40 to 60 dB. How much amplification does the cochlear amplifier give you? 40 to 60 dB. You're not completely deaf without it, but you have a severe hearing loss. Most of you would not be able to understand what I'm saying with a 60 dB hearing loss, unless you were sitting right up here in the front. What else do I want to say about this paper? Not too much. There are some problems with it. Any paper has problems. They found that, for some reason, all the hair cells were lost in the high frequency, basal part of the cochlea. It's just a problem in some strains of mice that they lose hair cells in a certain part of the cochlea. So within these gray bars, you can't conclude anything. Now, they looked at the hair cells. They just took them out and looked at them in the microscope. And they said, wow, in the knock-out, the hair cells are actually smaller. Well, you cut out all this protein from the membrane in the knock-out, right? So if there's a lot less membrane there, they shrink. The outer hair cell membrane is packed with prestin. So you could argue, oh, all the hair cells are shorter, and so they're not working the same way. Every paper has its problems, but that's what these graphs mean. The hair cells at rest are actually shorter in the knock-out. So just some caveats. I think the main message is prestin is essential for the cochlear amplifier. And without it, you have a big hearing loss, 40 to 60 dB. Now, one of the other things they measured in here is something called the distortion product otoacoustic emission. And let me just, as the last thing in today's lecture, tell you about what an otoacoustic emission is. These were discovered about the same time as outer hair cell electromotility by David Kemp, who's at University College London. And he was doing some kind of hearing tests in people, and he developed a very, very sensitive microphone that had very low electrical noise. He stuck that microphone in an ear canal. And what did he find? The microphone actually picked up sound in the ear canal. Oh my gosh, this is crazy. There's not supposed to be sound coming out of the ear; you're supposed to be putting sound into the ear, right? So he named it the otoacoustic emission. "Oto" means ear, "acoustic" means sound, and "emissions" means coming out-- sound coming out of the ear. This was an amazing discovery, and it fits very nicely with the idea that there's something in the ear that's actually moving, that being the outer hair cells. The outer hair cells are moving either spontaneously-- and there are some otoacoustic emissions that are spontaneous. About half of us have spontaneous otoacoustic emissions in our ears. Now, before you get too excited and go home and listen to them, they are very, very low levels of sound. Most of them are below the audiometric hearing curve for human hearing. So you really, in most cases, are not aware of your otoacoustic emissions. This is very different from the sensation that some of us have of ringing in the ears, tinnitus. Does anybody have tinnitus? I have tinnitus, especially in my left ear. If I close my left ear, often I can hear kind of a noise. Put my head on the pillow at night, I can hear a little noise in there. So that's a sensation that I have even though there's no sound going into my ear, and it's not an otoacoustic emission. You could put a microphone in that ear, and there is no sound there. Something in my brain is telling me that I am hearing a sound. Some people are very disturbed by tinnitus. There's no good treatment for it. Historically, the famous treatment was by an ear surgeon who said, OK, I'll cure your tinnitus. And he took out the person's inner ear-- the cochlea was taken out. The tinnitus didn't change one bit. Maybe it's like phantom limb pain, something to do with your central nervous system. Tinnitus and otoacoustic emissions are completely different. Otoacoustic emissions are associated with normal hearing, normal outer hair cell function. They're sometimes used as a clinical test for hearing in patients who can't raise their hand. For example, in most states, like Massachusetts, you have to, by law, test newborns for good hearing. An otoacoustic emission test is one such test. So how does that work if only 50% of people have them? Well, there are other types of otoacoustic emissions that are evoked-- that is, you put sound in, and you listen for the sound coming back out. Some of these are transiently evoked. You put a click in, and a few milliseconds later you get a sound coming back out. And that's the usual clinical test. And 100% of normal hearing humans have these so-called evoked otoacoustic emissions. Now, what would be the indication if you had a patient with no otoacoustic emissions? Well, it is a good test of whether your middle ear and inner ear are working as far along the pathway as the outer hair cells. Beyond that, it doesn't test. It just tests up to the electromotile part of the hearing organ. So it doesn't test the inner hair cells. It doesn't test the auditory nerve fibers. But many hearing problems arise in the outer hair cells, so it's a pretty good first step. It's a very easy test to do. And I think when we have the lab tour over at the Mass Eye and Ear Infirmary at the end of the semester, we'll be seeing some otoacoustic emissions recorded from a human. There's a project going on there now. So we'll have a demo of that. OK, so if there aren't any questions, we'll meet up again on Monday.
MIT_904_Sensory_Systems_Fall_2013
21_Sound_localization_2_Superior_olivary_complex_and_IC.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Last time we talked about processing of cues that are used for binaural localization of sound. Those being the interaural time and interaural level differences. And we have those cues because the ears are physically separated on the sides of our head. And because of the physical characteristics of sound, for example, the velocity of sound in air. And we talked toward the end of last time's lecture about the neural processing of interaural time differences in the Medial Superior Olive, or MSO. And we talked about the Jeffress model and how it can help recreate a neural mapping that has ITD sensitivity. And that's the subject of the assignment for this year. So I just put the text for the assignment up here, just to mention that there was something added on at the end. But the first paragraph of the assignment is basically the lecture and the sketch of the Jeffress model. Except that these days, people are thinking that the axonal delay lines are not how the delay is created in the medial superior olive. And so I ask you for two other possible neural mechanisms that weren't originally postulated by Jeffress that could create delays. So that's sort of the heart of the assignment. The second paragraph talks about some further updates to the Jeffress model. And there is a paper that is attached to last time's lecture, the paper by [? Brandt ?] et al, where they discuss extensively some challenges, as they call them, to the Jeffress model. And I might call them amendments to the Jeffress model, but things that weren't originally postulated by Jeffress that have come to light because of more recent experimental studies that don't fit with the original version and their updates. And finally, we've been talking quite a lot about cochlear implants. And there's a very nice passage on cochlear implants in the textbook that I encourage you to read. And the final paragraph of the assignment is, what problems would cochlear implant users, even those with a left and a right cochlear implant, have if they use the Jeffress model to localize sounds? So that's a little bit of thinking to solve that last problem, too. But based on what we have talked about and what the textbook talks about for cochlear implants, you should be able to come up with an answer for that. So any questions on the assignment? It relates very heavily to last time's lecture on the Jeffress model and the MSO. And I guess-- it doesn't say here, but three to five pages would be appropriate, I think. And it's due on December 4, which is the day of the lab tour. So on that class day, which is a week from Wednesday, we'll meet at the Massachusetts Eye and Ear Infirmary instead of meeting here. So we'll send you a reminder. And I think now on the website there are directions to get to Mass Eye and Ear Infirmary. It's just a simple one-stop on the Red Line subway to get there. So the assignment will be due then. And then I'll be able to grade it before we have a review session. And we'll talk about the correct answers for this assignment at the time of the review session, which is the next class after December 4.
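Since the assignment leans on the Jeffress model, here is a toy version of the coincidence-detection idea in Python. Everything in it is a cartoon-- the spike counts, the jitter, the 30-microsecond coincidence window, and the use of axonal delay lines (the very assumption the assignment asks you to replace)-- but it shows how the detector whose internal delay cancels the external ITD ends up most active.

import numpy as np

def jeffress_responses(itd_us, internal_delays_us, jitter_us=20.0,
                       n_spikes=200, duration_us=100_000.0, seed=1):
    rng = np.random.default_rng(seed)
    source = rng.uniform(0.0, duration_us, n_spikes)        # phase-locked source spikes
    left = source + rng.normal(0.0, jitter_us, n_spikes)    # left-ear spike times
    right = source + itd_us + rng.normal(0.0, jitter_us, n_spikes)
    out = []
    for d in internal_delays_us:
        # Internal (here, axonal) delay applied to the left input; count
        # near-coincidences with the right input within a 30-us window.
        diffs = np.abs(np.subtract.outer(left + d, right)).min(axis=1)
        out.append(int((diffs < 30.0).sum()))
    return out

delays = np.arange(-600, 601, 100)
resp = jeffress_responses(itd_us=300.0, internal_delays_us=delays)
best = delays[int(np.argmax(resp))]
print(f"external ITD = +300 us -> most active detector has internal delay {best} us")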
So today, I want to march into new things. Now, we're going to talk about neural processing of interaural level differences. Remember, if a sound is off to one side of my head, it's going to appear at a higher sound level in the ear facing that sound source than it will in the ear away from the sound source. So we talked about how big these cues were last time. And they can, for high frequencies, be maximally 20 dB in Interaural Level Difference, or ILD. So those are processed, at least starting in the lateral superior olive, which is another nucleus in the superior olivary complex, close to the medial superior olive we've been talking about. So we'll talk about the processing in the LSO. Then, we'll talk about projections of that nucleus, the MSO, and other parts of the superior olive to the next higher center in the brainstem, which in the auditory pathway is the Inferior Colliculus, or IC. So we'll be talking extensively about the inferior colliculus, which is a large nucleus in the brainstem just caudal to the superior colliculus that you folks talked about extensively in the visual part of the course. In the IC, you have some interesting neural responses that relate to something called time/intensity trading-- I misspelled it; not a very good speller, sorry-- which we'll talk about and define. We'll talk about some interesting characteristics of room acoustics, like reflections off the walls, and how those don't completely upset the apple cart of knowing where a sound is coming from, because of things like the precedence effect. And finally, we'll end up with auditory pathways in the barn owl. So this species of bird has a very well-developed auditory system. And some work at Caltech has shown that these animals have so-called "space" maps in a certain part of their brain. That is, there's a mapping of where the sound is in external space into space in a part of their brain called the optic tectum. And we'll go over that. And the reading for today is on how that space map is plastic and can be changed by experience. So we'll talk about neuronal plasticity of the space map. OK, so let's get started. So, the neural processing of interaural level differences in the lateral superior olive. In contrast to what Jeffress cooked up for the MSO, this neural circuit is very simple. And here's how it runs. The Lateral Superior Olive on the left side is here, the LSO. There's an LSO on the right side as well, but the circuit is shown for the one on the left side. This LSO on the left side gets input from the cochlear nucleus on the left side. That's excitatory input. And it gets input from the right side that's inhibitory. And the way the inhibitory input works is that the cochlear nucleus neurons on the right side project across the midline and into another sub-nucleus of the superior olivary complex, designated here as the MNTB, which I think is spelled out here-- the Medial Nucleus of the Trapezoid Body. So if you look at these sections under the microscope, there are a lot of crossing fibers here. And they sort of look, if you have a lot of imagination, like a trapezoid. And this nucleus is within those crossing fibers. So it's within the trapezoid body. That's how it gets its name. And the MNTB neurons have an inhibitory neurotransmitter. So the cochlear nucleus excites these MNTB neurons, but then they're inhibitory and they send their axons to the LSO. And they spill out, or release, their inhibitory neurotransmitter onto the LSO neurons. So how does this circuit work, then, if the sound is off to this side of the slide, off to the left side?
The sound will be of higher level in the left ear, and it will exert a high excitatory effect on the cochlear nucleus here. And the LSO will be excited in a big way. Of course, a little bit of that sound is going to come over here to the right side, but it won't excite the right auditory nerve quite as much. And this pathway that eventually becomes inhibitory won't be as strongly activated. And so the inhibition will be less. So there's an interplay between excitation here and inhibition. And in this case, with the sound off to the left side, the excitation will rule. And so here's a plot of the firing of an individual LSO neuron that gets all these inputs. And if the sound is off to the left side, which is supposed to be this axis-- this is an axis of Interaural Level Difference, or ILD, where here on the left side of the graph the ipsilateral level is greater than the contralateral level. In that case, the LSO neuron will be excited and it'll have a lot of response, a high amount of firing. On the other hand, if the sound is over here on the right side, it will activate in a big way the right pathway, which will result in a big inhibitory input to the LSO neuron on that left side of the brain. Sure, some of the sound is going to come over here and activate the left pathway. But that excitation won't be as strong. So in that case, the inhibition will rule. The LSO neuron gets a large amount of inhibition and its firing rate will be low. Its response will be low because it's inhibited. Now, what happens if the sound is right in the middle, 0 ILD? Sound is coming straight ahead. The sound is the same at the two ears. And thus, the ILD is 0. Well, it sort of depends on how you wire this up-- whether the excitation and inhibition are perfectly balanced. Then, maybe it would be a 50% response. In the case of this graph, it looks like the inhibition is a little bit stronger for equal sound on the two sides. And so the inhibition dominates at 0 ILD. But in actuality, if you record from LSO neurons, you find all sorts of combinations: those that have a 50% response rate at 0, those that have a 90% response, and those that have 10%, like this guy. So this is a very simple circuit. No coincidence detection. No delay lines. Well, you should kind of do a heads up here when I talk about delay and timing, because this pathway coming from the contralateral side is a lot longer. The axons have to cross the midline. And then there's a delay here, because at the synapse between one axon's terminal and the cell bodies of the MNTB neurons, there's a little bit of delay. The neurotransmitter has to be released. The MNTB neurons have to get excited and, finally, fire. So all that can take half a millisecond or so. It turns out this axon is a very thick one. The cochlear nucleus neuron here that provides this crossing axon is called the globular bushy cell. And we talked about that a little bit when we talked about the cochlear nucleus. It's not important exactly what type it is, but this has the thickest axon, really, of all the axons in the auditory pathway. So it gets across the midline very quickly. And there is a synaptic delay here. So the contralateral input is going to come in a little bit later. So sometimes, in recording from LSO neurons, you find a little bit of excitation from this side. And then right after, an instant later, a half a millisecond or a millisecond later, you find a big inhibition. And so that can happen.
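Pulling the circuit together: here is a toy rate model of an LSO neuron of the kind plotted in that graph-- ipsilateral excitation against MNTB-relayed contralateral inhibition, pushed through a sigmoid. The weights and the sigmoid itself are invented; the bias term mimics a cell, like the one in the figure, in which inhibition wins even at 0 ILD.

import math

def lso_rate(ild_db, max_rate=100.0, bias_db=5.0, slope_db=3.0):
    # ild_db > 0: ipsilateral ear more intense (excitation wins);
    # ild_db < 0: contralateral ear more intense (MNTB inhibition wins).
    # bias_db > 0 makes inhibition dominate even at 0 ILD (invented value).
    return max_rate / (1.0 + math.exp(-(ild_db - bias_db) / slope_db))

for ild in (-20, -10, 0, 10, 20):
    print(f"ILD = {ild:+3d} dB (ipsi minus contra) -> {lso_rate(ild):5.1f} spikes/s")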
The sum total, though, in the case where the sound is off to the contralateral side, is a dominant inhibition. Now, we haven't talked about neural inhibition much in our class, so maybe we should just mention it. And I should definitely mention the type of inhibitory transmitter that's used here. So we have the MNTB neurons coming like this and sending their axons to the LSO neurons. And they're inhibiting them. That's what I mean by this minus sign here. And so this is an inhibitory synapse, which inhibits the LSO neurons. One can ask the question, what is the inhibitory neurotransmitter? And in this case, it's called glycine. So glycine is released from the MNTB neuron terminals onto the LSO neurons. And how is that known? Well, it's sort of a lot of little pieces in a puzzle here. The MNTB neurons themselves have the metabolic machinery that can make the glycine. They transport it down their nerve axons. Glycine is actually a fairly common chemical in all cells of the body, but these MNTB neurons are packed with it. If you use antibodies to glycine, they stain these neurons much darker than most neurons in the brain. There are other glycinergic neurons, of course. Their axons and their terminals are darkly stained. The lateral superior olive neurons have glycine receptors on them. When you put little puffs of glycine onto LSO neurons-- which you can do from a pipette in an artificial recording situation-- they are inhibited greatly. So they certainly have the receptors for glycine. There are uptake systems to take up the glycine after it's been released, so that the inhibition doesn't stay on forever. And if you go in and electrically stimulate these MNTB neurons, you find the LSO neurons are inhibited. So a lot of little pieces of the puzzle go into the idea that these are glycinergic neurons. And this glycinergic input is very important in ILD sensitivity. Now, a lot of textbooks will say the ITD sensitivity is created in the MSO and the ILD sensitivity is created here in the LSO, and we're done with it. But that's probably not true. This is such a simple circuit, you probably have other places in the auditory pathway where ILD sensitivity is also created. For example, you have nuclei in the lateral lemniscus-- the pathway going up to the inferior colliculus. You have such circuits probably right in the inferior colliculus and maybe at other levels of the pathway. So this is not the only place where we find circuits for ILD sensitivity. Now, we talked last time about how these ILD cues are prominent at high frequencies and almost nonexistent at low frequencies, because low-frequency sound can bend around the head very easily. So we had, I think, for 200 Hertz, absolutely 0 ILD, even for a sound source located directly off to the side. For 6,000 Hertz, we have a huge ILD. So ILDs are not very important at low frequencies. If you go into the LSO, you can record the frequency responsivity of the neurons there from measurements of their tuning curves. Now, remember what a tuning curve was? We had tuning curves sort of over and over. It was a plot of sound frequency, with the sound pressure level needed for a response on the y-axis. And we have the V-shaped functions, and we picked off the most sensitive frequency. And that's the CF of the neuron. Neurons in the superior olivary complex have beautiful CFs. And you could do a CF mapping, which is what's done in this study.
So in the LSO-- right here, you find CFs from 0 to 1 kilohertz. Right in this part, from 4 to 10 kilohertz. In this part here, 20 kilohertz and up. And this is the typical kind of funny S-shape of the LSO that you see in coronal sections, in this case of the cat superior olivary complex. If you were to do this mapping in the cochlear nucleus, you'd find a lot of cochlear nucleus devoted to the low CFs, some to the middle, and some to the high. In the LSO, you have a lot of the LSO devoted to the high CFs, which is where ILD cues are very prominent. So it makes sense that where you're processing ILDs, you devote a lot of neurons to responding to the frequencies where the cue is very salient. The MNTB, which projects into the LSO, has a similar disproportionately large representation of high CFs. The MSO is just the opposite. There's hardly any MSO devoted to the very highest frequencies. And remember, last time we talked about ITDs being ambiguous at high frequencies, because the interaural time difference is still the same, but the sound can go through one or even more complete cycles by the time it gets to the other ear. And so you can't tell what ITD you're working with at these high frequencies. On the other hand, where these ILD cues were weak, at low frequencies, the ITDs are strong and salient. And there's a lot of MSO devoted to the low characteristic frequencies where the ITDs are prominent. So that's what this text means, that there's sort of a disproportionate amount of territory in these nuclei based on where the cue is important in the frequency domain. Now, we have a little bit of an issue here with the LSO. This LSO on this side is going to respond to sound sources on this side of the body. And remember, in most sensory systems, there's a crossing such that stimuli on the right side of the body evoke neural responses on the left side of the brain. This LSO has sort of got it backwards. It's responding with excitation to sound sources on its own side. Well, that's taken care of by virtue of the ascending projections from the LSO to the next higher center, which is the Inferior Colliculus, or IC. And that's diagrammed here in this next slide. And this shows the projections of the LSO. Again, the LSO on the left side projecting across the midline to the inferior colliculus on the right side. And that projection would then predict that if a sound source was over on this right side and exciting the LSO neuron on the right, that message would then get in a big way to the inferior colliculus on the left side. So that inferior colliculus would then respond to sounds on the right side of the body. For a while, this field was a little mystified, because there's also a projection from the left LSO to the left IC. It ended up being an inhibitory projection. So this projection here that stays on the same side is mostly inhibitory. It's not exactly clear what that does, but it's there. You can sort of discount it in terms of the mapping of stimuli on one side of the body to responses on the other side of the body. The MSO doesn't have such a problem, I'll just say in passing. The MSO, just because of its ITD map-- if you go back and review last week's lecture, you'll see that the MSO is already mapping ITDs for sound sources on the opposite side of the body. Now, there's been a lot of work, especially in the early days of auditory neuroscience, on looking at the effect of lesions. And lesions are a little bit hard to do in a structure as intricate as the superior olivary complex.
Because if you go in and try to destroy the LSO, invariably right next door on one side is the MSO, and right next door on the other is the MNTB. It's very hard to make selective lesions in the superior olive. When you get to the inferior colliculus-- oh, there's a big nucleus. And you can go in and you can destroy it in an experimental animal on just one side. When that is done, and the animal is trained in a task to localize where the sound is coming from, it's very clear that a lesion of the inferior colliculus on the right side makes the animal unable to localize sounds on the opposite side of the body. So with a lesion in the right inferior colliculus, the animal can't localize sounds on the opposite side. The animal still can localize sounds in the ipsilateral hemifield, on the side that you didn't lesion, because the other colliculus-- the one that's still there, that's not lesioned-- can perform the task. If you lesion both inferior colliculi, the animal can't localize sounds anywhere. But it's very clear that a lesion on one side of the auditory pathway here makes the animal unable to localize sounds in the opposite hemifield. So that's a theme for sensory processing, that stimuli on one side of the body are mapped to neural nuclei on the opposite side of the brain. And that's very clear in the auditory pathway. Now, I think last time, when we had the demonstrations of ITDs and ILDs in headphones, where we could present one and not the other, someone said, well, it sounded a little bit the same when we had an ITD putting the sound on the left side and when, later, we had an ILD putting the sound on the left side-- as if the sound could be put on one side by either ITD or ILD. And that was an interesting comment, because of the phenomenon called time/intensity trading, which you can do with headphones. I won't demonstrate it, because it's clear enough just to explain it. You can have someone listen in headphones and you can make-- let's see-- an ITD such that when it's presented alone, the sound is perceived as if it's coming from the left side. You can also present with those headphones the same sound, but now, in this case, make an ILD, so the sound is higher in level on the right side and it sounds like it's coming from the right side. So now we have the time cues making you think it's coming from the left, and the intensity cues making you think it's coming from the right. When you put those two things together, which you can do artificially with headphones, you find in some cases that the sound sounds like it's coming from straight ahead. And this is called trading for time and intensity, or time/intensity trading. And you can balance one with the other. If you do a lot of sound level on one side and just a little time on the other side, it sounds like it's off to the right. If you do them equal, it sounds like it's straight ahead. If you do a lot of time, a big ITD, and a little ILD, it sounds like it's a little off to the left. It's a very clear psychophysical phenomenon.
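A minimal sketch of that trading idea: treat perceived laterality as a weighted sum of the two cues. The trading ratio-- how many microseconds of ITD offset one dB of ILD-- is a real psychophysical quantity, but the value used here, and the simple linear model itself, are assumptions for illustration.

TRADING_RATIO_US_PER_DB = 50.0    # illustrative only; measured values vary widely

def perceived_side(itd_us, ild_db):
    # Convention here: negative = left, positive = right.
    laterality = itd_us + TRADING_RATIO_US_PER_DB * ild_db
    if abs(laterality) < 25.0:
        return "center"
    return "left" if laterality < 0.0 else "right"

print(perceived_side(itd_us=-300.0, ild_db=0.0))    # time cue alone -> left
print(perceived_side(itd_us=0.0, ild_db=6.0))       # level cue alone -> right
print(perceived_side(itd_us=-300.0, ild_db=6.0))    # opposed cues cancel -> center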
On the other side, the time is delayed at the ear on the same side as the inferior colliculus you're recording from. And clearly, this neuron is sensitive to that ITD if you vary it. This is the percent neural response. 100% would be up here, 0 response would be down here. Now, what's varied as the parameter here is the interaural level difference. So in this case, the contralateral ear is-- it looks like 35. And the ipsilateral ear is 45. They've kept the contralateral ear the same for the most part. And the ipsilateral ear level has changed. And clearly, changing the level also has a big effect on the neural response. So here, for the first time in the auditory pathway, we're finding ITD and ILD responses together in single neurons-- in a big way. You find them in a small way in the MSO and the LSO. But this is a huge effect here. And probably, this is the first place where you might have neural correlates of your perception of time/intensity trading. So that's one phenomenon I want to cover that probably has a neural correlate at the level of the inferior colliculus. And here's another one. We've been dealing with very simple stimuli that have just, say, one ITD or one ILD. When you get into a room, it becomes much more complicated because of echoes off the walls and ceiling and floor of the room. And there are some very interesting experiments that are done with more natural stimuli like you find in rooms. And we'll go over just a few of them. So here-- and this is an observer listening to a sound source off to the observer's left. And the direct sound is indicated by A here. And most of these data deal with the interaural time difference because this is a fairly low frequency. The interaural time difference of the direct sound, indicated by this big arrow in A here, favors the left ear. It's going to be arriving at the left ear first and the right ear a little bit later. So if you just had that sound, the subject, obviously, would report that the sound is coming from the left side. And this is a plot of the sound for the left and right ear if you just had this arrow in A here. And in green, it's the interaural time difference for just that direct sound. So you can see it's a fairly stable ITD, and it's a negative ITD just by convention. Left-leading ITDs are going to be negative here. And this is an ITD in milliseconds of 0.4 milliseconds. And remember, we said if it was directly off to the left side, it would be about 0.6. If it were straight ahead, it would be 0. So what's the sound stimulus here? Well, this is, again, changing things a little bit. We've been talking about very simple pure tone stimuli, or clicks. This paper is from a German group. And French or German speakers have this lovely speech sound, which is a trill, or rolled R. And I, for the life of me, cannot do this stimulus. But I'll try. It's something like [ROLLING R]. As if you were to pronounce the word in German "Reich." Or in French, [INAUDIBLE]. It's impossible for me to do that because I don't speak those languages. But anyway, here's the sound stimulus with all the echoes added in. And these traces just show the left ear input and the right ear input. And these peaks here are the trills of the R. You can see them if just the left ear input is considered. So it's a trill of the R. And you can see how many milliseconds elapse between each of those parts of the trill. Maybe like 40 milliseconds or so.
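Stepping back to the ITD numbers above (0.4 milliseconds for this source, about 0.6 at the extreme), the mapping from source azimuth to ITD can be approximated with Woodworth's spherical-head formula. Here is a minimal sketch, my own illustration rather than anything from the lecture; the head radius and speed of sound are assumed typical values.

```python
# Woodworth spherical-head approximation of the interaural time
# difference (ITD): ITD = (r / c) * (sin(theta) + theta).
# Assumed values: head radius ~8.75 cm, speed of sound 343 m/s.
import math

def itd_ms(azimuth_deg, head_radius_m=0.0875, c_m_per_s=343.0):
    """Approximate ITD in milliseconds for a frontal source.

    0 degrees = straight ahead; 90 degrees = directly opposite one ear.
    """
    theta = math.radians(azimuth_deg)
    return 1000.0 * (head_radius_m / c_m_per_s) * (math.sin(theta) + theta)

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD ~ {itd_ms(az):.2f} ms")
```

Running this gives 0 ms straight ahead and about 0.66 ms at 90 degrees azimuth, in line with the roughly 0.6-millisecond maximum quoted for a human-sized head.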
Now, when the subject is in a normal room, a lot more happens than just the direct sound. Some of the sound comes from the source, the R here, and bounces off the wall to the subject's right. Some of the sound goes beyond the subject's head and bounces off the wall to the right, and then the wall behind the subject, and comes back there. And this drawing here is when you take into account several of these reflections in addition to the direct sound. And the overall ITD is still plotted in green. And look what a mess it is. The ITD is all over the place. It starts out like it should from just the direct sound. It's negative here. But then real quickly, it goes past 0. And it goes way up here and it bounces around, then it goes back down again. It's all over the map. So where do you think that subject says the sound is coming from? Do you think the subject says, I can't tell, it's all over the place? Well, you've been in rooms and you've listened to speakers in rooms. You're hearing me right now and there's a lot of sound coming off the walls to get into your ears on both sides. If you add up the reflections from the walls to the side and beyond, it's a lot greater than the direct sound in terms of the total energy. But you can close your eyes and you can know that I'm standing up here to this side of you. You don't have any trouble with that. And if you do a very careful experiment with this subject, the direct sound, and a couple of reflections like here, and you give the subject a pointer, that subject will say the speaker is over there. It's on the left side. It's not behind or to the wrong side. So how do we do that? Well, there is something called the precedence effect, which is very important here, which helps you in a situation where you have lots of reflections. And what does precedence mean? Well, precedence means something is dominating or something is the most important. And in these cases, if you look very carefully at the complex sound, when you add up all the reflections with the direct sound, you can see, if you have really good eyesight, that right at the beginning of the sort of burst of energy, the first thing to get to the subject's ears is coming from the left side. So right at the beginning of these trills, the left ear input starts out bigger. And that's why you have a negative ITD right at the beginning of the trill, before it starts going crazy. It turns out that the most important part of this signal for localizing where the sound comes from is in the initial few milliseconds. So subjects bias their impression of where the sound is coming from toward what arrives right at the beginning of the sound. And they tend to suppress or ignore all the remaining parts of the sound, at least for localization purposes. So what takes precedence here is the very initial part of the sound signal. And that's what the precedence effect is. It's sometimes called by a different name when you're dealing with speech-- the Haas effect. But the precedence effect is more general for any type of sound. And it really means that when you're in a complex environment with all sorts of reflections, you pay attention to the very first, or initial, sound and ignore the others. So the precedence effect can be studied carefully by just narrowing it down to a direct sound and one reflection by this kind of setup here. So here's a subject listening to two speakers in an anechoic room. And I think we've talked about anechoic rooms before.
Anechoic rooms have these baffles on the walls and the floor. This person is seated in a chair in a mesh, so he doesn't fall into the baffles below him, which are also absorbing sound on the floor of the room. There are baffles in the ceiling as well. So whatever sound is presented goes right to the subject. If it goes beyond him, it goes to this wall over here and is completely absorbed. So there are no reflections. There's just the direct sound. So you can say, well, one of these is the direct sound. And a little bit later, I'm going to introduce a second sound coming from somewhere else, which is the echo. So this might simulate a wall on this side of the subject that reflects. In this case, there's no reflection. But you can say this is an artificial echo presented by this second loudspeaker. What happens if we change the timing between those two sounds? And that's what's plotted here. This is the delay between the first sound and the second sound in milliseconds. Now, for ITDs and sound localization, remember we've been talking about ITDs way down here from 0 to about 0.6 milliseconds. That's where you have sound localization, way down here. And when you do a delay that's that short, it sounds to a subject like there's just one sound. If the ITD is at exactly 0, this subject will perceive the sound source as being directly in between these two speakers. As the lag gets greater and greater, the subject will start to perceive the source as moving toward the speaker that emitted the first sound. And when the delay comes to the maximal delay for the size of the human head, which is about 0.6 milliseconds right here, the subject will perceive that it's only coming from that initial loudspeaker. Then, as delays get further and further on, up between about 0.6 milliseconds and a little over 10 milliseconds, the subject will still say, I still hear one sound. And it's still coming from that speaker. But now it's starting to sound a little bit different. In fact, it doesn't sound as dead anymore. It sounds like I'm in a pretty live room. So what's a live room? A room that's reverberant, like a church or a cathedral. It sounds roomy. It sounds like there's volume. Over here, this is the region called the precedence effect, where you ignore that lagging sound. And it still sounds like there's just one sound, but it sounds different. This is the region of the precedence effect, from 0.6 to maybe 10 milliseconds or so. Then, as the delay becomes longer and longer, you start to perceive two sounds. You hear, let's say, a click from the first speaker and a second click a little bit later from the second speaker. And now, the delay is long enough so you actually hear an echo. You hear two sounds, an initial sound and an echo. So that's the perception. And there's this big region here called the precedence effect. So I have a demonstration, if you don't believe me, about echoes. And the demonstration is really vivid, I think. You don't hear the echoes here because of the precedence effect. Maybe it sounds a little bit more roomy, but you don't hear an echo. But there definitely are echoes there. You may not be able to perceive them. So let me play this demonstration. The demonstration is-- I think the best part of the demonstration is someone taking a brick and hitting it with a hammer. And that makes a big click, right? Well, after that big sound, that impact sound, there are some echoes.
You can't hear them very well except when they do this neat trick on this demonstration, which is to play the sound recording backwards. And then, the echoes start first, and then you hear the hammer hitting the brick. They also have some text on here. And they read the text, and then they play the text backward. To me, that's not so obvious. So they do this demonstration of hitting the brick. They do it in an anechoic room right here first. There are no echoes. Second, they do it in a normal room, like this room, where there's some reverberation. But a lot of the reverberation is stopped by the carpet on the floor and the clothes I'm wearing. And the seat cushions absorb some of the echoes. And they do each of those backwards, too. Then finally, they do it a third time in a very reverberant room, like a church or a cathedral, where you hit the brick and it just sort of rings for quite a while. So let's play this demonstration and see if it lives up to my description. [AUDIO PLAYBACK] -[INAUDIBLE] echoes. First, in an anechoic room. Then, in a conference room. Then finally, in a very reverberant space. You will hear a hammer striking a brick followed by an old Scottish [INAUDIBLE]. Playing these sounds backwards focuses our attention on the echoes that occur. From ghoulies and ghosties, and long-leggedy beasties and things that go bump in the night, good lord deliver us. PROFESSOR: OK, that's forward. This is backward. -[SPEAKING BACKWARD] PROFESSOR: OK, now the conference room. -From ghoulies and ghosties, and long-leggedy beasties and things that go bump in the night, good lord deliver us. [SPEAKING BACKWARD] PROFESSOR: All right, now in the reverberant room. -From ghoulies and ghosties, and long-leggedy beasties and things that go bump in the night, good lord deliver us. [SPEAKING BACKWARD] [END AUDIO PLAYBACK] PROFESSOR: All right, so I like especially the sound of the hammer hitting the brick played backwards in the reverberant room because it's going to pssew. And all that pssew leading up to the impact is the echo that you just completely discount because of the precedence effect in normal hearing. OK, so that brings me up to the special part of my lecture, which is the reading. And this little quotation is by a musician. And of course, musicians love reverberant rooms, like churches or cathedrals, or concert halls, or whatever. So E. Power Biggs, who was the organist at Harvard for a long time and made many famous recordings of organ music, said, "An organist will take all the reverberation time he is given, and then ask for a bit more, for ample reverberation is part of organ music itself. Many of Bach's organ works are designed actually to exploit reverberation. Consider the pause that follows the ornamented proclamation that opens the famous 'Toccata in D Minor.' Obviously, this is for the enjoyment of the notes as they remain suspended in the air." So musicians love reverberation. And that's the reason that halls where we appreciate music, like Boston Symphony Hall, are reverberant. And they have some measurements of reverberation time here: for opera houses, 1.3 seconds. OK, that's the time the echoes take to decay. Symphony Hall in Birmingham, 2.4 seconds. St. Paul's Cathedral in London, 13 seconds reverberation time. Now in contrast, when you have theaters for speech, like Shakespeare drama theaters, you don't want all those reverberations.
For example, a theater for speech here is quoted as having a reverb time of 0.9 seconds, because you don't want all these echoes to interfere with your interpretation of the speech. An average living room, 0.4 seconds. Reverberation time is given for the great outdoors. Anybody guess? 0.0. All right. The outdoors has no walls, ceiling, or floor. Now, why are we talking about this here? Because in the inferior colliculus, you find some neurons that show precedence-like responses. And so here is a recording from an inferior colliculus neuron. And this was a study where they used infants and adults, but these are just the data for the adult. And the stimuli are two sounds. I think they are clicks. The first sound starts at time 10 milliseconds. So this is the time axis. And this is the dot raster display. So each little dot here-- they're a little bit hard to see-- is a neural spike. And there are many, many trials. OK, perhaps 50 or 100 trials. And you can see reliably on all the trials, the neuron responded to the first stimulus. When the second stimulus occurred at a 101-millisecond delay, the neuron also faithfully responded to the second stimulus. But as the delay was made shorter and shorter, the neuron eventually stopped responding for a delay that's-- in this case, it looks like about 20 or so milliseconds. Well, this is certainly precedence-like behavior, in that the neural response to the second stimulus is attenuated. This delay, where this particular neuron starts cutting out, is not exactly where the precedence effect cuts off in humans. But this is from an animal. And this animal is anesthetized, so many of the processes are slowed down by anesthesia. And perhaps the animal is a little cool, which might make these things abnormally long. But certainly, these kinds of precedence-like responses are on the way toward explaining the precedence effect at the level of the inferior colliculus. And this is the work of Ruth Litovsky from the University of Wisconsin. OK, so now I want to shift gears a little bit and go on to a different species, which is the barn owl. And this is mostly the work of-- originally, Mark Konishi at Caltech. And now, Eric Knudsen at Stanford in California. And why did they choose to study the barn owl? So many of us study mammalian models because we want to know what human hearing is all about. And it's hard to record from human brains, but we want to choose an animal that's like the human. So we use a mammal. Why did they choose the barn owl? A lot of successes in neuroscience have come from choosing an animal that's specialized for a certain task. And barn owls are very specialized for hearing. So if you take a barn owl, blindfold the owl, and turn a mouse loose in a room, then as long as the mouse is making a little bit of sound, the barn owl can quickly fly over to the mouse and catch it and eat it. So the prey of barn owls are mice and insects, like grasshoppers, that are down on the floor. And you can watch a barn owl hunting at night. They don't come out during the day. But at night they come out. I watched one a lot of evenings when I was in California. The owl would come and sit on a basketball hoop. And it would just perch there. And it would move its head all around. And I didn't really know what it was doing at the time, until later, when I read some of this work and it said that owls' eyes cannot move in their head. The owl's eyes are fixed.
So this beautiful control of eyeball position that we have in mammals is not present in most birds. And it certainly is not present in the owl. So to move the eyes, the owl has to move its head. And of course, it's moving the ears as well. In the barn owl, and in other birds, you don't have an external pinna, which we said introduces a lot of help in localizing sounds. But the barn owl is a unique bird in that there is some external stuff going on, which is called the facial ruff. And that's formed by feathers on the face of the owl. You can see they're sort of fan-shaped down here. And over here, they go over the ear canal. There's an ear canal, certainly. They go over the ear canal in this preaural flap. And the opening for sound to get in is below that. And also, above it there's an opening. So the barn owl doesn't have a pinna, but it has some modified feathers on the front of its face. Barn owl feathers are also interesting in that they're modified elsewhere. The feathers on the wings are modified. When I watched that owl on the basketball hoop, every now and then it would take off and go down to the basketball court below. And you couldn't hear anything. Owl feathers are specifically designed so that even the air going over them when the owl flaps its wings is completely silent, because the owl doesn't want whatever it's hunting to hear it approaching. So barn owl wing feathers are specifically designed to be acoustically silent. OK, so a lot of work has been done on the barn owl pathway. Now, this is a little bit different because if you've looked at the brains of avian species, they're a little bit different. They evolved differently from mammals. But they have some analogous nuclei. Here are the cochlear nuclei, with this little text here. This is supposed to be the owl pathway on the left side and the right side of the brain. This is the midline in dashed lines. The owl cochlear nuclei are split up into two parts. And the one that's featured here is labeled NM. So NM stands for Nucleus Magnocellularis. And so we can all figure out what this means. Cellularis refers to the cells, or the nerve cells. Magno means big. These are the big cells. So there are some other parts of the bird cochlear nuclei where the cells are smaller. But this is the big-- big cell part. And in nucleus magnocellularis, you have beautiful phase locking. We talked about that being typical of the mammalian pathway. In the cochlear nucleus, the bushy cells have good phase locking. Maybe even better than the auditory nerve. In the owl, the nucleus magnocellularis neurons have excellent phase locking. And so they're keeping track of the stimulus waveform. The timing is important to them. They project centrally. The one on the left side and the one on the right side converge onto a nucleus that's sensitive to interaural time differences. That's the avian equivalent of the MSO. And it's called NL, Nucleus Laminaris. OK, lamina means sheet. OK, and this is a sheet. It looks like that anyway in the anatomy. And there, the neurons are sensitive to ITDs. And there's a beautiful Jeffress model there. Most of the papers on the mammalian MSO say, we know there's a beautiful Jeffress model in the avian nucleus laminaris. But in the mammal, we're starting to rethink it. This is a beautiful Jeffress model where you find neural responses that are very strongly peaked to ITD. So they fire for a certain ITD. This is the firing rate. But they don't fire much at all to other ITDs. They're strongly tuned to ITD.
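To make the Jeffress idea concrete, here is a minimal toy sketch of a delay-line/coincidence-detector array of the kind attributed to nucleus laminaris. All the numbers (sample rate, tone frequency, delay range) are illustrative assumptions of mine, not data from the lecture.

```python
# Toy Jeffress model: an array of coincidence detectors, each with its
# own internal delay applied to one ear's input. The detector whose
# internal delay cancels the stimulus ITD responds the most.
import numpy as np

fs = 100_000                     # sample rate (Hz), assumed
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal
freq = 500                       # 500 Hz tone, where ITD cues are usable
itd_s = 200e-6                   # true ITD: left leads by 200 microseconds

left = np.sin(2 * np.pi * freq * t)
right = np.sin(2 * np.pi * freq * (t - itd_s))  # right ear gets a delayed copy

best_delays = np.arange(-600e-6, 601e-6, 20e-6)  # internal delays to test
responses = []
for d in best_delays:
    shift = int(round(d * fs))
    compensated = np.roll(right, -shift)          # internal delay line
    # Coincidence detection, caricatured as a rectified product:
    responses.append(np.mean(np.clip(left * compensated, 0, None)))

est = best_delays[int(np.argmax(responses))]
print(f"estimated ITD: {est * 1e6:.0f} microseconds")   # -> 200
```

Each entry in responses plays the role of one laminaris neuron with its own best delay; the peak of the array is a place code for ITD, which is the essence of the Jeffress model.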
The nucleus laminaris in turn projects across the midline here to the inferior colliculus. And we haven't talked about it, but there are several subregions of the inferior colliculus. The big one is called the ICC. And that's called the Inferior Colliculus Central part, or central nucleus. That's true in mammals as well. That's the big part. Some would call it the core. That's what's indicated here, the core of the inferior colliculus. And it, in turn, projects to other places, like the lateral part. And it finally projects from the lateral part to the ICX. And the ICX is the Inferior Colliculus, where X stands for External. So the external part of the inferior colliculus. And there is where some very interesting responses take place in the barn owl. And we'll look at those responses right now. So these experiments were first done by Mark Konishi in, I believe, the 1970s at Caltech. And the experimental subject is seen here. Here is the barn owl right here. And where are his wings? Well, his wings are folded down. And he's in a little, sort of like a tube sock, if you will. His wings are constrained. And the tube sock is mounted on a pole or a pedestal. And he's just sitting there. He can move his head. Actually, in many of these experiments, the head is clamped. But he's sitting there. He's awake. He's listening to the sounds. And the sound is presented by a speaker. The speaker is on a big hoop. You see that hoop? And the speaker is sitting on the hoop. And there's a little motor in the speaker. And the speaker can be moved by driving the motor over here. Or you can move the motor the other direction. You can move the speaker over here or down here or up here-- wherever you want on that hoop, the speaker can be driven by the motor. And because the hoop is mounted on two posts on the side, the whole hoop can be swung up or it can be swung down. OK, so you can put that speaker anywhere this way and anywhere up or down that you want to. And so you can put that speaker in any position you want in the entire frontal hemifield of the owl. I suppose you could put it in the rear hemifield as well, but they didn't do that. These data are just from the frontal hemifield of the owl. And since the owl's head is mounted and is not moving, you can apply a little local anesthetic and open up the skull. And you can advance an electrode into the ICX, the External Nucleus of the Inferior Colliculus, and make recordings from single neurons there. And in this case, the recordings are made from the ICX on the right side. And what's found in this response plot here is that these neurons have restricted receptive fields in space. So what's plotted here is the neural response field in dashed lines and the most vigorous part in the shaded area there. And this axis, the x-axis, is azimuth. And the y-axis is elevation. And this is a fairly restricted part of the whole hemifield. And one of these-- this is one response area for one neuron. And a whole bunch of these rectangles are plotted down here with a little diagram of the owl right in the center of this globe. So there's 1, 2, 3, 4, 5, 6, 7, 8, 9, 10-- about a dozen neural receptive fields for ICX neurons in that owl. Notice that they're all fairly discrete. That is, each neuron only responds when the speaker is in a certain part of the frontal field of the owl. Notice also that we're recording from the right side. And most of the receptive fields are off to the left of the owl. They're on the opposite side.
So clearly, sound stimuli on one side are mapped to the opposite brain side. That's not true for these three, but they're close to the midline. Most of the receptive fields are not straight ahead. They're actually down below the owl. There's one or two that are just a little bit above, but there are none that are way above. Most of them are down below the owl. Remember, the owl is sitting on the basketball hoop and he's listening for targets down below. This makes a lot of sense, that most of the receptive fields are down below the owl, not at the same level as the owl. It doesn't care about a mouse making a slam dunk at the other hoop. It cares about the mouse down on the basketball court. And finally, maybe the most important and most interesting part of these responses is the progression of where the receptive fields are in space versus where they are along the dimensions of the ICX. So this receptive field was located over here in the ICX. And as you moved this way and encountered a different neuron, it was located over here, and its receptive field had moved over this way a little. As you moved further in that direction, you encountered another receptive field even further along in this dimension. And then finally, way over here laterally, you encountered these receptive fields that were way off to the side. So along this dimension of auditory space, receptive fields were found along this dimension of the ICX. There was also a mapping going this way. This is clearly then what some people call a space map. A mapping of auditory space into position within a certain part, the ICX, of the owl's brain. OK, it's a beautiful mapping. Neuroscientists love mappings. You've heard me talk about tonotopic mappings out the wazoo. We love tonotopic mappings because CF is then very important. And this clearly is an important organization, with the receptive fields laid out along a dimension in the brain. People have spent many, many years looking for these kinds of mappings in the mammalian auditory pathway and not found them. People have looked in the inferior colliculus, in the analogous parts, the central part and the external part. It is hard to record from the external part of the mammalian inferior colliculus because it's a small edge structure, but it has not been found. People have looked in the medial geniculate and in the auditory cortex and looked for organization of spatial receptive fields and not found them. So in nuclei on the main part of the auditory pathway, you do not find space maps in the mammalian system. So the one place you find spatial organization in the mammal is in the mammalian superior colliculus. And you're probably going, huh? I thought you just said the superior colliculus is visual? Well, it is. But it's a layered structure in which, if I'm not mistaken, the top three layers are exclusively visual. But if you go down to lower layers, the bottom layers of the superior colliculus, you start to encounter neurons that respond to visual as well as auditory stimuli. And you may have talked about the visual mapping of the superior colliculus. And those neurons in the deep layers that are also responsive to auditory stimuli-- they're mostly ILD sensitive. They're mapped in line with the visual receptive fields. They're also space mapped to a certain extent. Now, that nucleus clearly is not on the main drag of the auditory pathway. The auditory pathway is cochlear nucleus, superior olive, inferior colliculus, medial geniculate, and cortex.
So you do have a space map in the mammalian deep layers of the superior colliculus, but not in the main parts of the mammalian auditory pathway. So that's been the finding. Now, that's negative evidence. It may well be that I'll be back here teaching the course next year and we'll read a paper that says, ah, a space map in wherever. But it hasn't been found so far, with this one exception. Now, the paper that we read for today's class talks about-- back to the barn owl. A place in the barn owl, which is-- they call it the optic tectum in birds. But it's analogous to the superior colliculus in mammals. A place where, as I said, you find auditory space maps that are in line with visual space maps. And they do a very interesting and elegant experiment where one of those maps is distorted. And you study the resulting effect on the other map. OK, so how did they do that? So this is an owl, but it's a juvenile owl. An owl chick. And it's wearing some interesting things on its eyes. Those aren't its eyes. Those are some prisms that the investigators have put over the owl's eyes. And they deflect the visual field a certain amount depending on the size of the prism. And I can't remember what the deflection was. I seem to remember 30 degrees. So the visual field is deflected 30 degrees. And as I said before, the owl's eyes are fixed in the head. So with these goggles on, no matter what happens, the goggles are going to move with the head if the owl moves its head. So no matter what the owl does, the visual receptive fields of all these neurons-- everything in vision is shifted 30 degrees. This is normal. These might be receptive fields from neurons in the brain somewhere. This is when the prisms are added. Here, you've shifted the visual receptive field. The auditory receptive field-- you haven't changed the ears at all. The auditory receptive field is the same. What's found when you do that in juvenile owls? You come back eight weeks later and you find, oh my gosh, the auditory receptive field has actually moved. It shifted. You knew where you were recording from in the brain. In this case, the recordings are made in the optic tectum, the avian superior colliculus. You know the dimensions, and you come back expecting to see auditory receptive fields like that, but they've been shifted. So juvenile owls with prism experience, given a number of weeks to compensate, shift the auditory receptive fields so that the two are back in alignment. This group has also shown that if you do this experiment with adult owls, you don't get such shifts. You come back to an adult owl with these same prisms eight weeks later, and you still have a mismatch. So plasticity clearly takes place in the juveniles and not in the adults. And they've likened it to those of us old folks trying to learn a foreign language. It's really tough because we didn't have experience with it when we were juveniles. So neural plasticity and learning new things-- if this is learning something, it's a bit of a stretch. But it's more difficult to learn things as an adult than it is as a juvenile. Now, there's an even further twist in this paper that we read for today, where the subjects are adult owls. The recordings were made from adult owls. But there are two groups. One is a plain, old control adult owl. The other is an adult owl that, when it was a juvenile, had experience with the prisms. That experience was long ago, six months before. The prisms were on for eight weeks or so. These shifts took place. The prisms were removed.
The owl is allowed to grow up and become an adult with normal vision. Then, take these two groups of owls and put the prisms on again. We've already said that adults don't have the capacity to remap. So many of them just stayed the same. These are the open circles here. The auditory receptive field is the same. This is before the prisms. This is eight weeks after. The open circles are from the adult owls that didn't have any juvenile prism experience, even though they're now adults with the prisms on. But for the adult owls with the juvenile prism experience, one of the neurons is recorded here. It has an auditory receptive field that's shifted to bring it in line with the shifted visual receptive fields. This is showing, then, that adults that had the juvenile experience have some plastic ability to re-map their auditory receptive fields so that you have alignment with the visual receptive fields. Clearly, a very interesting experiment. It shows, if you will, a type of memory trace: these owls have retained something that was altered by the juvenile experience. So what could that be? The group has, in other studies, gone on and looked at the projections between these two boxes here, from the central nucleus of the colliculus to the external nucleus. And they've clearly shown, in juvenile animals that have the prism experience, that the axons that were headed for direct innervation take a little detour and regrow some axonal projections. They've studied these, and it looks like there's regrowth of axons between those two areas. And that's what was meant by this little circle that says locus, or site, of plasticity-- it's manifested by a change in the growth of axons right there. And maybe those adult animals that have juvenile experience retain some of those axonal projections that were changed as a result of the experience. And clearly, it takes many weeks for axons to grow or change their connections. Maybe that's the thing that's much easier to do if you're a juvenile animal and you're reacting to these changed stimuli. Another way to change responses there would be to have everything connected to everything, but with certain synapses emphasized very strongly and the weak ones de-emphasized. Maybe when you have the prism experience, the previously de-emphasized synapses become upregulated without any change of axons. But clearly, this group has shown that the axons have changed their growth patterns. So that could be a mechanism for the plasticity. And I think that's all I wanted to say today. So I have five minutes if you guys want to ask questions about anything. Yeah. AUDIENCE: Can you go over how prisms actually change auditory perception again? Like, how are the prisms-- PROFESSOR: Back to precedence? Is that what you're-- AUDIENCE: How are prisms changing-- PROFESSOR: How are prisms? AUDIENCE: Yeah. PROFESSOR: That's not clear. What's clear is that the fields are mismatched. AUDIENCE: So we don't know why that happens? PROFESSOR: That's correct. Yeah. But I presume what is happening to the owl during these eight weeks is that the owl is seeing an object and hearing it as well. For example, a mouse down here. The owl sees it and it goes for it. But actually, because its visual fields are off, it goes over here and the object is over here. But with the auditory cues, if it paid attention to them, it would go here. So it's sensing a misalignment in experience. There's no training involved here, but the owls are allowed to hunt prey and experience environmental sounds.
So they clearly then have a mismatch between vision and audition in these eight weeks. Yeah. AUDIENCE: Does this sort of then, I guess suggest that the auditory input is somehow more important? Because rather than say a visual input shifting [INAUDIBLE]? PROFESSOR: Yes. Yes, you could say that. I mean, it would be interesting to do the converse experiment. Keep the eyeballs normal and somehow distort the auditory receptive fields. So you could do that with ITDs by putting a tube and lengthening the ITD on one side. That would be an interesting sort of counter experiment to get at what you're asking about. It would distort the other cue. OK, great. We'll see you back on Wednesday then.
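A brief coda on the time/intensity trading idea from earlier in the lecture: the trade can be caricatured as a weighted sum of the two cues. This is a minimal sketch, and the trading ratio of 50 microseconds per dB is an assumed ballpark value for illustration only; measured ratios vary widely with stimulus and listener.

```python
# Toy linear model of time/intensity trading: convert the ILD into
# equivalent ITD units and add the two cues. Negative = left,
# positive = right, near 0 = centered (matching the ITD sign
# convention used in the lecture).
def perceived_side(itd_us, ild_db, trade_us_per_db=50.0):
    """Crude laterality estimate from an ITD (microseconds) and ILD (dB)."""
    return itd_us + trade_us_per_db * ild_db

# ITD says left (-300 us), ILD says right (+6 dB): the cues cancel
# and the sound is heard roughly straight ahead.
print(perceived_side(itd_us=-300.0, ild_db=6.0))   # -> 0.0
```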
PROFESSOR: OK, I guess we'll get started. Last time, we were talking about the auditory pathway in the brain, the central auditory pathway, starting with the cochlear nucleus and going up through the various brainstem, thalamic, and cortical auditory areas. And then we focused mainly on the cochlear nucleus, which is the very first of those many central auditory nuclei. And we talked about the diversity of cell types, or neuron types, in the cochlear nucleus and the diversity of response types when you monitor the responses of single neurons to sound. And we did some attempts at correlation between the two. And those correlations are firmly established in the cochlear nucleus, certainly much better than anywhere else in the auditory pathway. So any questions from last time? So today's lecture is on hearing loss and implants that restore our sense of hearing if we happen to be deaf. And I've written a little summary of what I want to cover today on the board. So we'll start out with the first 2/3 of the lecture being on hearing loss. And we've mentioned a little about the conductive apparatus-- the eardrum, the three ossicles in the middle ear-- conveying the vibrations to the inner ear. And I think we had an example of one type of conductive hearing loss. If you have, obviously, an interruption of that ossicular chain, then the vibrations are going to be reduced in the inner ear because the conduction path is interrupted. So those are relatively straightforward concepts, and so consider those covered. Today, I want to talk about the perhaps more common types of hearing loss that are grouped under the name sensorineural, because the sensory cells, or the nerve fibers themselves, are damaged. And in that case, it's not so easy to understand how we might correct them by putting in an artificial middle ear ossicle or something like that. The name is a bit of a misnomer in that perhaps 99% of hearing loss and deafness of this type is really based on the sensory cells, so the hair cells are the prime culprit in people who have sensorineural hearing loss. Of the two types of hair cells we've been talking about, the most vulnerable are the outer hair cells. Any of the various causes that we'll talk about that damage our hearing affect the outer hair cells to a much greater degree than the inner hair cells, and the reason for that is not known. For some reason, the outer hair cells are more vulnerable. As we'll see in the very first slide of today's lecture, the hair cells in the basal turn of the cochlea are more vulnerable than those in more apical regions. The reason for that is not known either. It's a very interesting phenomenon with no basis that we know about. We'll talk about permanent and temporary hearing loss, the various causes of hearing loss, and then, at the end, we'll talk about the various neural prostheses, or implants, that are used to restore hearing. And the most famous of those, of course, is the cochlear implant. We hope to have a visit from a subject who's deaf and who uses a cochlear implant, and she'll be able to demonstrate her implant to you and answer questions if you want to ask her about her cochlear implant.
We'll also cover a couple of other different types of implants that are used to restore hearing. So this first slide talks about sensorineural hearing loss in general, and a very common pattern of sensorineural hearing loss, which comes from the basal turn being most affected. This is an audiogram, if you will-- a graph of hearing level in terms of sound pressure level, hearing threshold as a function of sound frequency-- for, in this lower curve, a normal hearing human and, in this upper curve, a typical pattern for someone who has a mild to moderate sensorineural hearing loss. And so as you can see, this individual with the hearing loss has perfectly normal hearing thresholds up to the middle frequency, 1,000 Hertz, but then their threshold of hearing deviates from the normal so that by about 10,000 Hertz, they have a hearing loss of 60 dB or so. This is a very common pattern of hearing loss that arises because, for some reason, the basal turn is more affected. The basal turn is where you have the responses to the very highest sound frequencies. This person will come into the Massachusetts Eye and Ear Infirmary, for example, and complain to the doctors and audiologists there when the hearing loss becomes noticeable-- when they have a problem understanding speech. Hearing loss is intimately entwined with our perception and understanding of speech. And so when people have problems understanding speech, they often seek medical advice. Now, the most important frequencies in speech are between about 300 and 4,000 Hertz. So you can see this person's hearing loss is clearly getting into the speech range, and they may have problems discerning the more high frequency parts of speech. So what are those? Typically vowels, which have the formants that we talk about, are very low frequency, something like ahhh and oooh. Those are very low frequencies. But I think if you could read this diagram a little better, you'd understand that high-pitched sounds, like the "sss" sound of an s, or something that has an abrupt onset, like a "t", have a lot of high frequencies in them. And so those are going to be the first types of speech sounds that are hard to understand for the person with the impaired graph on the top slide there. Now at first, you might say, well, what we should do is get a hearing aid that amplifies the frequencies that are in the loss area. OK. So to do that, you'd have to have a pretty sophisticated hearing aid. You'd have to, for each sound frequency, dial in the exact amount of amplification. And hearing aids are very good these days, and there are hearing aids that can be used in a frequency-specific manner-- that is, don't amplify anything at low frequencies and amplify exactly the amount of the loss at high frequencies. So at first, it sounds like a good idea, but we'll get into the reason that that doesn't always work later on. So that simple solution is to just install a hearing aid. A hearing aid-- everybody has probably seen one, in older people-- is simply an amplifier. It has a microphone. It picks up sound. It boosts the sound in whatever frequency ranges the audiologist programs. And then it has a little speaker, and it plays the boosted sound into the ear canal of the person. So it's just an amplifier. So you can have hearing aids that work very well and are frequency-tailored.
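As a sketch of what frequency-tailored amplification could look like digitally, here is a toy multi-band amplifier: split the input into bands and apply a prescribed gain to each. This is a minimal sketch; the band edges and gains are invented for illustration, not a real audiological prescription, and a real hearing aid adds compression, feedback cancellation, and much more.

```python
# Toy frequency-tailored amplifier: per-band gain via a filter bank.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16_000   # sample rate (Hz), assumed

# (low edge Hz, high edge Hz, gain in dB) -- invented example values
bands = [(125, 1000, 0.0), (1000, 4000, 20.0), (4000, 7900, 40.0)]

def frequency_tailored_gain(x):
    out = np.zeros_like(x)
    for lo, hi, gain_db in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out += sosfilt(sos, x) * 10 ** (gain_db / 20)  # boost this band only
    return out

# A 6 kHz tone gets ~40 dB of gain; a 500 Hz tone gets essentially none.
t = np.arange(0, 0.1, 1 / fs)
aided = frequency_tailored_gain(np.sin(2 * np.pi * 6000 * t))
```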
And they especially work very well for the type of hearing loss that's called conductive hearing loss, because there the problem is simply getting the sound into the inner ear, and amplifying the sound, in a person with a conductive hearing loss, works very well. It doesn't work so well in sensorineural hearing loss, for reasons we'll get into in a little bit. Now how do these hearing losses happen? There are a variety of causes that can damage your hearing. We all have fun with sounds, and we tend to have a lot of fun when the sounds are very intense. And these are some-- this is an old transparency, obviously. But this is a graph of sound pressure level here. Remember, the thresholds of hearing are way down here. And these are some example sounds that have very high level, and most of these are damaging, at least if you listen to them long enough. Obviously, gunshots and firecrackers are very damaging. Those sounds are in excess of 120 dB. So a single gunshot, if it's close to your head, can be damaging. So we had-- we're going to have an example of that in just a minute. Some of these sounds are more moderate, around the region of 100 dB SPL, for example, a chainsaw, a leaf blower, the symphony orchestra here. So everybody goes to the symphony, right? So obviously, these things depend on how close you are to the object that's generating the sound, right? So if you go to the Boston Symphony, you're not going to endure a hearing loss. But if you have good seats and are looking down on the symphony, you'll see that a lot of the woodwind players who are sitting right in front of the brass, for example, the trumpet players-- they have a little screen behind them, a plexiglass screen that's pretty invisible unless you're looking for it. That causes a sound shadow, and so it protects their ears from the blast of the brass. And I've also been in the symphony where, sometimes, the woodwind players would actually put in ear plugs when there's a big brass solo and the brass is blowing like crazy. And then after that big solo, they take them out, and they play their own little solo. So professional musicians are obviously very worried about their hearing. And it can be deafening if you're close to a trumpet or a brass instrument, or in front of a big timpani or snare drum. Of course, these things depend on how long you listen to them, so the damage is cumulative. It may take many years of exposure at 90 dB to produce a hearing loss, even though exposure to a really high sound level, like 160 dB, may give you a hearing loss after just a single exposure. So legally, employers are supposed to provide hearing protection for their workers. If you send a worker into an 85 dB sound pressure level environment, as is common in a factory, you are supposed to provide the worker with hearing protection if it's an 8-hour shift. If it's only a 4-hour shift, you don't have to. If the sound level is 95 dB, it's something like 2 hours. If it's 100 dB, you can expose someone to an hour of that without hearing protection, but if it's longer than that, you have to provide hearing protection. So here's another example. In movie theaters, Godzilla is 118 dB because it's a terrible roar. It can be deafening if you are right near the loudspeaker. And if you go to Godzilla 100 times. OK? All right. So loud sound is one of the causes of hearing loss, so let's just make a little list here. High level sound is certainly one of the causes of hearing loss, and I think we have some examples here.
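Before the example, a quick aside: the exposure-duration rules just described amount to a simple halving rule. A minimal sketch, assuming the NIOSH-style recommended limit of 85 dB(A) for 8 hours with a 3 dB exchange rate; OSHA's legal limits use different numbers, and the figures quoted in the lecture are approximate.

```python
# Halving rule for permissible noise exposure: the allowed time halves
# for every `exchange_db` increase above the base level. The 85 dB(A),
# 8 hour, 3 dB values are the NIOSH recommendation, used here as an
# assumed example; other standards use other numbers.
def allowed_hours(level_dba, base_dba=85.0, base_hours=8.0, exchange_db=3.0):
    """Allowed unprotected exposure time in hours at a given level."""
    return base_hours / 2 ** ((level_dba - base_dba) / exchange_db)

for level in (85, 88, 94, 100):
    print(f"{level} dB(A): about {allowed_hours(level):.2f} hours")
# 85 -> 8.00, 88 -> 4.00, 94 -> 1.00, 100 -> 0.25
```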
So this is an example from some research that was done by one of the professors I had in graduate school, Joe Hawkins. And he studied temporal bones, which is where the cochlea is in humans. So he would get temporal bones after a subject had passed away and had donated their body to science. And they were useful if he knew something about the individual, like if they had had their hearing tested or if you knew a little bit about what activities they liked. These particular data are from a human who was an active hunter and fired a gun a lot. And the specimens shown in the photomicrographs here are looking down onto the surface of the inner ear, or cochlea, on the left side and the right side of the subject. The bone that's over the snail shell, or cochlea, has been thinned away with a dental drill, and you can see the basal turn very nicely. The apical turn, you can't really see very well from that, but you can thin the apical turn as well. Sometimes it's cut off and thinned in a different dish. But anyway, what you're looking at here-- I should get out my pointer, so I can point a little better. So this is the very basal end of the cochlea, spiraling up. And the human has about 2 and 1/2 or 3 complete turns of the cochlea. And this white structure here is the organ of Corti sitting on the basilar membrane. This specimen is stained with a stain called osmium, which stains lipids and especially myelinated nerve fibers, so you can see a lot of myelinated nerve fibers. They look like threads coming out. And here are some more threads up here. And I said the organ of Corti is here, but actually, it's completely gone here on the left side, and you can see it begin right about here and go apically here up into the apical turn. You can see a very little bit of it in the extreme basal part of the cochlea, and that's diagrammed here on this graph. This is the length along the basilar membrane from the base over here on the right to the apex. And this y-axis graphs the percent of the hair cells that are remaining. And in the basal turn, there are almost zero hair cells remaining. They're all gone. Maybe a couple little islands here and there, but it's virtually 100% hair cell loss. And as you go around the upper basal turn, you have most of the hair cells remaining in the case of the solid line, which refers to the inner hair cells. And then you have, in the dashed lines, the three rows of outer hair cells, and there, maybe between 30% and 70% remain, depending exactly where you are. But here again, something has damaged these hair cells-- completely wiped them out in the basal half of the cochlea, and wiped out a lot of the outer hair cells while leaving most of the inner hair cells elsewhere. Here's the subject's right cochlea. And in this case, you can see-- you don't even need the graph-- you can see that the organ of Corti is pretty intact. Here is a little island of loss, and then another little island of loss, but then you have an intact organ of Corti all the way up to the apex. And that's reflected in the counts here, where, except at the very basal part of the cochlea, which doesn't appear in the micrograph, you have a pretty normal complement of inner hair cells. Outer hair cells are not in such good shape, but they're present throughout the cochlea on this right side. Now, also on here are graphs of the nerve fibers. Those are those little thread-like stained elements that appear very nicely in this osmium stain, and they're pretty much intact through the cochlea.
Maybe in places here where the hair cell loss is really bad, some of the nerve fibers are gone, and that's indicated by this interruption here. But this is another example where whatever damaged this fellow's hair cells left the nerve fibers relatively intact. And this offers some hope to somebody who wants to install a prosthesis like the cochlear implant and stimulate the remaining nerve fibers, because they're going to stick around even if a lot of the hair cells are gone. So this subject, as I said, was an enthusiastic hunter, and he was right-handed. And as you can see right here, this is a top view of a person firing a rifle. The left ear of the subject is pointed toward the tip of the gun, and that's where the bullet emerges, and that's where the shock wave of the rifle, when it fires, comes out. This is a modern rifle, not a flintlock rifle where you have a lot of smoke and sound coming out down here. Most of the sound comes out at the tip of the gun. And this subject's left ear is pointed right at that and has taken the brunt of the blast in terms of the loss of hair cells. The right ear of the subject is pointed more away from the tip of the gun and is protected, and it has a pretty normal complement of hair cells. Now, that's not saying that this person didn't go to lots of rock concerts, and didn't take lots of drugs that damage your hearing, and isn't an 80-year-old person, so we're going to add a few things to our list here. There are some drugs-- for example, aminoglycoside antibiotics. They are really great antibiotics, but they have this side effect of damaging the hair cells. Three, the aging process damages hearing. And in this kind of a study where you're using a human, you cannot control for these other factors, and others that I haven't mentioned, but what you can do then is compare left to right. Because presumably, if the subject took drugs, they appeared in both the left and right ears. And obviously, the subject had the same aging on the left and right side, so whatever differences there are between the left and the right, we attribute to the blast from the rifle that the subject shot. So this left-right difference would be attributed to the high level sound. Here are some pictures from an experimental animal that has undergone a high sound level, or an overexposure. This is a normal-- I think, in this case, it's a Guinea pig cochlea. And you see the row of inner hair cells here. There are 1, 2, 3, 4, 5-- about a dozen inner hair cells. That's just one row of inner hair cells. And then there are three rows of outer hair cells, looking down onto the tops of the hair cells where you have the stereocilia sticking up at you. And there are 12 or 15 outer hair cells in each of rows 1, 2, and 3. And it's such a regular pattern, and they're all perfectly there. After the overexposure to sound, quite a few inner hair cells are lost. Those that are remaining sometimes have abnormal stereocilia. There are a number of outer hair cells lost, in this case in row 1. And some that are remaining are indicated by these arrows to have abnormal stereocilia. And here's another example from a different place in the cochlea where almost the entire third row of outer hair cells is wiped out by the overexposure to noise. So what happens when you lose a hair cell? Well, the nearby supporting cells fill in its space, and they take over. In mammals, such damage is permanent. Once a hair cell is killed, it never grows back.
And there's a lot of interest in trying to coax the nearby supporting cells in these damaged cochleas to become hair cells. But so far that has not been possible. The field got really excited about 20 years ago when it was found that with this type of damage in a bird cochlea, if left for a month or so, you see small hair cells reemerging. And if you wait long enough, they become full hair cells. In the bird cochlea, the surrounding supporting cells, after damage to the hair cells, can divide and become new hair cells-- in the chicken cochlea, for example. And this was a serendipitous discovery, where people were working on damaging chicken hair cells, and they were always waiting a couple days after the exposure to look at the cochleas. And there was a holiday vacation where they exposed the animals before, and they went out of town and came back three weeks later, and they thought something must have gone wrong with the exposure because the hair cells were there. They were fine. But they figured out later that, actually, the nearby supporting cells had grown back new hair cells. So that doesn't seem to help us in the mammalian pathway. There's some sort of growth factor or growth pathway in birds where these hair cells grow back, but not in the mammal, unfortunately. So this is an example from a cochlea that's been treated with an aminoglycoside, and this is just to remind me to tell you that, once again, you can count hair cells along the cochlea. This is a plot of hair cells present, where lots of black bars means lots of hair cells. And this is a beautiful example where this particular drug treatment, which I believe is kanamycin, at a certain dose, doesn't affect the inner hair cells at all. But look at the outer hair cell loss in the basal part of the cochlea-- virtually complete outer hair cell loss, showing that the outer hair cells are more sensitive, more labile to this drug treatment, than are the inner hair cells. And once again, the most vulnerable part of the cochlea is not the apex. 0% distance from the apex is up here. And the basal region would be down here, and that's again the most vulnerable part of the cochlea for some reason. We can speculate about why this might be the case for drug treatment. We don't know this, but maybe the drug appears in a higher concentration in the basal part of the cochlea. In the cochlea, like the blood-brain barrier you have in the brain, you have a blood-cochlea barrier. Obviously, some drug has gotten into the cochlea, but maybe the blood-cochlea barrier is more permeable down here in the base, and in the apex not as much drug got in. That's an idea. It hasn't been borne out by experimental evidence, but it's an idea that people have in mind. Or, it could be that the outer hair cells are just, for some reason, easier to kill in the base. That's more suggestive, given that all of these things affect the hair cells in the base more than the apex. Now, these were some of the original experiments that showed what outer hair cells do for us in the sense of hearing. So earlier in this course, we had the effect of knocking out the outer hair cells by knocking out the gene for Prestin. OK? In this case, the outer hair cells are knocked out by the drug treatment. So you've lesioned the outer hair cells in the very basal part of the cochlea. The inner hair cells are present. Let's look at the tuning curves from auditory nerve fibers in that preparation. Now let me remind you again what's happening here.
So you have the inner hair cells, and you have the outer hair cells, which have been killed by the drug treatment. And then you have most of the auditory nerve fibers coming from the inner hair cells in the auditory nerve going to the brain. And the experiment then is, with recording electrodes, to record from the auditory nerve fibers, get a single nerve fiber, and take its tuning curve. And that's what's shown on this top graph. So tuning curves from the normal region of the cochlea are normal shaped. They have sharp tips and tails, and normal sensitivity. In the region of the cochlea where the drug treatment has lesioned the outer hair cells, the tuning curves look extremely abnormal. There's a tail, whatever tip there is is a tiny little tip, and there's a tremendous loss of sensitivity-- as much as 60 or more dB lost. Basically, these are tipless tuning curves. And now we know that the outer hair cells have their electromotility function. They are the cochlear amplifier. Without the amplifier, you lose the tip on the tuning curve. So that should be a mini review. This is the way the outer hair cells were originally discovered to be important in the sense of hearing-- providing the normal sensitivity and the sharp tuning. You can get all kinds of tuning curve abnormalities depending on whether you lose all the outer hair cells, cause disarray of the stereocilia, or have partial loss of the outer hair cells. All these kinds of things can be found after noise damage, depending on the place in the cochlea you look at, the type of noise, the length of the noise exposure, and the animal. There's a lot of variability in noise damage. From exposures to 10 different animals, you can have 10 different types of hair cell loss. Noise damage is tremendously variable from subject to subject. Now, we also had-- this is another review. We also had the example of a psychophysical tuning curve. So this is a normal psychophysical tuning curve. Can somebody explain to me what the paradigm is? A psychophysical tuning curve, it's taken from a human listener, right? What's the paradigm? We had this in class, so we should all know what this is. For a psychophysical tuning curve, you have a probe tone. I think that, in this case, the probe tone is right at the tip of the arrow. And the subject is instructed to listen to the probe tone and say when you hear the probe tone. Give the probe tone. Yes, I hear that definitely. Give it again. Oh, yes, I hear that. No problem. Then, you add a second tone, maybe a little bit higher in frequency than the probe tone. The probe tone was-- let's say, in this case, 1 kilohertz. The second tone, the masker tone, is 1.5 kilohertz, let's say. Introduce that. The person says, yeah, I still hear the probe tone. I hear this other tone, too. Oh, don't pay attention to that. Just listen to the probe tone. Sure, I hear that. Then, you boost the level of that second, masker tone up to, in this case, 90 dB. The person says, I can't hear that probe tone anymore. Can you turn it up? And you plot that on your graph. That's a hit. That's a point. In that case, the masker has made the probe inaudible. And you go on varying your frequencies and levels until the masker masks the probe and the person says, I can't hear the probe anymore. And you get the so-called psychophysical tuning curve, which has this very nice tip to it and a long low frequency tail, from a normal hearing person. But a person with a sensorineural hearing loss often has a psychophysical tuning curve like this.
This should remind you of the tuning curves that we just saw from auditory nerve fibers in the damaged cochlea, which is basically a tipless tuning curve. Perhaps in this case, the outer hair cells have been damaged by loud sounds, and you have just the tail of the tuning curve. Now, here we come to the crux of why, in this person who has a sensorineural hearing loss--they still have hearing, but they have a big loss--why won't just a hearing aid work? You can certainly install a hearing aid into this person's ear canal and boost their threshold from what they used to hear down at 0 dB to what they now hear at 60 dB. You can amplify the sound by 60 dB. OK, fine. Then, they'll start to say, yeah, I hear it, no problem. What happens when this person goes to a crowded restaurant, and there's all this low frequency din? Before, all the low frequency din was here. It didn't get into the response area of the sharply tuned tuning curve. Now, you have all these low frequencies amplified by the hearing aid. They now get into the response area of the nerve fiber. That low frequency signal, which you don't want to pay attention to because you're listening at 1 kilohertz, is a competing, or masking, stimulus along with the signal. And so now, the person with the hearing aid and sensorineural hearing loss goes into the crowded restaurant and says, I hear very well, but I can't understand the person across the table speaking to me. All I hear is this big noise. And no matter how I adjust my hearing aid, it just sounds noisy. I can't understand anymore. They can hear--they're certainly not deaf--but they can't understand anymore, because before they had sharply tuned frequency tuning, and now they have no frequency tuning at all. It's very broad. That's the problem that a hearing aid can't deal with in terms of restoring normal hearing to a person with sensorineural hearing loss. Before I start to talk about implants, let me just remember to say what other processes affect our hearing. And we have a list, just so I don't forget anything. And one of the important things is genetic causes. So maybe you can't see that from the back of the room, but number four here is genetic causes. There are babies who are born deaf, and in the state of Massachusetts, as in most states, it's mandatory to test infant hearing at birth, because you want to install a hearing aid or a cochlear implant at a young age if the baby has hearing loss. And another cause that we should list is certain kinds of infections and disease processes. Number five, cause of hearing loss, is diseases, for example, meningitis. And one of the MIT students that I used to use for demonstrations of a cochlear implant is deaf because at age 12, he got very sick with meningitis. And when I asked him, how did you go deaf? He said, well, I got sick with meningitis. And I was so sick that my MDs treated me with aminoglycosides so that they would kill the meningitis bacteria. And he isn't sure whether it's the meningitis or the side effect of the aminoglycosides that made him deaf. But when he woke up, he was cured, but he was deaf. So in some cases you're not sure which of these agents caused the hearing loss. So that's a pretty complete list now. Do we have any questions about what things cause hearing loss? And you might imagine that, during our lifetime, some of these things will be understood in a better way. It's clear why loud sound causes hearing loss. I mean, there's mechanical action. These things are moving.
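The hearing aid argument above is really just decibel bookkeeping, and a few lines make it explicit. The gain and tip-rejection numbers are illustrative assumptions, not measurements.

```python
# Speech at 1 kHz against low-frequency restaurant din. A hearing aid
# applies the same gain to both, so the only thing separating speech
# from din is how much the tuning-curve tip rejects off-frequency sound.
speech_db, din_db = 60.0, 65.0

def speech_minus_din(tip_rejection_db, aid_gain_db):
    speech = speech_db + aid_gain_db
    din = din_db + aid_gain_db - tip_rejection_db  # din is off-frequency
    return speech - din

print("normal ear, no aid:     ", speech_minus_din(50.0, 0.0), "dB")
print("sensorineural ear + aid:", speech_minus_din(0.0, 40.0), "dB")
```

Note that the gain cancels out of the difference: amplification restores audibility, but with the tip gone there is nothing left to keep the din out of the response area.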
You could damage the very sensitive apparatus, like the stereocilia. Drugs: aminoglycosides bind to some of the membrane channels in hair cells. And maybe a therapy for this ototoxicity, this hearing loss created by these aminoglycosides, could be to install some competitive binder that would occupy the binding sites while you gave the drug therapy. We don't know at all what causes the hearing loss with aging. That's a very active subject in today's research. Genetic causes, same way; usually there's some sort of developmental factor or protein that's necessary for normal hair cell development, and it's lost, in the case of a recessive genetic problem. It's pretty clear how that arises. Meningitis--it's not clear how those diseases kill hair cells, but they certainly do. It's not known right now how the hair cells are lost in meningitis, but there's certainly room to imagine that will be worked on quite actively in the next 10 or 20 years. So let's talk about, now, people who have complete hearing loss and are eligible for the so-called cochlear implants and other types of implants that restore hearing. So this is a nice slide from, I think, the paper that we're reading for today. And actually that reminds me: besides that paper, which is a very short one, easy to read, the textbook reading that I've assigned for today, which is most of chapter 8 on auditory prostheses, is excellent. It's really up to date. It tells you a lot about cochlear implants and coding for speech, which I probably won't have time to get into. The part on hearing aids past and present is not so important, but it has a lot of good information on cochlear implants, so I'd encourage you definitely to read that textbook passage today. And the research report by Moore and Shannon is a very simple, easy to read paper. It shows you the sites where these various implants go. So the cochlear implant, obviously, is installed into the cochlea, right here. For people who have lost their hearing because of a problem with their auditory nerve, if you put a cochlear implant in, it's not going to do any good, because the messages aren't going to be conveyed by the nerve into the brain. And what's an example of someone like that? Well, a disease process called neurofibromatosis type 2, or NF2, is a disease process where the subjects get tumors that grow on various nerves. And a very common type of tumor in NF2 patients is called a vestibular schwannoma. And a schwannoma is a tumor of the Schwann cells that normally provide the myelin covering of peripheral nerves. And it grows on the vestibular branch of the eighth cranial nerve. Obviously, that's quite near the auditory branch of the eighth cranial nerve. And these tumors grow and grow. They probably rob the nerve of its blood supply. They probably put pressure on it, and the tumor cells certainly infiltrate in amongst the fibers. And when the surgeon goes in to remove that type of tumor, invariably the eighth cranial nerve is cut. So in that case, the subject has no nerve conveying messages from the cochlea into the brain. Well, the surgery is right here. You're removing a tumor from here, so it's fairly easy to go ahead and install an implant into the cochlear nucleus of the brain. The cochlear nucleus is visible. And that's what's called an auditory brainstem implant. It should be called a cochlear nucleus implant, but it's called an ABI.
And an ABI--I'm not going to talk too much about it--but suffice it to say, it's an array of surface electrodes. There are two companies making these. One has 15 electrodes, and one has 21 in a checkerboard pattern. And the electrodes go onto the surface of the cochlear nucleus, and they're placed there during the surgery. There was an experimental penetrating electrode array, or PABI, but that's been discontinued because of side effects. Some of these patients got trigeminal neuralgia, or pain sensations from nearby nerves, maybe from the fact that these electrodes penetrated into the brain. And so that underwent an FDA trial, but it's no longer used. But this surface ABI electrode is used in cases of NF2 or in other cases where the nerve function is compromised. Those implants don't work very well. So look at this graph here. This is a graph of the different types of implants; especially, I'll call your attention to the cochlear implant and the auditory brainstem implant. For the cochlear implant, if you do a word recognition test--how often they get the words correct--a lot of the users are placing at 100% of the words. So the task here is the audiologist stands behind the subject, and they say, repeat after me: baby. And the person says baby. Sunshine, and the person says sunshine. And you say, Red Sox. And the person says, Cardinals. And they got one wrong. But anyway, you can do these tests--it's important to stand behind the person to make sure they're not lipreading. But a lot of cochlear implant users can get 100% on these tests. Now, for the ABI, the auditory brainstem implant, you've got many of the subjects, if not all of them, saying the wrong word or not giving you any response here. So what good is the ABI? The real success story of these prostheses is when the person can understand speech. If the person can't understand speech, this thing isn't doing them too much good. But that's not to say that the ABI isn't successful in certain ways. The ABI is sometimes thought of as a lipreading assist device. So it helps these subjects read lips better. For example, if you were deaf and you looked at my lips, and I made two different sounds, pa and ba, those would look exactly the same if you were trying to read my lips. But they sound different to you--you have good hearing--and they may sound a little bit different to the ABI user, and that may give the ABI user a little bit of a step up versus someone who's just using lipreading. Now, just for completeness, I'll talk about the auditory midbrain implant. The idea here is to put the implant higher up in the pathway. Why would you want to do that? Well, some people think that the ABI doesn't work because there's been this tumor here. And the surgeon has been hacking on the tumor to try to get it out, yanking and pulling on it. If the tumor didn't damage the cochlear nucleus, well, the hacking and tugging on it did. And so maybe you should put the implant further up, where you haven't been hacking and everything's normal. And so that's the idea behind the auditory midbrain implant, which goes into the inferior colliculus. And there have been five patients who've undergone the auditory midbrain implant--actually six, five very well documented. And the outcomes have been no better than the ABI, but that's because in four out of the five well-documented cases, the electrode didn't hit the right spot.
The inferior colliculus is pretty small, and the part that you really want to go into is the tonotopically organized spot, so that this needle electrode array--it's a long electrode array with about 16 contacts on it, on this needle--can be put into the tonotopic part of the IC. And it didn't get into the right place in most people. But even in the one individual where it got in the right place, it wasn't any better than the ABI. But there is going to be another clinical trial in which they implant five more subjects. And hopefully, the outcomes will be better on that. So those are the various types of electrodes. And, obviously, the cochlear implant is the real winner here. And we have been having readings--Hi, Sheila--we've been having readings in our class, and I'll do a reading now about the cochlear implant. This is from--this is not made into book form yet, because this is from the esteemed academic publication called Yahoo Finance, on the web. And this is dated September 9, 2013. And the subject of this column is the Lasker Award. So the Lasker Award, does anybody know what the Lasker Award is? It's sometimes called the American Nobel Prize, so it's a very prestigious honor. It's given in several different fields, mostly in medicine and biomedical areas, and so there are sub-groups. And this one was given in the clinical medical research category. So: the 2013 Lasker Clinical Medical Research Award honors Graeme Clark, Ingeborg Hochmair, and Blake Wilson for developing the modern cochlear implant, a device that bestows hearing on profoundly deaf people. The apparatus has, for the first time, substantially restored a human sense with a medical intervention. Blah, blah, blah. Throughout the world today, there are about 320,000 people outfitted with cochlear implants. Most recipients can talk on their cellphones and follow conversations in relatively quiet environments, and an increasing number of patients with severe age-related hearing loss are taking advantage of this marvelous invention. So of the three people here, two of them are actually founders of cochlear implant companies. You can think of Nobel Prizes and prizes like this being awarded to people who made big discoveries. And certainly, in the third case, Blake Wilson did. But in the first two, it's really conveying a technology to the masses that was recognized by this award. So that's the 2013 Lasker Award. So let's look a little bit at what a cochlear implant is, and that's shown in the next couple of slides. So the cochlear implant has an internal part, which is a series of electrodes that go into the cochlea, and the electrode lead comes out from here and goes into a so-called internal coil--sorry about that--and this is sometimes called the receiver, because it gets messages from the external coil, sometimes called the transmitter, across the skin here. So there's skin between the external and internal coils. On the outside, you have a microphone, which picks up the sound and sends the microphone messages to a so-called speech processor. The speech processor transforms that sound waveform into a series of electrical pulses that are sent down the electrodes and stimulate the remaining auditory nerve fibers in the cochlea. So the cochlear implant has the electrodes, the internal part, the external part, and the speech processor and microphone. And I have a demonstration cochlear implant here. And I'm going to pass it around. These things are very valuable, so on demonstration models, they strip off the electrodes.
So the part I'm passing around is just this tube that goes down here, but not the electrodes themselves, and I think it has the internal and external coil, and obviously not the microphone or the speech processor; so it's just to give you an idea of the size. And I think on this one, the tube comes down, and it coils around a little like the electrodes do as they coil in the cochlea. Now, this next slide is pretty important, because it shows the electrodes coming into the cochlea in a cutaway diagram. And so the electrodes come in at the basal turn of the cochlea. Remember, there's an area in the bone that has a little membrane over it called the round window. Surgeons can go in there and make a tear in the round window and put the implant in there. Or, they can drill a hole a little bit apical from the round window and start in the base of the cochlea, which is the big part of the cochlea, and then thread the electrode array more and more apically into the cochlea just by pushing it. Now, the cochlea gets pretty small as it goes very apically. And the electrodes don't fit into the apical region so far. So current cochlear implants can only be pushed in far enough to cover about the basal half of the cochlea, the basal 50%. So that seems like a huge limitation. It's a bit of a limitation. Fortunately, it's not an extreme limitation, because the spiral ganglion doesn't go all the way to the apical part of the cochlea. The ganglion is where the cell bodies of the auditory nerve are. And the ganglion ends about 3/4 of the way out, so the last quarter wouldn't be helpful anyway. And here are the various electrodes along the cochlear implant. And modern cochlear implants have 22 electrodes. And they are hooked up--I'll show you how they're hooked up in just a minute. Actually, I'll show you how they're hooked up right now. The way this works is the microphone signal comes into the speech processor here, and the microphone signal is split up into various bands. The microphone might pick up only high frequencies, in which case this band would be active, or it might pick up middle frequencies, in which case these bands would be active, or it might pick up low frequencies, or it might pick up all frequencies. It depends on what the sound is. The output of those filters is sent to some processing schemes, which eventually result in little electric pulses, and those are shocks that are sent down into the cochlear implant electrodes. And this is supposed to be--actually, something's not happening here automatically--this is supposed to be electrode number one, which is the most apical electrode, and so on and so forth. And this scheme only ends at electrode 18, so this is an old diagram here, because current cochlear implants have 22. So if you are hearing very low frequencies, you're going to be stimulating very apical electrodes. And if you're hearing the highest frequencies, you're going to stimulate the most basal electrode. And this is a recapitulation of the place code for sound frequency, where the base of the cochlea transduces, in normal hearing, the high frequencies, and the apex transduces the low frequencies. So we said the cochlear implant doesn't go all the way apically; it can't fit there. So what happens? Well, the apex isn't very well stimulated in these designs. And so you will hear descriptions of people who have their implant turned on for the first time, and they'll say it sounds like Donald Duck. It sounds really shrill and very high-pitched. Well, a lot of the apex--not drawn here--is not stimulated.
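The band-splitting scheme just described can be sketched in a few lines of code. This is a minimal illustration, not any manufacturer's algorithm: the channel count, band edges, filter orders, and envelope smoothing are all assumptions chosen for clarity (real processors use 22 channels and clinically fitted maps).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Toy cochlear implant front end: split the microphone signal into
# log-spaced bands (one per electrode, electrode 1 = most apical = low
# frequencies) and extract each band's envelope, which would set the
# pulse amplitudes sent to that electrode.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
mic = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)

n_channels = 8
edges = np.geomspace(200, 7000, n_channels + 1)

for ch, (lo, hi) in enumerate(zip(edges[:-1], edges[1:]), start=1):
    band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(band_sos, mic)
    env_sos = butter(2, 200, btype="low", fs=fs, output="sos")
    envelope = sosfiltfilt(env_sos, np.abs(band))  # rectify and smooth
    print(f"electrode {ch} ({lo:5.0f}-{hi:5.0f} Hz): "
          f"mean envelope {envelope.mean():.3f}")
```

With this input, only the channels containing 300 Hz and 2,500 Hz carry large envelopes, so only the corresponding apical and mid electrodes would be driven hard--the place code in miniature.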
So what happens? These people, after a month or two, say, oh yeah, it's sounding better and better. And so there's some sort of learning or plasticity that makes things settle down, and the voices sound a little bit more normal--maybe not normal, but more normal. And, as you saw from the graph before, perfectly normal word recognition scores can be achieved even though you're stimulating just a portion of the cochlea. Now, I have a movie here, and it gets on my nerves, but I want to show it to you because this is what's shown to patients who are about to get a cochlear implant. It gets on my nerves because you see hair cells in here that have stereocilia that are just waving around, but the stereocilia are really rigid. But anyway, I thought it would be interesting just to see what someone sees when they are getting this stuff about a cochlear implant. Let's see if this movie will play. [VIDEO PLAYBACK] -In normal hearing, the hair cells in the inner ear-- PROFESSOR: I hate this. I mean, the basilar membrane's way over here. The hair cells-- -The hearing nerve still remains functional, but the hair cells have been lost or damaged. In a cochlear implant system, sound enters a microphone and travels to an external mini computer called a sound processor. The sound is processed and converted into digital information. This digital information is sent over a transmitter antenna to the surgically implanted part of the system. The implant will turn the sound information into electrical signals that travel down to an electrode array inserted into the tiny inner ear. The electrodes directly stimulate the auditory nerve, sending sound information to the brain. Bypassing the damaged inner ear, the cochlear implant provides an entirely new mechanism for hearing. [END VIDEO PLAYBACK] PROFESSOR: So that's what the patients see. And how well does it work? Well, we can ask a demonstrator that we have today. Sheila, come on up in front of the class. This is Sheila [? Zu ?], who is an MIT undergraduate. You're a senior now, right? What's your major at MIT? SHEILA: I'm the only one in this major at MIT. I'm in [INAUDIBLE] technology and [? society ?] and [INAUDIBLE] is a joint major between Humanities and [? Chinese. ?] PROFESSOR: Are you an overachiever? SHEILA: I don't know. Maybe. PROFESSOR: So has anybody in the class ever spoken to a cochlear implant user before? SHEILA: I know some of them. PROFESSOR: You know some of these people? SHEILA: We're in the same dorm. [INAUDIBLE] in my sorority. PROFESSOR: OK. Great! So we can do this whatever way you want to. You can ask Sheila questions; if not, I'll ask her questions. Does anybody have any questions? Yes? AUDIENCE: How old were you when you got your implant? SHEILA: So I was born deaf, but I got my implant when I was 3 years old. Actually, I got the surgery when I was 2 years old, [INAUDIBLE] when I was 3 years old. PROFESSOR: So one question I often get about implants in children is how young a child can be and still be implanted successfully. The surgeons at Mass Eye and Ear say that the cochlea is adult size by age 1 and 1/2, so typically, that's the age when a person who is born deaf is implanted these days, age 1 and 1/2. The idea of implanting early is so that the subject can grow up and enjoy normal hearing, especially during a critical period for language formation, which maybe starts at 1 and 1/2, 2 years old.
So if you implant a person later, in their teens, and they haven't heard sound, they have much worse chances of acquiring normal language skills than someone like Sheila, who was implanted early. So the trend is to try to implant as early as possible. SHEILA: I want to point out that I may have been implanted when I was 3 years old, but I didn't start speaking until I was about 5 years old. And I didn't start learning math or learning how to read until I was 7 years old, so I was really delayed back then. PROFESSOR: Did you have a question? AUDIENCE: So I was just wondering, are you, like, reading my lips right now? SHEILA: Yes, I am. So the way it works, I have to see people's faces, like, to read their lips, and I listen too at the same time. I could read your lips alone, but maybe not 100% accurately. Or, if I don't look at your lips and just listen to you, maybe it's not really understandable. So it's like I have to read lips and listen at the same time in order to understand you. PROFESSOR: But if you don't read lips, for example, in situations like talking on the telephone, can you understand someone on the telephone? SHEILA: It depends on the person. If I'm familiar with your voice--like, I know my dad's voice--I can understand him pretty well, but if I'm talking to a stranger on the phone, then maybe not. And also, don't forget, there's a lot of background noise, so that makes it harder for me to hear people on the phone. PROFESSOR: When I--let's say about 10 years ago, in my lab, I hired a research assistant who used a cochlear implant, and she wanted me to shave off my mustache. It was because she had a little trouble reading my lips with my mustache. Now, my wife also has told me I should shave my mustache, but she has normal hearing. SHEILA: I actually had a professor at MIT when I was a freshman--I commented one day that I had a hard time understanding him because he had, like, a full beard. Then, the next day, he shaved off everything. So he came up to me, and I was like, who are you? [INAUDIBLE] PROFESSOR: That's very nice. Wow, interesting! I didn't shave off my mustache, neither for my assistant nor for my wife. SHEILA: [INAUDIBLE] half is better. PROFESSOR: Maybe. Yeah. So if an audiologist were to test your speech comprehension, do you think you'd get every word correct, or do you think you'd miss some? SHEILA: I think I'd probably miss some words, or I may not pronounce some words correctly, because the way I hear words may sound different from what you hear. And sometimes, in the English language, some words don't sound exactly the way they're written down. So I think my speech is not bad because, based on my interactions with people, they seem to understand me most of the time. Yeah? AUDIENCE: Do you know any other languages? SHEILA: I know another language. Yeah. I know a couple of languages. I know American Sign Language. I use it often to help in some cases when the cochlear implant doesn't work. For example, if I'm in a loud bar or party and I can't hear people, if I use sign language, I understand people. I know British Sign Language too, but that's another sign language. PROFESSOR: So you mentioned when you're in a party and you can't hear people--does that mean that there's a lot of noise that masks speakers, and that's a hard situation for you? SHEILA: Right. So, like, the speaker's voice will blend into other speakers' voices or background noises, so I tend to rely on lipreading or some other method to communicate. PROFESSOR: Right.
So for example, with cochlear implants, a common problem is an environment where there are many, many frequencies of sound, like a crowded restaurant or a party, and there's one speaker that you're trying to pay attention to, and the subject gets overloaded on every single electrode. And so some kinds of cochlear implant processors try to circumvent that by trying to pick out the important peaks of the spectrum. So if you're listening to the vowel "ah," you'd have three formants. The processor tries to pick out those formants, only activate the electrodes corresponding to those formants, and turn all the other electrodes off, so that there's a huge difference between where the formant is and where there's nothing. In theory, it's nothing, but actually, it could be a noisy background. So that is one kind of speech processor design. It's called a speech feature extractor, sometimes the SPEAK chip. It's trying to pick out formants so that the user can understand vowels. And it's supposed to be less sensitive to noise masking, which is a huge problem in cochlear implants. A cochlear implant user doesn't have the sharply tuned filter of the normal auditory nerve tuning curve that normal hearing people do. What about listening to music? Do you listen to music? SHEILA: Yeah. Like last month, I went to hear Yo-Yo Ma play. I can hear music, but I'm not sure--I think I hear music differently from you guys, because there's a whole range of frequencies, like you said. But yeah, I can listen to music. AUDIENCE: How often do you go to the doctor for updates? SHEILA: How often do I go to-- AUDIENCE: Your doctor. SHEILA: Oh, you mean the audiologist. I see the audiologist maybe once every year, just for a checkup and remapping. PROFESSOR: So do you get a remapping, or do they just bill your insurance company? SHEILA: Yeah. PROFESSOR: Yes. SHEILA: It's expensive. PROFESSOR: But do they--do you know if they change the mapping for your electrodes? SHEILA: Yeah, they change it, but they told me it's not really a lot of changes. So I think the older you get, the less change is made than when you were younger. PROFESSOR: Perhaps, yeah. So that's interesting. So how do they do that mapping? Do they say, here's electrode 1, and then here's electrode 2--which is higher? Do they do that? SHEILA: Yeah, so I have to go into a special sound booth. So it's like a cell that is completely soundproof. And they will test me on a bunch of sounds, like saying stop if it's too loud, or which one is louder or softer, can you repeat words after me, and so on. And they use all of that input to create a new map. PROFESSOR: Interesting. So apparently with cochlear implant users, the frequency mapping of the electrodes doesn't change in a big way. But in the auditory brainstem implant, they go through yearly checkups and, evidently, the mapping can change a great deal. So it's completely different. In cochlear implants, usually the most apical electrode evokes the lowest sensation of pitch, and more basal electrodes give higher and higher sensations of pitch. AUDIENCE: How easy is it for you to differentiate between two voices? Like, if you didn't see who was talking, and if I said something and then Professor [? Brown ?] said something, how different would our voices sound to you? SHEILA: His voice is deeper, and you're farther away from me. So I think I can tell the difference between you two. I can tell the difference between male and female voices. PROFESSOR: Right. Female voices sound higher, usually.
SHEILA: Higher pitched. Yeah. PROFESSOR: Do you know Mandarin Chinese? SHEILA: Yeah, a little bit. I can speak some Chinese, but not so well, because I haven't used Chinese for a long time. PROFESSOR: It's a tonal language, right? SHEILA: Yeah. Oh my God! PROFESSOR: Does that give you-- SHEILA: It's like, I went to China 4 years ago. I stayed in China for about a month. So my grandma, she couldn't speak English, so I had to speak to her in Chinese. But it's interesting--when I talk to people, like when I speak myself, I have to remember how to use the tones, but if I listen to them, I can't tell the difference between tones. So what I do is I read their lips and listen. And I use context clues, like, if this sound goes with that sound, then I think those sounds form a certain word. That's how I did it, but I believe I could learn Chinese with enough practice and getting used to the sounds. PROFESSOR: Apparently, cochlear implant users have a lot of problems with melodic intervals, octave matches, and tonal languages. The temporal code for frequency that helps us appreciate musical intervals is not present at all in any cochlear implant scheme that's used now. So you only have the place code for sound frequencies; you don't have the timing code in current generation cochlear implants. And the goal, remember, is to allow the users to understand speech, not to recognize musical intervals. Now, if cochlear implant companies were based in China, maybe the goal of understanding Mandarin Chinese, which is tonal, would be more important, but so far, that hasn't happened. AUDIENCE: Are you more comfortable with speaking with people or are you more comfortable with not speaking with people? SHEILA: Well, I'm more comfortable using sign language, but I don't mind going up in front of people and speaking. PROFESSOR: So one time, I had a demonstrator get asked this question: what's the stupidest thing you've ever done with your cochlear implant? And he had a response right away. He said, when I first got my implant, I went to the beach. And I was 13 years old, and I was a typical teenager. And I saw someone else with a cochlear implant, and that was great, because it was the first such person I had ever seen. And so I said, let's swap processors. And that was actually a very stupid thing to do, because each cochlear implant user is not only programmed for their coding for frequency, but also for how much current goes into the auditory nerve. And some people who have electrodes close to the auditory nerve don't need much current at all, but if your electrode is far away, you need a lot of current. And this fellow got a processor that had been dialed in with a lot of current, and so he got a big, severe shock when he turned the other person's cochlear implant on. So that's something they tell you not to do, right? SHEILA: I don't think anybody told me that. But clearly, I was like, OK, total wipeout. That's a bad shock. PROFESSOR: You did that also? SHEILA: Well, we both did. We exchanged at the same time. PROFESSOR: Kids don't usually listen to adults, right? So are there a lot of students at MIT who use a cochlear implant? SHEILA: So far, I think I'm the only one. But last year, there were two of us, but he graduated. So this year, I'm the only one. But I'm not the only deaf student. There are, like, two or three other deaf students, but they wear hearing aids. PROFESSOR: Question. AUDIENCE: How often do you turn it off--or how often is it off? SHEILA: Oh, I turn it off every night.
[INAUDIBLE] when I go to bed, because there's no point when I'm asleep, right? And when I take a shower or go swimming, or if I want to have a [INAUDIBLE] day. On campus sometimes, I get so tired of listening to people, I just take it off. PROFESSOR: Which classes do you turn it on and which classes do you turn it off? That's OK. How long does your battery last? SHEILA: My battery lasts like 3 or 4 days--a disposable battery, 3 or 4 days--but a rechargeable battery, it's like one day. PROFESSOR: And do you have an implant on one side only, or both sides? SHEILA: In my right ear; it's just one side. PROFESSOR: Are you going to get one in the other ear? SHEILA: I'm not so sure, because it takes time. I'd have to go through surgery, see doctors, and so on, so I'm not sure at this time, because I'm so busy at MIT. AUDIENCE: What kind of alarm clock helps you to wake up? PROFESSOR: Do you have an alarm clock? SHEILA: Oh, yes. I have a special alarm clock. So I know you guys use a typical alarm--they make loud noises. But for me, I use an alarm clock with a flashing lamp, so it just flashes light on me, and that helps to wake me up. But some other people say it doesn't work for them, so what they do is take a small vibrating thing and tuck it under their pillow or mattress, and that then shakes them awake. PROFESSOR: What other kinds of problems do you have with your implant besides noise? SHEILA: I wish it was really waterproof, because if I go swimming with my buddies who are not deaf, then how can I hear them? But right now, it's like a computer, so obviously, I can't just jump into the water. AUDIENCE: I was going to ask who taught you sign language. SHEILA: Do you know some sign language? AUDIENCE: A little, but where did you learn? SHEILA: [INAUDIBLE] I learned when I was here at MIT. That was about, like, two years ago--I took a class at Harvard. And then from there, I met a lot of deaf people here at MIT and outside of MIT, so I was able to become comfortable in sign language. I don't know, I guess it's not really hard for me to learn sign language compared to, let's say, Spanish, because it's more visual. You don't need to listen or speak, so it's really all hands and [INAUDIBLE]. So it was pretty natural for me to pick it up. And I use sign language on a daily basis with my boyfriend or with my friends, or whenever I have an ASL interpreter for my class. PROFESSOR: So you often have an ASL interpreter? SHEILA: Yeah, not always, but it depends on the class. For example, if the class is a math or science lecture-based class, like a one-hour-long lecture, then I use [INAUDIBLE], like real-time closed captioning. Someone sits next to me, and on the computer screen, I read whatever the professor is saying in real time, and that person types out everything. For another class, more of a lab or a hands-on class with more moving around, I use an ASL interpreter, because it's just awkward to carry around a laptop reading words on a screen. PROFESSOR: What do you want to do after you graduate? SHEILA: Right now, I'm applying to a Ph.D. program at Harvard that's a program he is a part of, so he may be my professor next year even. PROFESSOR: Yeah. If you graduate. What's the program? This is a little sales pitch. You can tell them about it. SHEILA: The program is part of Harvard, but it was a part of MIT before. It's a Ph.D. program called Speech and Hearing Bioscience and Technology. Right? PROFESSOR: Right.
SHEILA: And it's a program that focuses on hearing, cochlear implants, hearing aids, or anything related to hearing and speech. So right now, I'm applying to that program. We'll see how it goes. PROFESSOR: Good. AUDIENCE: This is personal, but did your boyfriend already know sign language? SHEILA: Oh, he's deaf himself, so he knows sign language. But he's like me. He can speak and sign. But the difference is he had a cochlear--no wait--he has a hearing aid, and I have a cochlear implant. AUDIENCE: Do you think that you've become a faster reader? Like, do you think you're faster at reading than most people because you rely on it more? SHEILA: I would be more what? Faster? AUDIENCE: Faster at reading. SHEILA: Faster at reading lips? AUDIENCE: Like, reading words on a screen or reading text. SHEILA: That's a good question. I never thought of that. It's a possibility, because, yeah, you're right. Have you seen it in person? AUDIENCE: I haven't seen it. SHEILA: You haven't seen it. So it's like, on that computer screen, she types out words really fast. So I have to read fast. But after one hour, I get too tired to read, so I just look around the room. The good thing is, after class, she sends me a transcript, so I can go back and look at it again. I mean, it's really tiring to look at a computer screen for one hour straight, reading words really quickly. PROFESSOR: OK. So the cochlear implant is sometimes called the most successful neural prosthesis, and here we have an example. So let's give Sheila a hand. Thank you very much for coming. And we'll talk next time about brainstem reflexes. So we'll hang around if you have any other questions.
MIT_904_Sensory_Systems_Fall_2013
14_Sound_External_middle_and_inner_ears.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: And I thought that what we'd do today is first go over the syllabus for audition, which is the second part of the course, just so you get an idea of what's in store for you. And then today's lecture will have a big part on sounds, which have physical properties that are very different from the light stimulus you guys have been talking about so far in the course. And we're going to illustrate the different types of sounds: very simple sounds, like pure tones, and very complex sounds, like human speech, which have many, many components in them. And then we'll get into the auditory system, first starting with the auditory periphery. And we'll talk about the three basic divisions of the auditory periphery, which are the outer, middle, and inner ear. And today we're really only going to have a chance to focus on the outer and the middle ears. And so we'll talk about the functions of those structures. So as far as the syllabus goes, each of the lectures has a title. So today's title, October 28th, is sound, external, middle, and inner ears. And each of the lectures has one or more readings accompanying it. Most of the readings are from the textbook. So there's a textbook, Schnupp, Nelken, and King, which is a very good, up-to-date textbook written by two psychophysicists and one physiologist, Israel Nelken. And it's written at just the right level for this class; that is, it's an advanced undergraduate textbook, so it should be pretty easy to read. And it's written very well. These guys are good writers. They have many examples of auditory demonstrations that you can listen to just by clicking in the margin of the text. The demonstrations will come up. And as you can see from today's lecture, I like to give demonstrations, because I like to listen to what we're talking about in terms of how it really sounds to you as a listener. So I'd encourage you to get that textbook. Now you could buy a hard copy, or if I'm not mistaken, Michelle, you can get a free copy online. Is that right? AUDIENCE: There's an online version as well that you can read on [INAUDIBLE]. PROFESSOR: OK. Great. So if you have any trouble figuring that out, let me know. But I think you should easily be able to find the online version. And it should have the demonstrations that you can listen to with earbuds or headphones. For today, there are these passages from the textbook. And then for today, and for many of the lectures, there is another reading, which is a research paper. This one is by Hofman, Van Riswick, and Van Opstal. And it's titled Relearning Sound Localization with New Ears. And this we'll talk about in class right at the end of today's lecture, when we talk about the function of the outer ear, the so-called pinna. And they did a very interesting experiment that addresses what the function of your external ear is. So people always ask me, what am I responsible for in these readings? Well, this is a very specific paper. It has a lot of interesting research methods. The subjects were human volunteers. And there are a lot of details in there that are not that important. What I'm really focused on having you learn is the take-home message.
And the take-home message from this paper is, what is the function of the outer ear, and what is the twist in the title--how can you relearn sound localization with a different outer ear? We'll talk about it in class. But I want you to get the take-home message from these research studies, because they're sort of what we do as professionals in the auditory system. Our day-to-day living is doing research--in some cases on human subjects, in some cases on individual molecules. But how can we learn about hearing from doing these research studies? And I have picked good papers, good research studies, because they really tell us something. There's plenty of stuff out there that gives sort of equivocal results. But this is a really good paper. And you'll have a take-home point from it about how you use your outer ears to localize sounds. So that's an example of a research paper that goes along with this lecture. So just coursing through the syllabus--on Wednesday, we'll have a lecture on hair cells. Next week we'll talk about the auditory nerve, which is the nerve that sends hearing information from your ear into your brain. And we'll talk about frequency resolution--how we can tell one frequency from another. At the end of next week, we'll be talking about the brain, the cochlear nucleus, and all the interesting unit and cell types in the cochlear nucleus. The following week, we're going to be talking about hearing loss, how there can be problems with your hearing. Many of them are treated at the hospital where I do my research, which is the Massachusetts Eye and Ear Infirmary across the river. And there, when the surgeons in my department encounter a deaf person, they give them the option to get a cochlear implant. So a cochlear implant is a device that can be put in your inner ear, and it can restore your sense of hearing. And we'll have a demonstration by a cochlear implant user who comes to class on that date and gives a demonstration of her cochlear implant, which has restored hearing to her, although not perfectly. Then later on in the semester, we'll talk about various other topics, on up through the auditory cortex. And finally, we're going to have a tour of the Hearing Research Laboratory at the Massachusetts Eye and Ear Infirmary, where we'll meet over there and encounter various research projects that are currently going on. And we'll talk about them. There is a written assignment. I guess you guys had an assignment for vision in the class--a written paper? So we have an analog here for the auditory system. And this is the assignment; you can read it later at your leisure. It won't make much sense right now, because we haven't talked about neural circuits for localization of sounds yet. You can look on the syllabus; it's about halfway through the second part of the class. And there are a lot of details here. And it asks you what's been updated since an original model was postulated by a researcher called Jeffress. So that's a paper--I don't think I said how long it should be. How long was the paper for vision? Was there a length? AUDIENCE: [INAUDIBLE] PROFESSOR: Four to six pages? OK, four pages sounds good. If you really want to write six, you probably could. But we'll talk about this when we talk about sound localization in the class. And I think the due date here is written. It's the date of the lab tour. And then, we have a final exam in the class.
And I think, as Doctor Schiller talked about on the very first day, the final exam will be weighted toward the auditory system, which we won't have had a test on by the time this exam rolls around. So I think it's going to be 2/3 audition on the final exam, and 1/3 vision. And there are several review sessions for both senses planned at the end of the semester. So any questions about the organization of what we're going to do? OK, so I'll start today's lecture. And I think the PowerPoint files for today's lecture, and all the rest of the lectures for the rest of the semester, are available on the course website. So you can look at them now or as the lecture comes up. So first, we're going to talk about the physical characteristics of sound--just very, very different from the characteristics of the light stimulus. And maybe light stimuli are so obvious that Peter Schiller probably didn't spend much time in his lecture on them. But I'm going to spend 10 or 15 minutes here on the physical characteristics of sound, because it's very different from the light stimulus. So sound is a mechanical, radiated energy, transmitted by longitudinal vibrations of a medium. So you have to have a medium to transmit sound. You can have light go through outer space in a complete vacuum. But in outer space, you can't have sound, because you have to have a medium. The medium can be various types of things. We're going to talk mostly about sound in air. But you could have sound in water--whales make songs, and they sing to each other, and one whale listens to another. And in between the two is a medium of water. You can have sound in a solid--if you live in an apartment, and you hear your neighbors' music, you especially hear the bass, because low frequency sound transmits pretty well through solids, like the solid of the wall in between the two apartments. Sound can go in many, many different types of media. In air, like we're going to use mostly for this course, you can think of sound as being produced by a sound source like the piston of your loudspeaker. And the piston goes back and forth. It's driven back and forth by an electric voltage, and when it goes this way, it presses on the air molecules in front of it. And it presses them so they're closer together and makes them a little bit higher in pressure. And that's what's meant by this compression or condensation. And these dots close together mean a little bit of an area of high pressure. Then as the piston moves in the other direction, it rarefies the air. It drags some of the air with it. And so that little space right in front of the piston has a lower pressure, because there are fewer molecules per volume than before. So this energy, then, is transmitted through the medium to whatever--a microphone, which detects sound, or a listener, who can listen to the sound. If you have a microphone or some kind of detector, you can plot that pressure at any one point--let's say the microphone is right at the edge of this paper. And you graph the pressure as a function of time on this graph. So here's pressure and here's time. As those radiated energy wave fronts pass you, the pressure will go up. And then it will go down. And then it will go up and go down. And it will repeat over and over as long as that piston moves. So this horizontal line is simply the barometric or static pressure of the air. And sure, the barometric pressure changes a little bit. If there's a hurricane coming, it gets way low.
If there's high pressure, like we have right now--sunny weather--the barometric pressure goes up. But those are very slow fluctuations. And the sound waveform is a very, very fast waveform that goes many times per second. In fact, what we call the sound frequency is the number of oscillations of that pressure wave per second. And they are very fast. As you can see down here on this so-called audiogram, or frequency curve for human hearing, the frequencies, which are on the x-axis here, go from 10 Hertz--Hertz means cycles per second, so one Hertz is one cycle per second. And that is a frequency that's so low it didn't get on this graph, because humans aren't sensitive to frequencies that slow, or that low. Usually the lower limit for human hearing is considered to be about 10 cycles per second, or 10 Hertz. And it extends all the way up to 20,000 cycles per second. And in the middle of the human range, we'll be talking about hearing a lot at a middle frequency of about 1,000 Hertz. So that's a nice, round, middle frequency for you to remember for human hearing. So we're talking about pressure oscillations on the order of thousands of times per second, or hundreds of times per second. So they're very fast. There will be examples during our course where the auditory system--the auditory neurons--keep track of those cycles, even though they're going back and forth thousands of times per second. So we'll come back to that in future lectures. Now this is the audiogram for human hearing in the solid curve here. This is supposed to say human, if you could read it. And on the y-axis is how strong the stimulus is, how loud it is, or in terms of physical characteristics, what the sound pressure is. And this scale goes from minus 20 to 140. And the units are dB SPL, and that stands for decibels sound pressure level. And whenever you hear level in a formula, you should perk up your ears and say, oh, that means there's a log--a logarithm--in the formula. And sure enough, the formula for sound pressure level is 20 times the log of whatever sound pressure you're talking about--whatever you were listening to or measured with your microphone--divided by some reference pressure. That's the formula. And the reference pressure is given as 20 micronewtons per square meter. OK, so let's figure that out. What is a newton a unit of? Anybody? AUDIENCE: Force. PROFESSOR: Right, force--and a meter squared is an area. So we're talking about force per area, and that's pressure. So Newton obviously was, like Hertz, one of the people who was interested in physics. A newton is a unit of force, and force per square meter is pressure. Now in more modern terms, the unit newton per square meter has been renamed the pascal, abbreviated Pa. So it's the same: one pascal is one newton per square meter. In this case, we're talking about micro--micronewtons, or micropascals. So why is that number chosen as the reference for this very important sound pressure level scale? Well, it's actually chosen with the hearing system in mind. What they did in the 1930s, when this was being developed, is they rounded up a bunch of people at a county fair, gave them headphones, and said, we're going to try a nice mid-frequency. Let's try 1,000 Hertz. They gave them a tone at 1,000 Hertz. The listeners listened to it and said, I can hear that fine. Then they turned the level down a little bit. And the person said, yeah, I can still hear that. Then they turned it down so much that the person didn't say, I hear something. They were silent.
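Here is the level formula in runnable form, for anyone who wants to check the numbers that follow; the only ingredient is the 20 micropascal reference.

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals (20 micronewtons/m^2)

def spl_db(pressure_pa):
    """Sound pressure level: 20 * log10(p / p_ref), in dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # the reference itself -> 0 dB SPL
print(spl_db(1.0))    # 1 pascal -> about 94 dB SPL, getting loud
print(spl_db(10e-6))  # half the reference -> about -6 dB SPL; negative is fine
```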
They turned it up a little--yeah, I hear it--they turned it down. They titrated the levels until it was right at threshold, just barely detectable. And they took an average over 30-some people. And they said, that is going to be the basis of our sound pressure level scale. So it's actually a reference that was derived biologically, by testing people's hearing. So that's kind of a nice story. I wonder if it's true. Well, let's look at it. Where does the human hearing curve at 1,000 Hertz fall? Where should it fall if 20 micronewtons per square meter is the pressure you're talking about? It's the same as the reference pressure. What's 20 over 20? It's 1. What's the log of 1? Zero. Correct. 20 times the log of 1 is 0--sound pressure level 0. Well, look at our curve right here at 1,000 Hertz--it's pretty close to 0. Why might it not be exactly zero? Well, the people that were used for this curve were a little bit different from the ones at the county fair. We'll study later on that some people have a hearing loss. Hearing can be affected by the room that you use. Maybe there was a lot of yelling and screaming at the county fair. We have better rooms to test hearing now. It turns out that the human hearing curve is actually a little more sensitive at 2,000, 3,000, and maybe 4,000. So when the pressures go below the reference pressure, the ratio becomes less than 1, and the logarithm becomes negative. It's perfectly fine to have a negative SPL. We have some points on the graph for that--minus 2, minus 3 dB. This other, dashed audiogram, or hearing sensitivity curve, is for a different species--the cat. And the cat hears down to about minus 10 dB SPL--at least this group of cats did. The cats also hear higher in frequency than humans. Dogs and cats can hear about an octave higher--that is, a doubling of frequency higher--than humans do, and maybe some of you have had dog whistles that you blow and you don't hear anything. But the dog comes, because it's a very high frequency beyond the upper limit of human hearing, but well within the hearing range of those species. So different species have different hearing ranges. AUDIENCE: Professor? PROFESSOR: Yes. AUDIENCE: Sorry--just to clarify, is a micropascal then [INAUDIBLE]? PROFESSOR: No. These are units of pressure--micronewtons per square meter--and this is a unit of pressure. SPL is just in these units called decibels. And it's not a pressure-- AUDIENCE: It's the log of that. PROFESSOR: That's right. It's the log of that. Any other questions? So these are sort of the lower limits of hearing. When you go to conversational levels, or the level of a lawn mower, or the level of a concert, the levels get higher--still certainly within your audibility range. As you go to higher and higher levels, you risk damage to your hearing. And at that risk level--it says high risk thresholds here, right around 120 dB--sounds become painfully loud and damaging to your hearing. And that's what this shaded area refers to--gunshots, jet aircraft engines. We'll talk about that during our lecture on hearing loss. So I have some demonstrations, because a lot of people have trouble with the decibel scale. So what is a decibel? And what does it sound like when you change the sound from 50 dB to 60 dB? Well, this demonstration has three parts. And let me read the text first. Broadband noise--sometimes it's called white noise. Broadband noise and white noise are synonyms. And what is white light as a visual stimulus? AUDIENCE: All wavelengths.
PROFESSOR: All wavelengths, right? And so broadband noise means it has all frequencies. It has 10 Hertz, 20 Hertz, 30 Hertz, 1,000 Hertz, 2,000--it has all frequencies. And it sounds like a "shh" sound. So you'll hear this "shh." It'll start out pretty loud. It'll be reduced in ten steps of six decibels each. And I think you'll be able to very clearly hear the difference between the first and the second steps. And the demonstrations are repeated once. In the second demonstration, the same noise is reduced in 15 steps--now of three decibels. So this is a little bit of a smaller step, though it will still be clearly audible. Third, broadband noise is reduced in 20 steps of now one dB. So let's listen to see if we can hear 1 dB steps. RECORDING: The decibel scale--broadband noise is reduced in 10 steps of 6 decibels. [INAUDIBLE] repeated once. [TONE] [TONE] PROFESSOR: OK, was that clear--the difference between one and the other? So that's what 6 dB sounds like. Now, you guys who are up here close to the speakers, you might be starting at 85 dB SPL on the first one--pretty loud. 6 dB lower is 79, and then so on and so forth. You guys at the back are further from the speaker. You're not starting at the same level. You might be starting at 60 dB. You're still going down 6 dB, to 54 dB, in the next step. Everything is linear here. It doesn't matter where you start from, as long as you're going down 6 dB. So where you start doesn't really matter in these demos. RECORDING: Broadband noise is reduced in 15 steps of 3 decibels. [TONE] [TONE] PROFESSOR: OK, still clear, the increment between one and the other? OK, now here are the one dB steps. RECORDING: Broadband noise is reduced in 20 steps of one decibel. [TONE] [TONE] PROFESSOR: OK, so how about that? Would you be able to stake your life on the fact that you could tell one from another? No, I see a lot of heads shaking. Well, if you sit there and do this over and over again, and really train yourself, apparently 1 dB is the just noticeable difference that most observers can hear. So 1 dB is the just noticeable difference, the JND, in SPL. So how do we do that? Well, you have an auditory nerve. And at 60 dB, your auditory nerve fibers are sending this many spikes to the brain. At 61 dB, they're sending maybe a few more spikes--something like that. It's not absolutely clear how you do it. There is more information coming in from the ear to the brain as a function of sound level. We'll talk a lot about that. Now, we also talked about sound frequency. The JND for sound level is about 1 dB. What is it for sound frequency? We're going to have pretty much a whole lecture on that. But your ear is extremely good at telling one frequency from another. So if you start at 1,000 Hertz and change it to 1,002 Hertz--a very, very small change--you can tell the difference. Your ear is a fantastic frequency analyzer. We're going to have a whole lecture on exactly how your ear does that. But there's also a good demonstration for the JND for sound frequency. We'll play that when we talk about sound frequency coding. OK. Any questions about that so far? OK. Let's switch back to the physical characteristics of sound. And these are some very common auditory stimuli. We've heard a noise just now. And if you graph the sound pressure as a function of time, this is what the waveform looks like. How could you do that?
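The three demonstrations are pure arithmetic on the waveform's amplitude: a step of D dB multiplies the pressure by 10^(-D/20). A short sketch (the sampling rate and burst length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
burst = rng.standard_normal(fs // 2)  # half a second of broadband noise

for step_db, n_steps in ((6.0, 10), (3.0, 15), (1.0, 20)):
    factor = 10 ** (-step_db / 20)    # linear amplitude ratio per step
    bursts = [burst * factor**k for k in range(n_steps)]  # play in order
    print(f"{step_db:3.1f} dB steps: amplitude ratio {factor:.3f} per step, "
          f"last burst {-step_db * (n_steps - 1):5.1f} dB re the first")
```

A 6 dB step is almost exactly a halving of pressure amplitude; a 1 dB step is a factor of about 0.89, right at the just noticeable difference.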
If you take a microphone, stick it out in front of a noise source, and run that into an oscilloscope, the microphone converts the sound pressure into a voltage, and the oscilloscope displays the voltage signal as a function of time. You can look at that. Auditory scientists like to look at things as a function of time, of course. They also like to look at things as a function of sound frequency. This is a graph for this same stimulus, a noise stimulus, now as a function of frequency. And we said before, the noise is broadband. It's white noise. It has all frequencies. And here is the graph to show you that. This might be the energy, and this is as a function of frequency. So it has all frequencies. It's trailing off a little at the very highest frequencies. That may be because the microphone couldn't wiggle back and forth at very, very high frequencies. But it's essentially a flat frequency curve. And sometimes this display is called the spectrum. So spectrum or spectra are graphs as a function of frequency. Sometimes people talk about this as the frequency domain and the time domain. If you've taken any electrical engineering courses here at MIT, people will talk about the time and frequency domains. And how can you go from one representation to another? Well, you can take your microphone signal and, instead of going to the oscilloscope, go to the spectrum analyzer, which is a machine that can give you this nice plot. But how about mathematically? How can you do that? The Fourier transform, right. Of course, Fourier was a mathematician who studied various things-- heat transfer, among others. He developed this transformation. If you have the mathematical description of a time-varying signal, you can plug it through his equation, the Fourier transform, and come out with the frequency representation, or the frequency domain. Or, vice versa, if you have the frequency domain, you can inverse Fourier transform and go back to the time domain. We're not going to talk too much about transforms here. But it is interesting, because, as it turns out, your inner ear is a wonderful frequency analyzer. It can tell the difference between 1,000 and 1,002 Hertz. This is a very nice way in the ear of detecting the different frequencies. And so these time and frequency domain representations are very convenient for us to look at. So just keep that in mind. Here's a very common auditory stimulus: the pure tone, or the sinusoid. This is a sinusoidal waveform in the time domain. In the frequency domain, it only has one frequency-- the frequency at which that thing is going back and forth in terms of Hertz. This is a Hertz axis. So sometimes it's called a pure tone. Why is it so pure? Does it have high morals or what? No, it just has one sound frequency. These other stimuli-- we're going to listen to one of them in just a minute. This is a so-called square wave. Imagine trying to add up a whole bunch of pure tones to result in a square wave. It seems impossible, right? Well, it's possible if you use an infinite number of frequencies. And this frequency representation for a square wave goes on basically forever. To get those corners of the square wave sharp like a true square wave, you need lots of individual frequencies-- lots of pure tones, if you will. Tone bursts are some common auditory stimuli. We'll talk about those later in the course. A click is a very common auditory stimulus. It's a sound like this. Or, last night, it was the sound of a fastball hitting a wooden baseball bat.
It's a very sharp, impulsive sound-- a very nice sound if you're behind the team that's batting. So a click, that baseball hitting the bat, doesn't happen for very long. A click can be infinitesimally short. The time that the baseball is in contact with the bat is pretty short. And if it's very short in the time domain, then you have all frequencies. So it's another example of a broadband or broad spectrum sound. If the click is infinitesimally short, the spectrum is completely flat. Those are some common auditory stimuli. Let's go through some more complicated, and maybe more interesting, sounds. Well, all of us like to listen to music, right? So here are some examples of musical sounds. This is a piano keyboard. And here is the spectrum, or frequency representation, of what you get when you strike one key on the piano keyboard. So that's one note. Well, sure, it sounds like one thing, but you have a whole bunch of different frequencies that go along with it. And why is that true? Does anybody know? Why do you get a whole bunch of different frequencies when you strike a key on the piano keyboard? Yeah? AUDIENCE: Isn't it vibrating all along the length so there's different wavelengths? PROFESSOR: What's vibrating-- AUDIENCE: It's not-- PROFESSOR: In the piano? AUDIENCE: It's not-- it's like an infinitely small portion of the string. It's the longer string. It's parts that are shorter still vibrating. PROFESSOR: Yeah, you're getting there. In the piano, the string is fixed at one end, and it's a long string. It's fixed at the other end, too. And your key that you press down makes a hammer go up-- there's a bunch of linkages-- and eventually, the hammer hits that string somewhere. And the string-- it's fixed here. It's not going to move. It's fixed here. It's not going to move. But in between those points, it can move. So it can vibrate like this, or it can go up and down. It can also vibrate like this. You can have what's called a node in the middle. In fact, if you put your finger right here and fix that middle, it wouldn't allow the string to vibrate in this uniform fashion. But it would allow this half to vibrate and that half to vibrate. This node is sort of a constraint for this string. It can also vibrate like this. I wish I had a different color. Over here? Great. OK. You can also have the string vibrate like this. OK. And it can vibrate in many, many different patterns. I've just drawn a few. What's interesting is that this length is twice as long as this length, which is twice as long as this length. And what would you expect the time of those vibrations to be? Well, the big long thing is going to vibrate pretty slowly. That's what's called the fundamental frequency. The thing that's vibrating in two parts is shorter, and it can vibrate faster. In fact, it vibrates twice as fast. So the first harmonic is twice the frequency of the fundamental, and so on and so forth. You can get from the physical characteristics of the vibration of that string a whole bunch of different vibration patterns. And they're usually a harmonic series-- twice, three times, four times, five times, six times the fundamental-- just because of the physical characteristics of vibration of the string, or of the air column in the case of an alto saxophone. When you hear that one note hit by the hammer, all of these vibrations are happening at once. And so that one sound sounds like one thing. Musicians will say it sounds like a note-- A above C. But you have a whole bunch of different harmonics in it. What is pitch?
Pitch is very interesting to people who study the auditory system, and to musicians. Pitch is that attribute of auditory sensation in terms of which sounds can be ordered on a musical scale. Let's say I didn't let you see the keyboard, but I recorded the sounds, and I pressed some keys down there, some in the middle, some way up here, some way at the high end, and I gave you 20 different recordings, and I said, well, make a ranking of them. Put these down low. Those are number one and two. Put these in the middle-- those are number 10-- up to the high end. The highest one is 20. You could do that. The ones that were down low would be called those with low pitch. The pitch of a pure tone, of course, depends on the frequency-- that's the case where you're just given one frequency. If you move that way up high in frequency, it sounds like a really shrill, high-pitched sound. If you move it down low, it sounds like a real low sound. The pitch of a complicated sound-- that is, with many overtones and harmonics-- depends strongly on the fundamental frequency. But sometimes, the fundamental-- for example, in this guitar sound-- is pretty weak. And in some cases, you can take it out altogether. The pitch doesn't change that much, surprisingly. So somehow, the ear knows from this pattern of the spectrum that there should be a fundamental [INAUDIBLE] can stick it back in. So that's what pitch is. Another sensation that musicians often talk about is the timbre of a sound. And the timbre is the quality or the identification of a sound. It relates to the higher harmonics here and the pattern of these harmonics. For the piano, it's starting big and sloping down. For a guitar, it's starting small, sloping up, and then sloping down. The timbre is what allows you to identify that sound that you heard as a piano. We can all hear a piano and say, that's a piano. We can all hear a guitar and say, that's a guitar, or that's an electric guitar, because its pattern of fundamental and harmonics differs. That's how we identify sounds-- by their timbre, or their spectrum, if you will. Those are pretty complicated sounds. What do I have next? I have a demonstration. This one is called Canceled Harmonics. And it's a very nice demonstration to illustrate the idea that I said: when you have all these harmonics going on together, it sounds like one thing, one note, one sound. But if you take some of the harmonics out and put them back in, you're aware of that taking out and putting back in. So what they're going to do is: a complex tone is presented, followed by several cancellations and restorations of a particular harmonic. And let me show you what complex tones they're going to give you. It's simply this square wave. This is what you're going to be listening to. It sounds like [MAKES BUZZING NOISE]. It's not very musical at all. And it has a fundamental and a whole bunch of harmonics-- an infinite number. When that complex goes on at once, you're going to say, that sounds like a nasty sound. It sounds like a buzz, almost. Then they're going to take this one harmonic and pull it out, and then they're going to put it back in. As they do that, you're going to say, well, that sounded different. When it was out and when it was back in, I could hear that thing going in and out. And then they're going to do that for the second, third, and fourth, on up to, I think, about 10 or so. Even though this whole constellation sounds like one sound, when they pulse these things in and out, you can tell.
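For readers following along offline, here is a minimal sketch in Python of how one might synthesize this kind of canceled-harmonics stimulus. The sample rate, fundamental, and number of harmonics are illustrative assumptions, not the parameters of the actual recording.

```python
import numpy as np

# A minimal sketch of the canceled-harmonics idea: build a complex tone
# from a fundamental plus harmonics, then silence and restore one harmonic.
# All parameter values here are assumptions for illustration.
fs = 44100          # sample rate in Hz (assumed)
f0 = 200.0          # fundamental frequency in Hz (assumed)
n_harmonics = 10    # the demo works through harmonics 1 to 10
dur = 2.0           # seconds
t = np.arange(int(fs * dur)) / fs

# Equal-amplitude harmonics; a true square wave would use odd harmonics
# with 1/n amplitudes, but the perceptual point is the same.
harmonics = [np.sin(2 * np.pi * f0 * n * t) for n in range(1, n_harmonics + 1)]

def complex_tone(cancel=None):
    """Sum all harmonics, optionally silencing one (1-indexed)."""
    return sum(h for i, h in enumerate(harmonics, start=1) if i != cancel)

full = complex_tone()              # sounds like one buzzy note
missing_third = complex_tone(3)    # the third harmonic pulled out

# Alternating playback of `full` and `missing_third` makes the third
# harmonic pop out perceptually, even though `full` alone is heard as
# a single sound.
```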
Let's listen to the demonstration, and let's see how many times they're going to do it. This is done for harmonics one through ten. RECORDING: Canceled Harmonics. A complex tone is presented, followed by several cancellations and restorations of a particular harmonic. This is done for harmonics one through 10. PROFESSOR: OK. Could everybody hear that when this complex went on all at once, it sounded like one sound? Then, when individual components were taken out and pulsed back in, you could identify them. Your ear is very good at distinguishing the various frequencies in a complex spectrum. All that message is sent to the brain as individual channels, and the brain somehow perceives that when everything is going on at the same time, that's one sound. It's really not of interest to the brain that the string is vibrating at a whole bunch of different frequencies. It's that there's one string vibrating. But if you took out one of those modes-- in other words, if I put my finger here and the fundamental goes away-- your ear is very good at detecting that. And it sends a message to the brain that the fundamental is no longer there. And the brain says, something different has happened. So the ear is very good at recognizing those different characteristics. The brain is good at putting them back together and saying, they started at one time, so it's one object. Questions about that so far? Now, the last type of complex sound that I want to cover is speech sounds. And I want to save most of this for the end of the semester, when we talk about the parts of the auditory system that are active in distinguishing different speech sounds. But let me just-- because we're talking about sounds and complex sounds-- talk about speech sounds. This is a diagram of your vocal cavity. Way down at the bottom here, you get air from your lungs that goes through your trachea. And in the trachea, there are these vocal cords. The opening in between them is scientifically called the glottis. So air can come out, or, if you use the muscles associated with your vocal cords, you can close that off. As the air comes out from here, it moves those vocal cords back and forth. And they hit each other, and they open up, and they hit each other and open up. And as they do that, they interrupt the airflow and they allow it to pass through. And they interrupt it, and they allow it to pass through. And if you were to put a microphone way down your trachea, right above those vocal cords, you would see this time waveform. The pressure would go up, right as the air pressure is coming from the lungs, when the vocal cords were open. When the vocal cords are shut, there's no pressure there-- or it's just atmospheric pressure. So this opening and closing of the air through the glottis forms this very complicated waveform. If you look at the spectrum of it, it has a whole bunch of different frequencies. The lowest of the frequencies is the frequency at which these things are opening and closing. But there's a whole bunch of harmonics. It's a very complicated spectrum. The upper part of your vocal tract is what's called the filter. And it serves to emphasize some of those harmonics and de-emphasize others. And the filter function is indicated here as having three peaks. Those peaks are called formant peaks. They have to do with the shape and dimensions-- lengths and widths-- of your upper vocal tract.
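That source-filter picture can be sketched in a few lines of code: a glottal source rich in harmonics, passed through a few vocal-tract resonances. The formant frequencies and bandwidths below are my assumptions for illustration (textbook-style values), not measurements from the lecture's figures.

```python
import numpy as np
from scipy.signal import lfilter

fs = 16000                      # sample rate in Hz (assumed)
f0 = 120.0                      # glottal pulse rate in Hz (assumed)
dur = 0.5
n = int(fs * dur)

# Source: an impulse train standing in for the glottal pulses -- its
# spectrum has harmonics at multiples of f0, like the waveform measured
# just above the vocal cords.
source = np.zeros(n)
source[::int(fs / f0)] = 1.0

def formant(x, freq, bw):
    """One vocal-tract resonance, modeled as a 2nd-order IIR filter."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r * r]
    b = [sum(a)]                # normalize DC gain to roughly 1
    return lfilter(b, a, x)

# Assumed formant values: /i/ as in "hit" has a low F1 and high F2;
# /a/ as in "call" has a higher F1 and lower F2.
vowel_i = source
for f, bw in [(300, 60), (2300, 90), (3000, 120)]:
    vowel_i = formant(vowel_i, f, bw)

vowel_a = source
for f, bw in [(700, 60), (1100, 90), (2600, 120)]:
    vowel_a = formant(vowel_a, f, bw)
```

The same source signal goes into both vowels; only the filter (the shape of the upper vocal tract) differs, which is exactly the point of the source-filter model.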
What's kind of neat is that by manipulating, let's say, where your palate is, and where your lips are, and where your tongue is, you can change that filter function, using the muscles that move things around in your upper vocal tract. And after you've filtered this complex spectrum, you come out with a function where some of these spectral peaks are emphasized and some are not. And here's the function that you would get right out in the air in front of your mouth. This is the time waveform here. Here are some examples of manipulation of your upper vocal tract. For instance, here the lower part of the mouth is moved way up high, and it produces an acoustic spectrum where you have a big F1. And F2 and F3 are small, and they are way up high in frequency. Contrast this with when the bottom of your mouth is lowered and moved backward. Here, F1 is even lower. F2 is quite low. And F3 is moderately low. And these are, of course, the way you pronounce different vowels. We can all say these two vowels. This is the vowel "i" as in "hit." Everybody say that-- hit, hit. You can kind of feel that the lower part of your mouth is moved upward. Whereas if you do something like this-- "a" as in "call." Call-- everybody say that. Call. You can feel the lower part of your mouth dropping down, as indicated here, making a big cavity, whereas here the cavity is very small. It changes the acoustic spectrum. Our ears pick it up. And our ears are very good frequency analyzers. And they say the spectrum here sounds like "hit," because you've learned to associate that spectrum with that vowel. This is a different spectrum. Our ears pick it up and they say, that's the vowel "a" as in "call." That's how speech sounds are formed. At least, this works very well for vowel sounds. It doesn't explain things like consonant sounds, which of course are of many different kinds. There are stop consonants, where your lips close down before you utter the consonant "p." So "p"-- everybody close your lips down, and then all of a sudden you open them up. It's a completely different thing. That's not modulating the spectrum. That's modulating the time pattern. These vowels are distinguished by their different spectral patterns, which are picked up by your ears. So I just thought you'd want to know about that. Speech sounds are among the most complicated acoustical sounds because of the number of frequencies involved, the formation, and of course the perception-- telling, for example, one vowel from another. Let's shift gears and move on. And instead of talking about the physical characteristics of sound, let's talk about how we hear sounds. We're only going to get as far as the auditory periphery today, but let's just define it. The auditory periphery is this whole structure indicated here, and it's usually separated into three parts-- the external ear, the middle ear, and then the inner ear. Those are the three very big divisions of the auditory periphery. In the external ear, you have your pinna. Here's your pinna. You have the ear canal, which goes down about three centimeters inside your head, and it ends up at this yellow structure here called the eardrum. Tympanic membrane is the scientific term for the eardrum. That's the end of the external ear. The middle ear is an air-filled cavity. So we're still talking about sound in air. In that middle ear cavity are three small bones. They're called ossicles. I think-- yeah, here we go. And in high school biology, you probably learned them as hammer, anvil, and stirrup.
But the scientific names are malleus, incus, and stapes. And they convey the sound vibrations of the eardrum. When sound hits the eardrum, it causes it to move. And these bones are linked right onto the eardrum, and they're linked one to another. The eardrum then moves the bones, and the bones finally end up, in the case of the stapes, in the inner ear. So that's where the inner ear begins. I have a demonstration of ossicles, and I'll pass them around. These are ossicles from a guinea pig, and they're glued to the bottom of this little vial. And I made a crummy drawing of them. But if you hold this vial so that the piece of tape on it is downward, you get this view here. You have the stapes. And I didn't list the other ones. But in the guinea pig, the incus and malleus are fused, so they can be considered one. This is definitely part of the malleus. But I don't know where the incus ends and the malleus begins. If you had an eardrum, it would be this dashed line here. So let me just pass these around. And you can probably appreciate from my diagram how the stapes got its high school biology name, the stirrup. What's a stirrup? Does anybody know what a stirrup is? Yeah. When you ride horses, what is the-- AUDIENCE: You put your foot in it. PROFESSOR: You put your foot in it. And that's why cowboy boots have a nice big heel, so your foot doesn't go all the way through it. It sticks at your heel. So this is the stirrup. You put your cowboy boot right in there until your heel hits this foot plate. That's pretty obvious how that got its name. It's the foot plate where you put your foot. Your foot goes right on that. And that foot plate is the beginning of the next division, which is the inner ear. And by the way, I should point out before I forget-- what is the smallest bone in the body? Any answers? The stapes is the smallest bone in the body. Why? It's got to move. Even a lousy little sound on this tiny little eardrum has to move it. Remember, the eardrum is basically a tiny, thin little piece of skin. It's like Saran wrap. When your doctor looks down your ear canal, that doctor can look right through the eardrum, it's so thin. It's like plastic wrap. The doctor can look into the middle ear and say, there's so much fluid in there-- you've got a middle ear infection. Or they can say, the middle ear looks good; you've got some other problem. That's what they're looking at. They're looking with their otoscope and a light right through the eardrum into the middle ear. And that whole eardrum and the ossicles have to vibrate when there's a tiny little sound like a pin drop. The pin drops right there, and you can hear it because these things are so light and flexible-- and so small-- that they can vibrate. The stapes foot plate ends up at the cochlea. And the cochlea is the main part of the inner ear. And cochlea, as it says here, gets its name from the Greek word kochlias, which means snail. And certainly, the inner ear looks like a snail shell. And in the inner ear, here's where sound changes from sound in air-- or maybe sound in the bones. The inner ear is filled with fluid. And inside the inner ear are these wonderful receptor cells for hearing and the beginning of the auditory nerve. Here's the auditory nerve that's sending messages centrally into the brain. So the brain would be beginning right here. This whole structure here, all this gray stuff-- and even the shell of the cochlea-- is bone. And it's your temporal bone. The temporal bone is the hardest bone in the body.
You can have a severe blow to the head, and that temporal bone will keep all these structures intact. It's very, very hard bone. Surgeons at our hospital do a lot of drilling with the dental drill to get down to these important structures, because they have to manipulate them. These loops here are part of the inner ear, but they are the part that is sensitive to vestibular sensation. So those loops are called the semicircular canals. They are almost circular. They are in the three planes-- X, Y, and Z. And when you rotate your head, let's say, side to side, one of those can move. And the receptor cells in it can sense that movement and detect that your head has moved. And it's very important, because if you want to keep your eyes fixated on one point but move your head, you can do that by the vestibulo-ocular reflex. The neurons from the vestibular system send messages into the brain stem, and eventually they go through coordinating centers into the motor neurons for the extraocular muscles, which can, of course, move your eyes when you want to do a saccade or pursuit, or they can keep your eyes stabilized-- which is moving them with respect to your head even though your head is moving. But we're not going to talk about those. Let's talk about the function of the middle ear and the external ear. That's what we're going to talk about for the rest of today. I have a model. Let me just pass around this model. I think we passed it around before on the first day of class, but you can look at it again, because we're going into more detail today on this structure. This comes apart. Here's your pinna. Here's the long ear canal. Here's the eardrum. And if I tilt this here, you can see the structures we're talking about in the inner ear-- the cochlea, the semicircular canals-- and this yellow structure here is the auditory nerve. It's going into the brain. The brain is cut off here. This is the eustachian tube, which is a way to vent the air-filled middle ear. So you want to purge that with air. If you go hiking up a tall mountain, the barometric pressure outside gets lower. You want to equalize that in your middle ear. You open that eustachian tube, usually by swallowing. The ossicles are here. And if you take out this inner ear, the stapes is fixed with it. So you can see the stapes. In terms of size, this whole inner ear-- the cochlea-- is about the size of an aspirin tablet in a human. It's about that size. OK. Let's pass that around. OK. What is the function of the middle ear? Why do we have these three bones? Why do we have the eardrum? Why doesn't sound come right in and strike the inner ear itself? Well, it turns out that you have to look at the physical characteristics of sound in air when you want to get that airborne sound into sound in water-- a different medium. So this is fluid, or water. This is air. Sound is coming in here, and you want to get it into the fluid of the inner ear, which is essentially water. If you don't do anything and you have the sound coming in here, most of it bounces back off. In fact, 99.5% of the energy of sound in air at a fluid boundary is reflected back into the air. So if you're in a boat here-- I didn't draw this right-- you're in a boat here, you're fishing, you're talking to your buddy in the back of the boat and you say, pass me another beer, and your buddy says, be quiet, you'll scare the fish-- actually, the fish can't hear you. Because most of the energy in your saying "pass me a beer" bounced right back off into the air.
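To put a number on that, here is a quick sketch of the standard reflection calculation at a plane boundary between two media at normal incidence. The characteristic impedance values for air and water are textbook figures I'm assuming here, not numbers given in the lecture; they land in the same ballpark as the lecture's 99.5%.

```python
import math

# Characteristic acoustic impedances (rayls = Pa*s/m), assumed textbook values.
z_air = 415.0
z_water = 1.48e6

# Fraction of incident sound POWER reflected at a plane boundary,
# normal incidence: R = ((z2 - z1) / (z2 + z1))**2
reflected = ((z_water - z_air) / (z_water + z_air)) ** 2
transmitted = 1.0 - reflected

print(f"reflected:   {reflected:.4%}")                 # ~99.89%
print(f"transmitted: {transmitted:.4%}")               # ~0.11%
print(f"loss: {10 * math.log10(transmitted):.1f} dB")  # roughly -30 dB

# Roughly a 30 dB transmission loss without any matching device -- in
# line with the moderate conductive hearing loss described below for
# patients who lack a middle ear.
```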
So how does the auditory system deal with this? We want to listen very carefully to a pin drop, but most of the energy bounces back off at this boundary between air and fluid. That's the job of the middle ear. Here is how the middle ear moves. This is a nice movie made by Heidi Nakajima at Mass Eye and Ear Infirmary. This orientation is a little bit different, but this is the eardrum. This is the malleus, the incus, and the stapes. Together, they're the middle ear. I said this inner ear is the cochlea here, and it's encased in bone-- fluid encased in bone. So how does this stapes work? Well, there's a little window in the bone. It's called the oval window. And the foot plate of the stapes pushes on that oval window. It's not indicated here, but it's right underneath this oval part. There's another window, called the round window. That's indicated in blue there. And it's just a pressure relief point, because if you pushed on the fluid, it would push back at you. Fluid is relatively incompressible. So this pushing in means the membrane over the round window can push out easily. So it's easy to push in and pull back, because this membrane can give. As you can see, the motion of these bones is coming into the fluid quite nicely and moving some membranes inside the inner ear. The job of the middle ear is to make sure that most of that sound energy gets into the fluids of the inner ear. How does it do that? The primary way is by changing area. The eardrum is this big drum, and the stapes foot plate is this much smaller structure, much lower in area. And there's a formula here: p1 times a1-- the pressure and area at the tympanic membrane-- equals p2 times a2, the same quantities at the stapes foot plate. So when you decrease the area a lot, a2 goes way down, and p2 has to go way up. So that's the main way that the middle ear allows sound in air to go into sound in fluid. The engineers would call this impedance matching. And they would say that when you change media, the impedance of one medium being different from the other means that most of the energy is going to bounce back off. If you have a device here, like the middle ear, to make the impedances better matched, much of this energy is going to go through the boundary from one medium to the other. And one way of matching the impedances is to change the areas. Another way-- and this may be the reason we have three, and not just one, middle ear bones-- is by a lever action. So this is kind of like a lever where the fulcrum is off to one side, not right in the middle. And you can obviously get force amplification from a lever action. A third mechanism might be a buckling of the tympanic membrane. And you'll have to read about that-- I'm not an expert on it at all. I'm not even sure if that idea is in vogue these days. But these actions contribute much less than the change in area from the eardrum to the stapes foot plate. So what happens when a patient comes into the Massachusetts Eye and Ear Infirmary, and, for some reason-- either via an accident or a developmental problem-- they don't have an eardrum, and they don't have these three ossicles? So the sound goes right in from the outside and strikes, let's say, the round window of the cochlea. Are they deaf? Well, no. They have a hearing loss. Some of the energy gets through into the fluid. How big is their hearing loss? Well, this is the so-called audiogram that's generated when you visit a hospital and you complain that your hearing isn't so good. They send you down to the Audiology Department.
They put you in a testing booth. They put earphones on you, and the tester goes outside so they don't make any extraneous noise. And they say, raise your hand when you can hear a sound. So they test your hearing-- this is the so-called audiogram-- and plot the amount of hearing loss in decibels on the y-axis. It's just the way they plot it. And on the x-axis is the frequency. And they typically test 250, 500, 1,000-- which is abbreviated here 1k-- 2k, and 4k. They typically don't test the extremes of human hearing. They test the middle range. This is the range over which most speech sounds are made. And that's the most important for most people. When they say, I can't hear very well, it means they can't understand somebody when they're speaking. And this is the audiogram from someone who lacked a middle ear. And this 40 dB here-- across all the different frequencies, approximately 40 dB-- is the amount of hearing loss they have. So if you go back to the audiogram that we had in the first slide of today's lecture, everything would be lifted up by 40 dB. You have a 40 dB hearing loss. You're not deaf at all, but that's a moderate to severe hearing loss. You certainly would have problems hearing a pin drop. You might have problems hearing a telephone ring if it were on the other side of the room. You might have problems with conversation. There would be several types of treatment for that. The surgeons in the Ear, Nose, and Throat Department at Mass Eye and Ear could reconstruct your middle ear and your eardrum. They could use a skin flap-- a piece of skin taken from somewhere else on your body-- and put it in the place of the eardrum. They could use either wire or Teflon or plastic pieces to connect that eardrum to the oval window of the cochlea. So they can reconstruct the middle ear fairly easily. If the person doesn't want to have surgery, they can have a hearing aid. Essentially, you have a flat frequency loss here. So put a device in the ear canal that boosts every single frequency by 40 dB-- amplify the sound. So a hearing aid works pretty well for these people with this type of hearing loss. This type of hearing loss is called a conductive hearing loss, because the problem is in the conductive mechanism that conducts the sound from outside your body to the inside. So that is the job of the middle ear-- to ensure efficient transmission of sound in air into the fluids of your body. And without it, you have a moderate to severe hearing loss. There's a disease called otosclerosis. "Oto" means ear-- my department at Harvard Med School is Otology and Laryngology. And "sclerosis" means hardening, or rocky or bony growths. Sometimes bony growths can grow around the stapes and fix the foot plate so that it can't vibrate anymore. So what's done for that is you take out the stapes and take off the bony growths. And if you just put the stapes back in, often these bony growths grow again. So actually, you take it out and replace it with an artificial stapes. And the operation is called a stapedectomy-- "stapes" plus "ectomy," taking it out. You replace it with a prosthesis. It's a very successful surgery for otosclerosis, which is a conductive hearing loss. That's the job of the middle ear, and it's relatively easy to treat when there's a problem. Is there a function of the external ear?
Well, a lot of textbooks say the external ear funnels the sound into your ear canal. But there is another function of the external ear, more along the lines of localizing sounds. These are examples of external ears-- our pinnae. Everybody has a slightly different one. Who is this historical figure? Anybody? He was a president of the United States. LBJ-- President Johnson. He was always caricatured by the political cartoonists with these huge ears-- big pinnae. Everybody has different shaped pinnae. It turns out that the external ear can help you localize where sound is coming from. Well, how can it do that? Well, if you have a pinna and you do this interesting experiment-- you take a microphone and put the microphone inside here. So here's the pinna. Here's the ear canal. Put the microphone out here, and start out with a completely flat spectrum-- broadband noise. The noise is absolutely flat, so it has equal energy at all the frequencies. You measure it out there, and then you move your microphone down here in the ear canal, maybe near the eardrum, and measure the spectrum again. So this is plotted in terms of gain with respect to free field. Free field is out here. Free field means basically in the room or in the environment. Now we're going to measure the spectrum down here and plot the gain. So anything above 0 means the level at the eardrum is higher than out in the free field, and anything below 0 means it's lower. Let's look at this solid curve here, which is minus 15 degrees elevation. Elevation of a sound source-- if it's straight ahead, it's zero. If it's minus 15, it's 15 degrees below zero. If it's above zero, it could be 15 degrees. In this case, it's 7.5 and 30. So elevations that are positive are above you. As that sound source moves around from being below you to above you, its spectrum changes-- the spectrum way down here at the eardrum. And in particular, there are some very sharp dips, or nulls, in the spectrum that move around. It's thought that you can use those nulls as a cue to where the sound is. Now, what causes those nulls? Well, because the pinna is very complicated, you can imagine that some sound comes in and strikes the pinna and reflects off it. And maybe it reflects-- excuse my artistic abilities here-- maybe it reflects into the ear canal. Contrast that with other sound that comes straight in. Eventually, these two sounds are going to meet up at a point. And let's say the sound taking the longer path went through half of its cycle in the extra time. So now this sound, which is starting to go negative in pressure, meets up with the sound that came straight in and is starting to go positive in pressure. Positive plus negative can sum to zero. The geometry has to be just right, and the frequency has to be just right. But it can be just right at a particular frequency, and that's what causes the nulls. It's just a physical characteristic of two sound paths meeting up. It is thought, then, that you can learn the positions of those nulls and associate them with positions of sound in space. And that's what was done in the research report for today. These are some data from four different subjects. They tested the subjects' ability to localize sounds coming from in front of them. Left and right would be azimuth. That's plotted on the x-axis. Up and down would be elevation. That's plotted on the y-axis. And they moved the sounds around to different places, and they said to the person, tell me where the sound is coming from.
The answers that the subjects gave are in these solid, thick lines. The real positions were on the thinner lines. The individual data points are the small points, and the average data points from the subject are the big points here. So these subjects, when given a checkerboard of locations, could pretty faithfully tell the investigators where a sound was coming from, both in elevation and in azimuth. These are data from four different subjects. What was done in the experiment was to distort the pinna. How are we going to do it? Well, we could move our ear a little bit. What they did was put a little clay mold in parts of the pinna to change its shape, and they did that on both sides. As soon as they did that, these are now the answers from the subjects. Terrible in terms of elevation sensitivity-- determining where a sound is coming from in terms of different elevations. Still pretty good in azimuth. There are other cues for sound azimuth that involve using two ears, which we're going to talk about extensively later this semester. But the elevational localization was completely disrupted when the pinna shape was disrupted. Have these subjects go out for a few weeks, come back, and get tested again: they re-learned, with the pinna molds in, how to localize sounds. This is an example of re-learning, or plasticity. Now, the pinna cues had different nulls, because the pinnae were shaped differently. The subjects could re-learn these new cues and associate them with the same old changes in elevation that we had before. So that's why it's called re-learning sound localization with new-- or distorted-- ears. So this is an example, then, of subjects learning to associate new cues with the old sound localization positions. And that's the take-home message from this research report. OK, questions? On Wednesday, we'll talk about the inner ear.
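As a closing note on those pinna nulls: the two-path cancellation described above can be put into numbers. If a reflection off the pinna travels an extra distance d relative to the direct path, frequencies whose half-period matches that delay cancel. A minimal sketch, with the 2 cm path difference being an assumed, pinna-scale value:

```python
# Sketch of the pinna-null geometry: a reflected path is longer by d,
# so it arrives later; a delay of half a period gives cancellation.
c = 343.0      # speed of sound in air, m/s, at room temperature
d = 0.02       # assumed extra path length: 2 cm, a pinna-scale distance

delay = d / c                 # extra travel time, in seconds
f_null = 1.0 / (2.0 * delay)  # delay equals half a period => a null

print(f"first null near {f_null / 1000:.1f} kHz")  # ~8.6 kHz
# Nulls like this sit in the high frequencies, and their position shifts
# as the source elevation (and hence the geometry) changes -- which is
# the cue listeners evidently learn to use for elevation.
```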
MIT_904_Sensory_Systems_Fall_2013
17_Cochlear_nucleus_Tonotopy_unit_types_and_cell_types.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So last time, we talked about the responses of auditory nerve fibers, and we talked about tonotopic organization and frequency tuning and response areas of auditory nerve fibers. So any questions about that? OK. So today's lecture is going to go on from there. The auditory nerve, of course, leads up into the brain. So we're going to talk about the auditory central nervous system, starting with the first nucleus in the CNS for the auditory pathway, which is the cochlear nucleus. And this nucleus gets its name because the auditory nerve is sometimes called the cochlear nerve. Right? It's coming from the cochlea. And so this is the cochlear nucleus. So can anybody give me a definition of a nucleus in terms of a central pathway? Not a cell nucleus. This is a nucleus in the brain. Just a collection of neurons. Right? So a nucleus is a collection of neurons in the central pathway. A ganglion is a collection of neurons out in the periphery. So you have the spiral ganglion in the cochlea, which is where the cell bodies of the auditory nerve fibers are-- it's called a ganglion there because it's in the periphery. And that nerve goes into the central nervous system, and it ends on neurons-- a collection of neurons in the part of the brain called the cochlear nucleus. We're going to talk about the various types of cells that are in the cochlear nucleus. And instead of just having one or two types of auditory nerve fibers, as we had in the periphery, there's going to be a whole bunch of types of cells. And they have really fanciful names, like octopus cells and pyramidal cells. So we'll talk about those. When we go into the cochlear nucleus and record with microelectrodes, we get single units. And there's a way of classifying those single units which was developed here at MIT in the 1960s, and it was one of the first applications of computers to the study of neuroscience: looking at the action potentials as a function of time, right after you turn the sound stimulus on. And by looking at those patterns, you can classify the units into various types. And now we know a good correspondence between the physiology and the anatomy, so we'll go over how we establish that correspondence. And because there are these various types-- we really have one type of auditory nerve fiber coming in from the periphery, and now we have a bunch of types-- we're going to think of the system as having multiple parallel pathways for the sense of hearing in the central nervous system. And perhaps one pathway helps you with one aspect of hearing, and another pathway helps you with another aspect. And maybe we'll end up at the end of today with a little bit of discussion-- advanced discussion-- of implants. So we're going to talk about cochlear implants next week. There is a kind of implant called the auditory brainstem implant that's put in the cochlear nucleus. But here, things are so complicated. You have a variety of cell and unit types, each doing a different function. You can imagine how hard it would be to wire up a prosthesis to stimulate each one of those correctly.
So just let's keep in mind the challenge of putting a prosthesis into the cochlear nucleus, which is required for people who lack an auditory nerve. They can't get a prosthesis in the cochlea, because the message wouldn't be sent to the brain. They need to get a prosthesis into the cochlear nucleus. But it's very complicated here. And maybe the processing is as complicated as what you have in the retina. Right? In the retina, you have the retinal ganglion cells being very complicated. Some have center-surround fields. Right? There's not just turning on and turning off. There's turning on, there's inhibiting the on, and so on and so forth. So I think of the cochlear nucleus a little bit as the equivalent of the retina in the visual system. OK, so let's look at the auditory central nervous system in terms of a block diagram on the top, and in terms of what it looks like when you cut sections through the brain. So here's the block diagram. And we've been talking about the cochlea here, and the spiral ganglion is where the cell bodies of the auditory nerve fibers are. This would be the arrow of the auditory nerve coming from the spiral ganglion into the CN, the cochlear nucleus. And of course, you have one of those on the left side and another on the right side. So this diagram is showing both sides of the central pathway. From the cochlear nucleus, the neurons of the cochlear nucleus send axons to a variety of places. One of them is in the region of the brain called the pons. And in the pons, there is a complex called the superior olivary complex. And it's abbreviated SOC here. So what's a complex? Now we've had a ganglion, a nucleus, and a complex. Well, as we'll see in a couple of weeks in the course, when we talk about the SOC, it has a whole bunch of nuclei very closely spaced within it-- maybe a dozen or so different nuclei, each of which is a different collection of cells. So the whole thing is called the superior olivary complex. And superior olive? What's that part of the name? So has anybody heard about the olive? Or the inferior olive? What's the olive? We should talk about it. Nothing? Anybody? Any ideas? So it's involved in the motor system. And if you cut a section of the brain-- it's not illustrated here, but if you look at a section of the brain in transverse sections, which are the kind of sections that we have here-- you find a structure that's got kind of wrinkled edges and sort of a central pit, and it looks like an olive. At least it did to the first neuroanatomists, maybe the ones who had their eyes on the clock and were waiting for the 5 o'clock dinner call. They said, hey, that looks like an olive. And they probably didn't have very good microscopes, so they were looking at things at very low power. And what happened later is somebody discovered another complex over here that was nearby but didn't look anything like an olive-- but it was near the olive. And it was a little bit rostral in the brain, and things that are rostral are called superior. So they called this the superior olive, and this has become known as the inferior olive. And after they got even higher-power magnification, they could see this was really a complex of nuclei, so they called it the superior olivary complex. OK, so this is motor. And this is auditory. And they just happen to be located near one another in the brainstem, in the area of the brainstem called the pons, down here. So these are sections of the human brain. So why do we cut sections? Anybody? AUDIENCE: To look at the anatomy?
PROFESSOR: To look at the anatomy. Right. So the compound microscope that we usually use to look at the anatomy can't look at a big hunk of tissue. You have to cut sections. And the thinner the section you cut, the better resolution you get, because you can bring in a very high-powered objective and look at very fine cellular detail. So cutting sections is what neuroanatomists love to do. An example thickness of a section might be 10 micrometers. 100 micrometers is a very thick section. So you can imagine how many sections you'd have to chunk through to go through the whole human brainstem. That's why it's nice to work with small animals like mice. They don't have as many sections. Right? But these are sections of the human brainstem. And here, this word says spiral ganglion, so this is the auditory nerve coming in. And this is the cochlear nucleus. And I've heard it described that by the time the sense of hearing came along, the brain was formed. Primitive animals had the brain and all the other functions, like motor. So there wasn't any room in the middle, and it was stuck on the outside. The sense of hearing-- the cochlear nucleus, at least-- was put on the outside of the brain because there wasn't any room for it on the inside. Cochlear nucleus structures tend to be superficial in the brainstem. This is the cochlear nucleus. And these are the axons crossing, because some of the cochlear nucleus axons cross from the left side to the right side. And the superior olivary complex is deep within the brainstem. But some of the cochlear nucleus axons don't go to the superior olive. They bypass it and go all the way to the IC, the inferior colliculus. That's at the level of the midbrain. So this is the inferior colliculus. So you've studied a collicular structure before. What was it? Right? In the first half of this course. The superior colliculus, right? So why is it superior? Is it better? No. It's just a little rostral in the brain. So if you cut sections from caudal-- so this is the spinal cord down here, and your microtome's chunking through the medulla and the pons, and you get to the midbrain. And then you cut through the inferior colliculi, which are right here, and then, a little bit more rostral, you cut through the superior colliculi, which are right here. And they tend to be a little flatter. These inferior colliculi are more dome-shaped. So what does colliculus mean? Anybody know French? Who knows French? Come on, somebody knows. OK, who knows Latin? OK, well, that's the expected result. Colliculus means a little hill. And this lives up to its name. These are little hills right on the top of the brainstem. The inferior colliculi are here, left and right, and the superior colliculi are here. So what does the superior colliculus-- what do the superior colliculi do? What happens when you stimulate in the superior colliculus? Your eyeballs move, right? So the superior colliculus is intimately involved in saccadic eye movements. Right? You guys must have talked a lot about that in the first part of the course. And the inferior colliculus-- what does it do? Well, I've heard people who have spent their whole research career studying the inferior colliculus, and they say, well, I don't know what it does! That's not an uncommon statement in the sense of hearing. So the inferior colliculus certainly is a meeting place for lots of axons coming up from the cochlear nuclei and from the superior olivary complex. It's a midbrain meeting center.
It's certainly involved with many parts of the sense of hearing, but we don't know exactly what role it plays. Certainly nothing as simple as "it moves the ears," OK? Whereas the superior colliculus moves the eyes. All right, coming up the pathway even further, the next level is the thalamus, where you have, for the auditory system, the medial geniculate. And what was the analog in the visual pathway? The lateral geniculate, right? OK, so in the brain, medial is toward the middle and lateral is toward the side. You don't have the lateral geniculate in this section, but it would be a little bit lateral to the medial geniculate. And what does geniculate mean? AUDIENCE: Knee. PROFESSOR: Knee. Right. A genu is a knee. And the lateral geniculate-- if you use your imagination, and if you were an early neuroanatomist with a low-power lens on your microscope-- looked like a bent knee. It's sort of bent. The medial geniculate is not that at all. It doesn't look at all like a knee. But it's just medial to the lateral geniculate, so that's how it got its name. OK, very good answer. What is to genuflect? AUDIENCE: To bend. AUDIENCE: Kneel. PROFESSOR: That's right. To kneel down, bend your knees. Ah, king! Ruler! All right. And then finally, we have the auditory cortex, the AC, in those boxes at the top. And the auditory cortical fields are in the temporal cortex, on the sides of the head. So I'm pointing to my left temporal cortex, if I could go through my skull. It's behind the temporal bone. OK, where were the visual areas? Back here. This is called the occipital cortex, right? A completely different part of the cortex. And in the primate-- of course, humans are primates-- you have a big temporal lobe of the brain. And that's this lobe here. And the auditory cortical fields, for the most part, are on the dorsal surface of that. And you have to go down inside the temporal gyrus-- this big gyrus-- to look and see them, because they're not on the surface of the brain in the primate. They're down inside the temporal gyrus. Well, how does this look compared to the visual pathway? Let's say this is the retina here. OK? Where does the retina project? That's a great exam question! Where does the retina project? That's not a one-answer question, right? In large part, to the LGN. That's right. Where else does it project? AUDIENCE: [INAUDIBLE] PROFESSOR: I don't know about that. It certainly projects to the superior colliculus, right? Aren't the X and Y cells projecting to the superior colliculus? A few of the retinal ganglion cells project to the superior colliculus. You'd better ask Peter for sure. Obviously, I don't know much about the visual pathway. But so how does that look here? We have the cochlea, and even the cochlear nucleus. Does it project to the geniculate? No, not at all. Between the periphery and the thalamic level for the auditory pathway, you have the cochlear nucleus. You have the superior olivary complex. And you have the inferior colliculus. And these structures are on the main highway up to the cortex. It's not as if the main pathway bypasses the inferior colliculus. This little side arrow is just a few axons. The main pathway is cochlear nucleus to superior olivary complex. And, to a certain extent, cochlear nucleus to IC. Superior olivary complex to IC. And then to the medial geniculate. So it's a big difference. What about the somatosensory system?
You have dorsal root ganglia in the somatosensory system, and they project actually all the way to the somatosensory regions of the thalamus. So that's more like the visual system. There's something different going on here in the auditory system. There's a whole bunch of brainstem and midbrain nuclei in the auditory pathway that you don't have represented in these other systems. So we're going to learn, in about two weeks, about the superior olivary complex and the inferior colliculus. These extra brainstem nuclei are very involved in the process of sound localization. And that's because, for the auditory system to figure out where a sound is coming from, in the best case we use the two ears. And we use cues like the difference in time of arrival of a sound at the two ears, and the difference in sound level at the two ears, to figure out where a sound is coming from-- certainly in the azimuthal plane. In the visual system, and in the somatosensory system, the localization of where the stimulus is coming from is mapped right at the periphery. So you know that a spot of light is coming from over to your right because it hits your nasal retina in your right eye, and your temporal retina in your left eye. So that position information is already available in the periphery. In the sense of hearing, what does the periphery map? It maps sound frequency, right? So obviously, sound frequency is a very important characteristic. It allows you to identify the sound. But it doesn't help you to tell where the sound is coming from. And if you're a mouse running away from a cat, and it's nighttime, and you can't see the cat, you need to know where the cat sound is coming from. Is it coming from your right, or is it coming from your left? And it takes a lot of brainstem processing, comparing the inputs from the two sides, to reconstruct where the sound was coming from-- especially if you're doing it by interaural time and interaural level differences. So all of this brainstem processing-- or much of it-- is devoted to figuring out where the sound is coming from. It's figuring out the location of the sound. So we'll spend a good week or more on that, in a couple of weeks in our course. Because that's one of the things you can really sink your teeth into in the auditory system: one thing it clearly does is figure out the location of the sound source. OK, now we're going to spend the rest of the lecture today on the cochlear nucleus, the very first nucleus in the auditory pathway. And we're first going to talk about basic things, like the anatomy of the cochlear nucleus and its tonotopy. So this is a drawing of the cochlear nucleus in the so-called sagittal plane. We should all know what the sagittal plane is. Can anybody explain it to me? Yeah? Right, like this. So I like to think of it as the zodiac character Sagittarius. Right? Who was Sagittarius? AUDIENCE: [INAUDIBLE]. PROFESSOR: He was the what? AUDIENCE: He was an archer. PROFESSOR: Yes, he was an archer. Happened to have a horse behind him-- or for his behind. So he shot the arrow, and if he shot it effectively, he'd hit me right in the center of my head, and the two halves of my brain would fall apart. And if you picked one half up and looked at it, you would see a sagittal section. OK? Of my brain. And the cochlear nucleus, as we saw before, is hanging off the side of the brain.
But if that archer shot-- instead of right at the midline-- a little off-center, and the cochlear nucleus half fell away, you'd get this sagittal section of the cochlear nucleus. And so the cochlea is down here, and the auditory nerve is coming up into the cochlear nucleus. So what is up in this section? Well, it's going dorsal-- going higher. So here's your compass. The cochlear nucleus is ventral. The auditory nerve climbs up dorsally into the cochlear nucleus. And that compass gives you two clues about the two big divisions of the cochlear nucleus. The biggest part of the cochlear nucleus is called the ventral cochlear nucleus, because it's down ventral. The VCN is the ventral cochlear nucleus. And the other part is the DCN. That's the dorsal cochlear nucleus. This is the part that comes up dorsally. And they look different. If you look at the VCN, it looks like a sort of homogeneous mass of cells, without a huge amount of organization. But as you can see by these dashed lines here, the DCN has layers in it. And people have pushed the idea that the DCN is like a mini-cerebellum. You know, the cerebellum is at the back of your brain. The cerebellum is a part of your brain that deals with motor functions, and it has these beautiful layers. Certain kinds of cells are in each layer. And there is an analogy between the DCN and the cerebellum. It certainly works as far as the layers go. Some of the cell types are the same. You have lots of little granule cells in the cerebellum and the dorsal cochlear nucleus. You have pyramidal cells in layer two. The ventral cochlear nucleus, by contrast, is very homogeneous. Now, this slide shows you, in sagittal view, not only the cochlear nucleus and its subdivisions, but also some labeled auditory nerve fibers coming in, in the auditory nerve. So we talked about labeling before. We talked about how you can put a microelectrode filled with neural tracer in the auditory nerve and get the tuning curve and the characteristic frequency, or CF, way down at the tip of the tuning curve. And then you can inject a neural tracer. And last time, we talked about labeling and tracing the auditory nerve fibers in the cochlea to the inner hair cells that they contacted. But you can just as easily go the other direction. Neural tracers fill the entire neuron. So these fibers were labeled, and now we're looking at their central trajectory into the central part of the auditory nerve and into the cochlear nucleus. And they're pretty hard to read, but if you could read them, these arrows and the numbers following them indicate the characteristic frequency of each of four different labeled auditory nerve fibers. This one, way down here, has a CF of 0.17 kilohertz. This one has a CF of 2.7. This one, a CF of 10.3. And the top one, I think, has a CF of 36 kilohertz. And you can easily see a progression from low CFs to mid CFs to high CFs. So 36 kilohertz is way beyond the upper limit of human hearing, but this experimental animal was a cat-- cats hear up to 50 kilohertz, at least an octave above the upper limit of our hearing. And they have auditory nerve fibers that respond quite well at 36 kilohertz.
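As a quick check on how those four labeled CFs are spaced, here is a small sketch using only the numbers from the figure. On a logarithmic (octave) axis they fall in an orderly low-to-high progression, which is what a tonotopic, roughly log-frequency map predicts.

```python
import math

# CFs of the four labeled auditory nerve fibers from the figure (kHz).
cfs = [0.17, 2.7, 10.3, 36.0]

# Octaves above the lowest CF: tonotopic maps are roughly logarithmic
# in frequency, so log2 spacing is the natural axis to use.
for cf in cfs:
    octaves = math.log2(cf / cfs[0])
    print(f"{cf:5.2f} kHz -> {octaves:4.1f} octaves above 0.17 kHz")

# Prints roughly 0.0, 4.0, 5.9, and 7.7 octaves -- low, mid, and high
# CFs laid out in order along the dorsoventral dimension of the nucleus.
```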
So you can see that there is an organization, and this organization could be called a tonotopic organization-- for the projection of the auditory nerve onto the cochlear nucleus. And you can bet, then, that if you were to explore around here-- in the DCN that we've been talking about-- if you were to record here with an electrode that sampled not from the auditory nerve fibers but from the cell bodies of the cochlear nucleus. And you can design electrodes to record from either fibers and no neurons, or neurons and no fibers. If you were to use the latter, and sample from the neurons here, you would expect them to respond to low frequencies. Whereas if you were to record from way up there, you'd expect the neurons to respond to high frequencies. And in fact, that's what happens. And these kinds of electrode mappings of the cochlear nucleus have been done for many years. These are data from the '50s from the University of Wisconsin. So what's done here is an electrode is put through the cochlear nucleus and every 100 micrometers or so, it's stopped to record from, in this case, cochlear nucleus cells. And their CF's are indicated by the many numbers here above the recording track. And the CF, it looks like, started at 0.5 and quickly went up to 4 and went higher and higher and higher. And then went through a region of the cochlear nucleus where they didn't sample any neurons. And then they went through high CF's and went back down to low CF's. OK? Because they went into a part of the ventral cochlear nucleus here, where the tonotopic mapping was a little bit different. And you can see how an electrode might record a different progression of frequencies, if you had the angle right, going from the dorsal cochlear nucleus into the ventral cochlear nucleus. Because, as you can see, the branches of the auditory nerve fibers are quite complicated here. They all come up and they do a bifurcation. One part goes into the VCN over here. Another part goes through the VCN over here. And then finally up into the DCN. OK, so the tonotopy of the cochlear nucleus is quite complicated. There isn't just one tonotopic axis. The DCN has one, and in the PVCN you can have at least one. But this tonotopic organization means the cochlear nucleus neurons are also tonotopically organized. So we've transferred tonotopy from the periphery through the auditory nerve and entered the brain in the cochlear nucleus. And if you go back to the block diagram we just had, the cochlear nucleus projects onto the superior olivary complex nuclei in an organized fashion. So you have such tonotopic mappings in the SOC. The SOC and cochlear nucleus project to the inferior colliculus, and it's tonotopically organized. The inferior colliculus projects to the thalamus and the medial geniculate, and they have beautiful tonotopy. The thalamus projects to the auditory cortex. And you have at least half a dozen cortical fields that have beautiful tonotopic organization. So this tonotopic organization is a fundamental organizing principle for the auditory central nervous system. And it means, basically, that you're going to process certain frequencies of sound over here and other frequencies of sound over here. Keep the processing of different frequencies separate. And so why would we want to do that? Well, it's a matter of debate, actually. There's speculation on that.
But as we'll talk about during sound localization, you have different frequency ranges for the cues that are involved in timing between the two ears and level differences between the two ears. Some work best at low frequencies, and others work best at high frequencies. So maybe that is the idea, that you want to process those cues for the location of sound in different places in the brain. So tonotopic organization is typical of almost all nuclei in the auditory pathway. So you could ask: are there other mappings in the other directions? And it's not clear, in general, whether there are. Along the cochlear nucleus in this dimension there are different types of cells that we'll talk about. But we should remember that there's a third dimension coming in and out of the screen here. What dimension would that be? That would be medial-lateral. So lateral might be close to you, and medial would be far away. So that actually brings up the idea that we're looking at two-dimensional pictures of three-dimensional structures. And so I just brought in this reconstruction, or this model, of the cochlear nucleus to remind us of that. This is the cochlear nucleus on the left side. And I believe this is from a cat. This is many years old. And you can see, in black, some nice little cells here that are lined up in the layers of the dorsal cochlear nucleus. So right here are the layers of the dorsal cochlear nucleus. And much of this is up dorsally here, so I should explain to you what plane of section we're looking at here. This is the left cochlear nucleus, so it would be on my left side. And now, these are horizontal sections. OK? So for the compass for horizontal sections, dorsal is up. Ventral is down. Rostral is this way, forward in the model. And caudal is back. Lateral is to the left. And medial, where the rest of the brain is, on the right side, would be over to the right here. The cochlea is ventral to the cochlear nucleus. And this is the auditory nerve coming up. And the colors are meant to represent the different CF's of the auditory nerve. So down here ventrally, we have the orange, low CF's. In the middle we have green, which is the middle CF's right here. And very dorsally, which would be way up in the top of the screen in the DCN, you have yellow, so that's the high CF's. And so that's the model. So that's the DCN right here. And down here would be the VCN. And so this is a model where there are 20 or 25 sections or so, but many have been skipped. In a typical cochlear nucleus, if you were to cut it at 50 micrometers you might have a couple hundred sections. So I'll just pass this around. But there are two other colors besides the three that represented the auditory nerve fibers. I think-- what did I talk about? Orange, green, and yellow. Right? So there are two other colors. There's red, and there's blue on there. OK, so what's that? Well, the auditory nerve is not the only thing that's providing input to the cochlear nucleus. Now you say, oh, this guy made a mistake, because on the block diagram it showed the only arrow going into the cochlear nucleus is from the spiral ganglion, the auditory nerve. All these other arrows are coming out of the cochlear nucleus. So what are the other colors that are not the auditory nerve? Well, this is a diagram of the so-called "ascending" auditory system, from low to high. Right? The cochlea is the lowest. The cochlear nucleus is the next. And then we went up-- chung, chung, chung-- all the way to the auditory cortex.
Well, it turns out, in all sensory systems and all pathways there is some information processing that goes the opposite way. Sometimes that's called the descending auditory system. So there's, for example, information that starts in the cortex and goes down to the medial geniculate. And those other colors in the cochlear nucleus represent inputs coming into the cochlear nucleus that are coming from higher centers and going down to this lowest level of the auditory pathway. So those are so-called "descending" inputs. And those are not very well explored. Obviously, we know what ascending input is doing. It's telling the brain there's a sound, and the brain is processing that. But why would the brain want to send information down to the lower levels? Well, there are lots of theories. We'll talk about them toward the end of our course. But it has to do with the brain controlling inputs coming into it. And there are even inputs from the brain going out as far as the cochlea, out to the auditory periphery. So those are what those other colors are in the model. Now we get to the part of the lecture where the cochlear nucleus becomes very, very complicated. The anatomical cell types. And this is where Dorothy might say to Toto, "I don't think we're in Kansas anymore." OK? Everything was really simple in Kansas. In the auditory nerve-- sorry if somebody's from Kansas, here-- but the auditory nerve is very simple and the cochlear nucleus becomes infinitely more complex. And so here is an example of how complex it is. This is the auditory nerve coming in and making its bifurcation to go into the VCN and into the DCN. And those drawings we had before were just sort of stick figures. And they didn't show all these nice little endings onto this whole variety of cochlear nucleus neurons here. So how many cochlear nucleus cell types are there? There are about 10 of them. So that brings me to today's reading. I guess I'm developing this tradition here of bringing a different book every lecture and reading from it. So this won't take long. This book is called The Primary Acoustic Nuclei. So that's another word for the cochlear nuclei. And it's by Rafael Lorente de Nó, who was a Spanish neuro-anatomist at first, and did a lot of work on the cochlear nucleus. And then during the Spanish Civil War, he said, well, I'd better get out of here. So he came to the United States and did a lot of work on axonal conduction and physiological measures. And it says in the introduction that he put away his anatomical drawings after publishing maybe one or two papers. And he had a whole big file of them in a manuscript, ready to go, and he stuck it away in a closet for about 30 years. He dug it out, and finally published this. So a lot of this stuff was done in the 1920s and '30s and later published. In fact, the copyright date for this book is 1981. And the inscription in the front of the book says December 20-something, 1980. So this is important, because this is a book owned by Nelson Yuan-Sheng Kiang, who started this course, 9.04, with Dr. Schiller back in the 1980s and '90s. Nelson was very good friends with the author, here, and when the author published the book he promised to give Nelson the very first copy that came off the printing press. So this is the first copy in the original run. It's not only a first edition, it's the first of the first edition. And the inscription here-- Nelson put December 1980-- is actually a month before the copyright date. So this really, really is a special book.
So these pictures here, as you can see-- you can judge this book by its cover. These pictures are the beautifully complex incoming auditory nerve fibers-- which are drawn in orange here-- bifurcating and showing the many, many types of endings on the cochlear nucleus cells. And that's summarized by the reading here. This is the very first sentence of the book. "Each neuron in the ganglion of Corti"-- that's the spiral ganglion-- "gives rise to two nerve fibers, which, according to their destination, are either called peripheral"-- those are the ones that are contacting the hair cells we talked about last time-- "or central"-- which we were talking about right here. "The peripheral fibers, at times called dendrites, penetrate the organ of Corti. They form sensory or afferent endings in contact with the hair cells. The central fibers, after a relatively long trajectory in the cochlear nerve, enter the primary cochlear nuclei, where they form elaborate sets of endings." OK, that's the reading for today, and we'll pass it around. You can enjoy the pretty pictures. So these are the elaborate sets of endings on the elaborate variety of cochlear nucleus cells. And one of these types of elaborate endings is called a large end bulb. That's the very top one there. And the other name for an end bulb-- so, it's a large end bulb-- is the end bulb of Held. Because Held was the early German neuro-anatomist who first noticed this huge ending. So this ending is so big that you can hardly see the cochlear nucleus cell that it envelops. Here's another drawing of the large end bulb, and there's a scale bar. The cochlear nucleus cell is 20 micrometers in diameter. OK? So you could say that's the largest ending in the cochlear nucleus. Or you could say that's the largest ending in the auditory pathway. Or you could say that's the largest ending in the brain. And you'd be right with all those claims. So this is a huge ending from the auditory nerve fiber onto the cochlear nucleus cell, way at the tip. And we can just go back to the previous drawing. These are stick figures where most of the endings are dropped off for the purposes of clarity. But way out at the tip of the auditory nerve fiber these end bulbs of Held are so big that you can see them even with this really low-magnification picture of the auditory nerve fibers. OK. And they go onto a particular type of cochlear nucleus cell, called a spherical cell. And in parentheses, that type of cell is called a bushy cell. Why are there two names? Well, this is kind of a long story, but there are two main techniques to look at-- or there were, in the 1950s and '60s-- to look at cells in the brain. And one of them was to use a Nissl stain. You can cut your section of the brain, cut your section of cochlear nucleus, put your Nissl stain on it, and you can see very beautiful staining of all the neurons in the cochlear nucleus. Some of them look round, and they were called spherical cells because they're so nice and round. And this work was done by a pioneering neuroscientist. Everybody before-- I didn't draw this very well-- everybody before thought all the cochlear nucleus cells just looked like one type. They look about the same. This neuroscientist, whose name is Kirsten Osen, was the first person to look at sections of the cochlear nucleus and say, oh, those actually don't look exactly the same! Some are spherical. Some look like an octopus.
Here's a cell Kirsten called the octopus cell, because it was sort of eccentric, and these little appendages coming off of it-- which were the dendrites-- all came off from one side, like the tentacles of the octopus come off from one side of the octopus. And this big nucleus, right in the center, to her looked like the eye of the octopus. Right? Some of the cochlear nucleus cells were multipolar. Here's one called the giant multipolar cell of the DCN. And a pole just means that something came off the cell, and those are the dendrites. OK? The Nissl stain doesn't stain the dendrites very well. If it did, the whole cochlear nucleus would be black because there are dendrites everywhere. But it stains the cell body very nicely. So what does the Nissl stain stain? It stains DNA, RNA, and some protein. Which are, of course, found mostly in the cell body of the cell. So it gives you a good look at the cell body of the cell. Not a good look at the dendrites, because they don't have as much DNA and RNA. And hardly any look at the axon. The axons are invisible. But for classifying cells, it's really great, because you have every single cell stained, and you can look at them. Here's a cell called the globular cell. And that is a little bit like a spherical cell, but it's more oblong. Here are some other multipolar cells. Here's a cell called the pyramidal cell. So I should have said that the Nissl designation of these cells is given non-parenthetically. OK? So you have, by Nissl, spherical cells, multipolar cells-- here's another kind of multipolar cell-- globular cells, octopus cells, and pyramidal cells. So those are the cells I would like you to know. Those are the important types of cells. Let me write that down. The viewgraph is a little bit more-- it makes more distinctions than I care to make for this class. So you have spherical. These are cells in the cochlear nucleus. Spherical cells. You have multipolar cells. You have globular. You have octopus. And you have pyramidal. Right? I mean, since scientists are classifiers, you have sub-types of each of these. OK, you could just keep sub-typing as much as you want to. But these are the main types that we want to pay attention to for our course. And these are Nissl stained. Now along comes another person who wants to make his mark on science, and says, I'm going to use a different type of stain and look at the cochlear nucleus in a different way. And his name was-- his name is-- Kent Morest. And he worked at Harvard for quite a while, and then went to the University of Connecticut. And so the stain that he used was called the Golgi stain. And many of the pictures in the book I passed around were Golgi stains of fibers. He wanted to look at Golgi stains of cochlear nucleus cells and compare them to the Nissl stain. So what does the Golgi stain do? Does anybody know? Or who was Golgi? Yes. AUDIENCE: Golgi stains the whole cell, but it doesn't stain all cells. PROFESSOR: That's correct. In fact, in the ultimate case, you can put your brain in the Golgi solution and come back a couple months later and you could have just one neuron in the entire brain stained. But the good thing about that is that the neuron cell body, its dendrites, and a good portion of its axon pick up the stain and they stain jet black. So the Golgi stain takes a spherical cell and transforms it into something that's really beautiful. The cell body is completely filled. It's black. I should be using black. This is a black stain.
And you can see, coming from the spherical cell, one major dendrite. It's a big one. And it, very close to the cell body, ramifies into thousands of little bitty dendrites. Like this. Where this scale might be 20 micrometers. And so the dendrite is sticking very close to the cell body. To Kent Morest, this looked like-- he must've been planting stuff in his yard, because this looked like a bush. You go to the nursery, and you pick up a bush. Here's the top of the bush. This is the trunk. And this is the root ball the nursery people bundle up for you, or this is in a pot. So this looked like a bush. So he called it a bushy cell. And so on the viewgraph there are the designations that are given to these cells in the Golgi stain, which are all in parentheses there. It looks completely different, because you're staining different parts of the cell. So let's make our little chart. Spherical cells in the Golgi stain are called bushy. We'll put these in parentheses. Bushy cells. Multipolar cells are usually called stellate because now, instead of just the beginning of the dendrite coming off, you have long expanses of dendrites that go for 500 micrometers from the cell. They can go across the cochlear nucleus. And they look like beautiful, twinkling stars. At least they did to Kent Morest. Globular cells look exactly like spherical cells in the Golgi stain, and they were given the same name. Bushy cells. Octopus cells are so clearly what they are that they're given the same name. Pyramidal cells are sometimes called fusiform. Fusiform means sort of spindle-shaped. They just look different in the Golgi stain. Now, this is sort of old stuff. And so now, when somebody talks about a certain type of cell in the cochlear nucleus, they use a hybrid terminology. They'll say, "I was recording from spherical bushy cells." So they use both names. Or, "I think I was recording from globular bushy cells." Or, "My study is on multipolar stellate cells." OK? Or, "These cells in the DCN are pyramidal fusiform cells." So they use a hybrid terminology and they concatenate, if you will, the two. OK, so these are the cell types. Now let's go into the cochlear nucleus and record, with recording electrodes, a sample not from fibers but from cell bodies. And see what we get. I think that's what's next here. Yes. OK. Where are these data from? From Pfeiffer. So Pfeiffer worked with Nelson Kiang, who owns that book. And they worked at MIT during the late 1950s and 1960s. And these were, again, some of the first applications of computers to neuroscience. They said, well, what do we want to use from these cells to classify them? Maybe they tried a whole bunch of different schemes, and most of them didn't work out very well, except this one. They said, for this one, what we're going to look at for these units in the cochlear nucleus is not the tuning curves, not their responses to this or that. We're going to look, very shortly after we turn the tone burst on-- the tone burst is the stimulus-- we're going to look at the timing of spikes. And so these are on the order of millisecond intervals. In fact, some of the bins have a width of less than a millisecond. OK? They want to look very precisely at the time pattern of spikes after you turn a sound on. So, tone burst. What is a tone burst? I've used that term. I think we had, earlier in the course, a tone pip. So a tone burst is just a sound that does this. It's a burst of tone. A burst of a pure tone.
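For concreteness, a tone burst is nothing more than a gated sinusoid, usually with brief ramps so the onset and offset don't splatter energy across frequency. Here is a minimal sketch with illustrative parameter values; the exact durations and ramps used in these experiments aren't specified in the lecture.

```python
import numpy as np

def tone_burst(cf_hz, dur_ms=25.0, ramp_ms=2.5, fs=100_000):
    """A pure tone at cf_hz, gated on for dur_ms, with raised-cosine
    on/off ramps. Parameter values are illustrative."""
    t = np.arange(int(fs * dur_ms / 1000)) / fs
    tone = np.sin(2 * np.pi * cf_hz * t)
    n_ramp = int(fs * ramp_ms / 1000)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    tone[:n_ramp] *= ramp          # onset ramp
    tone[-n_ramp:] *= ramp[::-1]   # offset ramp
    return tone

burst = tone_burst(cf_hz=4000)     # a 25 ms burst at a hypothetical 4 kHz CF
```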
And it looks like the duration here is 25 milliseconds. OK. And all these neurons are tuned, so you want to set your sound frequency at the CF-- the tone burst at the CF. You want to get spikes coming from the neurons. So you want to be at CF. You want to be above threshold. You turn your tone burst on, and then you have your computer look at the spikes and say when they occur in time. OK, so this is pretty important. So I have a pointer here. So how did this experiment work? Well, here's data from one, two, three, four different cochlear nucleus neurons. And from each neuron there are two types of displays. This display on the top is called a dot raster display. And each little dot signifies the time of occurrence of a spike, an impulse, from the recorded neuron. And along the X-axis is the time axis. So starting at zero is when the tone burst went on. The tone burst goes off at, it looks like, 40 milliseconds here. And then there's an off time when there's just silence for the last 40 milliseconds. And the important thing about this is that this column here shows you that there wasn't just one tone burst presented, but there were many. And these experiments can use 600 or even 1,000 tone bursts. Each time the stimulus is turned on, you give it a stimulus number. So the very first tone burst is stimulus number one. And that's the first dot raster here. And the neuron fired a dozen or so action potentials at those times. And then it stopped firing, except for a little bit of spontaneous activity. Then tone burst number two was presented. And the neuron's dot raster for number two is the next line down. Then you go through your silent period, and go back to the beginning. Tone burst number three was presented. And you get this pattern of spikes. It's a little bit different. Each one looks like it's a little bit different. Down here is tone burst number 15. OK? And you get that dot raster of spikes. And then you go through your hundreds of tone bursts. And this might be tone burst number 999 down at the bottom, and you get that raster. And you keep track of all those dots. And you have the computer put them into bins here. And this histogram shows you the bins, where the number of action potentials is along the Y-axis now. And the X-axis is still displaying time. And this horizontal line shows you when the tone burst was on. So this type of histogram, which is compiled from many stimulus presentations and many dot rasters, is sometimes called the PST histogram. And that stands for post- or peristimulus time. Obviously, it's a time histogram. Obviously, there's a stimulus. Why is this a post? Well, the terminology started up when you, instead of having a long tone burst, have a click. So that when the click goes on, it goes off real quickly and everything is post-stimulus. So it's really, in this case, a peri-- around the stimulus-- time histogram. Does everybody know what I'm talking about here? About what this display is? OK. So you can think of this PST, then, as reflecting the average firing rate as a function of time for the neuron. And quickly-- you're not turning the tone on for many seconds. It's within a few milliseconds. Within the first 40 milliseconds of when the tone burst is turned on. And it's an average response. And here's data from one unit. Here's data from another unit. Now, even in the dot raster, you don't need to worry about averaging here. This type of firing is fundamentally different from the type of firing we just talked about.
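The bookkeeping that turns a dot raster into a PST histogram is simple to write down. Here is a minimal sketch with hypothetical spike times; the bin width and analysis window are illustrative choices, not the ones from these experiments.

```python
import numpy as np

def pst_histogram(spike_times_per_trial, bin_ms=0.5, window_ms=80.0):
    """Pool spike times (in ms, relative to tone-burst onset) across all
    trials and bin them -- exactly the dot-raster-to-PSTH reduction."""
    edges = np.arange(0.0, window_ms + bin_ms, bin_ms)
    all_spikes = np.concatenate(spike_times_per_trial)
    counts, _ = np.histogram(all_spikes, bins=edges)
    return counts, edges

# Hypothetical data: three trials' worth of spike times in milliseconds.
trials = [np.array([5.1, 7.3, 12.0, 30.2]),
          np.array([5.4, 9.8, 22.5]),
          np.array([4.9, 6.0, 15.1, 33.3, 70.0])]
counts, edges = pst_histogram(trials)
```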
It looked like on every single tone burst there was kind of a different firing pattern. Sort of random. As if you took your pepper shaker and just spread out some salt and pepper here. There wasn't a real organization to it. Sure, there's higher firing during the tone burst, after a little latency. But over here, there's a real nice pattern to it. And it looks like in this unit, which was called the Chopper unit, even from the very first stimulus number, it went "pop, pop, pop, pop, pop, pop, pop" if you slowed it way down. Same for the second. Same for the 15th. Same for the 999th. This Chopper unit has a very organized and precise temporal pattern of firing. When you do the averaging and get the PST, you can see these very nice so-called chopping peaks in the PST histogram for this Chopper unit, as compared to this other unit. Here is a unit where you can also see, from the dot raster display, that this unit fired a spike and took a substantial pause-- which gives it a name, the Pauser unit-- before it started firing again. And here is the PST histogram from a so-called Pauser unit, where the pause is a substantial pause. It's 10 milliseconds or so. And finally, here's a unit called an Onset unit, which fires only one spike at the onset of each and every sound stimulus. Each and every tone burst. And obviously it gets its name, Onset unit. Now, let's go back to this first unit. Why? I glossed over the name. Why is it called the Primarylike unit? Well, if you go and record from primary auditory nerve fibers-- and where does that terminology come from? Well, in the auditory system you have the hair cell, you have the nerve fiber-- so the nerve fiber is the very first, or primary, neuron. Then you go into the cochlear nucleus and you have the secondary neuron. So really, this is a secondary neuron. But the pattern, the PST pattern, that you get in some of these cochlear nucleus neurons looks just like primary auditory nerve fibers. Except they have a little bit longer latency. So this one is called a Primarylike unit. It's not a primary. It's a secondary neuron, but it's like one: a Primarylike pattern. OK. So in the cochlear nucleus, you have these different types of firing patterns. And let's make a list of them. Maybe I'll make the list over here. So these are the unit types. Primarylike. Chopper. Pauser. And Onset. Those are the four basic types. Now, again, scientists are classifiers, so-- if you read the literature-- you will find some units called Primarylike with notch. So those are Primarylike units, but right after the first peak they have a little notch. Don't worry about that. It's basically a Primarylike unit. Now, we have two interesting classification schemes here. One is how the unit responds to sound. One is how the neuron looks in the microscope. Can we make a correspondence between the two? Now, one possible outcome could be, "There is no correspondence." Right? It could be that spherical cells can produce any of those kinds of responses to sound. Well, obviously, I wouldn't be devoting a whole lecture to this if that were the outcome. And in fact, the cochlear nucleus is a place where we can really correlate the anatomy of the cells with their physiological unit types. And how is that done? Well, there are several different ways. And this graph shows you one way. That is by going to different parts of the cochlear nucleus. Where the cell types are not equally distributed, you find that the unit types are not equally distributed.
And this works very well for certain areas of the cochlear nucleus. For example, this area right here called OC. That's the area called the octopus cell area. What's found there? That's basically where you have all the octopus cells. They're found in a particular part of the ventral cochlear nucleus. OK, let's go record there with our electrodes. And what kind of PST pattern do you get there? Well, these are numbered types. Where number one is Primarylike. Number two is Chopper. And these are sub-types of Onset-- three, four, and five. And number six is Pauser. In the octopus cell area you get Onset responses. OK, so let's start drawing some lines. Octopus cells, then, produce Onset patterns. Now, that's true. You can really hang your hat on it. How about going the other way? Do you always get Onset responses from just octopus cells? Or how about some of these other types? Well, going the other way, you do have a few multipolar cells that can produce the Onset type. And how is that shown? Well, octopus cells are just there, but you can get some Onset responses in various places in the cochlear nucleus. And I'll tell you how this dashed arrow was drawn in a minute. So Onset units are not only octopus cells. Octopus cells are only producing the Onset pattern. That's the correct way to say it. Right here. OK. Now, elsewhere in the cochlear nucleus, the cells tend to be mixed up. And you can't make very good regional distinctions-- in spite of what's said here-- with one other exception. In the dorsal cochlear nucleus there is a layer called the fusiform cell layer. And so it has the pyramidal fusiform cells. And there you get Pausers. And that's pretty much the only place-- well, not completely the only place-- you get some deeper in the DCN. But that's very good evidence that the pyramidal cells produce a Pauser type of response. You get no Pauser responses anywhere in the ventral cochlear nucleus. Now, why am I messing around with these messy data when, to a certain extent, most of the cochlear nucleus cells are all mixed up together? Well, you can build up these data from thousands of recordings, and you can look at hundreds of cochlear nuclei. You can assure yourself this is the only place that octopus cells are found. You can record from thousands of units and not find too many Onset responses elsewhere in the cochlear nucleus. So you have the strength of numbers behind you if you use this type of approach. We talked last time about single unit labeling. Why not apply that to the cochlear nucleus neurons? OK. The way you do that is you fill your electrode with neural tracer. You go in. You record the CF from the tuning curve. Then you turn a tone on and you get the unit type. Is it a Primarylike or is it a Chopper? That's a very elegant way to answer this question, and it's been done, but it's also extremely difficult. For some reason, recording from the cochlear nucleus neurons with the type of pipette electrode that's necessary to have your neural tracer in it is very difficult. You get the recording and then you go inside the cell to inject it, and it tears a big hole in the membrane. And so the neuron stops responding. Have you lost the neuron? Are you still in it? You don't know. OK, so that's very difficult to do, but it has been done on a certain select number of neurons. And here's some data from single unit labeling.
So this is from a study by Bill Rhode et al. at the University of Wisconsin, in which he recorded from cochlear nucleus Primarylike neurons. That's clearly a Primarylike PST. He injected and labeled the neuron and recovered it in the ventral cochlear nucleus. And there's what the labeled neuron looks like. There's a big cell with one primary dendrite, with lots of ramifications close by the cell. It's not as clear as the bushy cell I drew on the board, but clearly this is a bushy cell, because all the dendrites are close by. Now, I'm using the nomenclature from the Golgi stain because, clearly, that cell is stained in its entirety. The cell body is black. The dendrites, every little dendritic ramification, are filled black. That big thread-like thing going right underneath the title "labeled neuron" is the axon. You can trace the axon wherever you want to. This is Golgi-like labeling that you get from filling these cells with neural tracer. Clearly, that's a bushy cell. And it had a Primarylike pattern. Now, if you look in all the literature, maybe you get about eight or 10 filled bushy cells. And people have spent many years trying to do this. You see papers published with just half a dozen labeled neurons. It's very difficult to do. But clearly, that Primarylike pattern came from that bushy cell. This Chopper pattern-- this is a subtype of Chopper-- came from the stellate cell. Clearly, rather than one primary dendrite, it had half a dozen. And the dendrites went forever. Here's a neuron that sent its dendrites all across the cochlear nucleus. Clearly a different type of cell-- a stellate cell. OK, so let's draw an arrow there for those two neurons. Bushy cells-- spherical bushy cells; we actually don't know that this one is spherical-- can produce Primarylike patterns. Stellates can produce Chopper patterns. And these other correlations-- octopus cells correlated with Onset, we already established. And Pauser types correlated with pyramidal fusiform cells, we already established. So here we have a nice chart of correlations between the anatomy and the physiology. And probably, this can be done much better in the cochlear nucleus than in any other center in the brain. Auditory system or non-auditory. OK? Now, you should be a little bit skeptical of me, as a scientist, when I give you these really nice classification schemes. So what is really the difference between a globular bushy cell and a spherical bushy cell? You should be thinking, "Can that guy really make that distinction?" OK, sure, some things are spheres, and some things are oblong. But what about something in between? Are there intermediates in this classification scheme? So let me give you just an arbitrary example-- we classify people by the color of their hair, right? There are blondes. There are black-haired people. There are brown-haired people. There are redheads. But then you have a lot of intermediates, right? You have dirty blondes. You have people who have darkish brown hair, not quite black, but not quite brown. What are you going to do with all those intermediates? What are you going to do with people who are losing their hair? Classification schemes have problems if you have a lot of intermediates. And then these things break down. Instead of being a nice, firm category they become really squishy, with a lot of intermediates between the two. It turns out, here, that that's not a huge problem, although it is sometimes a problem.
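One way to keep such categories from getting squishy is to write the decision rules down explicitly as metrics. The sketch below is a toy classifier over a PSTH, loosely in the spirit of the criteria mentioned in this lecture; the thresholds and the chopping test are my own illustrative choices, not the published decision rules.

```python
import numpy as np

def classify_unit(counts, edges, onset_ms=5.0, pause_ms=2.0):
    """Toy PSTH classifier. All thresholds are illustrative, not the
    published criteria; it only sketches how explicit metrics make
    the categories crisp."""
    bin_ms = edges[1] - edges[0]
    total = counts.sum()
    first = int(np.argmax(counts > 0))           # first bin with a spike
    onset_bins = int(onset_ms / bin_ms)
    # Onset: essentially all spikes within the first few ms of response.
    if counts[first + onset_bins:].sum() < 0.05 * total:
        return "Onset"
    # Pauser: a silent gap longer than pause_ms right after the first spike.
    after = counts[first + 1:]
    silent_bins = int(np.argmax(after > 0))      # leading empty bins
    if silent_bins * bin_ms > pause_ms:
        return "Pauser"
    # Chopper: regularly spaced early peaks -- crude test via a strongly
    # periodic autocorrelation of the mean-subtracted PSTH.
    x = counts - counts.mean()
    ac = np.correlate(x, x, mode="full")[len(x):]
    if ac.size and ac.max() > 0.5 * x.dot(x):
        return "Chopper"
    return "Primarylike"
```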
The aficionados who do this for a living have metrics where you can measure the sphericity of a cell or the oblateness of a cell. You can measure things in the physiological response. Like, if the pause is longer than two milliseconds, it's a Pauser unit. But if the pause is less than two milliseconds, then it's a notch, or something else. OK. So it turns out that there aren't very many intermediates or things that are hard to classify in these two kinds of classification schemes, which is very important. Another important thing is: what do these classifications then predict? If you're recording from a Primarylike unit, and you say it's one of these kinds of bushy cells, what does it predict? Well, one thing it predicts is where the cells project to. Now, that's given to you a little bit on this diagram here, by this category that says "acoustic stria" for the efferent axon. And so now we're using terminology that is centric to the cochlear nucleus. Efferent means going out of the cochlear nucleus. And there are three major pathways going out of the cochlear nucleus. The ventral, intermediate, and dorsal. And different types of cells project in one or another, but not all three, of these output pathways. So knowing the type of cell, knowing its response pattern, I can predict where the axon is going. So the predictability of the classification scheme, if it predicts things very well, means that it's a good classification scheme. But when somebody gives you a classification scheme, you should always be thinking, "Now, is it a good classification scheme? Or is it just something cooked up?" Meaning, are there lots of intermediates? So, speaking of projections of the axons. This is what I want to end up with today. Especially in terms of the parallel pathways that we talked about at the beginning of today's lecture. So here's another, even different, type of diagram of the cochlear nucleus. This is the cochlear nucleus on the left side. This is a little bit of the cochlear nucleus on the right side. And so we have a whole bunch of the auditory pathway, the superior olivary complex, and the inferior colliculus more in the center of the brain. And now you can get a sense of why the superior olive is really a complex. Here are parts of it. Lateral superior olive. Medial superior olive. Medial nucleus of the trapezoid body. Lateral nucleus of the trapezoid body. And that's just part of the superior olivary complex. It's really a whole bunch of different nuclei all glommed together. The point of this slide, though, is the projections of the cochlear nucleus neurons, in terms of where they send axons to. So here are the spherical bushy cells, right here. That's the diagram here. And this little ending is supposed to be the giant end bulb of Held that they receive. The spherical bushy cell in the left cochlear nucleus projects up here into the left superior olivary complex. And one of its most important places to end up is in the medial superior olive. Part of the superior olivary complex. And it continues on and goes across the midline and also projects into the right medial superior olive. It's not diagrammed here, but analogous spherical bushy cells on the right side come in and do the same thing. The medial superior olive is the place-- that we'll study in about a week-- that receives input from the two sides from the spherical bushy cells. And it compares the timing of the inputs from the two sides.
It says, oh, if I heard the sound on the left side a little bit earlier, then that sound source was located to the left side of my body. But if the MSO gets input from the right side first, it's almost certainly the case that the sound source was located on the right side of the body. And it would activate the right ear and the right cochlear nucleus and its axons first. And by the time the sound leaked around to the left ear-- it took a longer time to travel, here-- and started to activate the left pathway, it was a lagging signal and came into the MSO a little bit later. So the MSO, then, receiving input from these two spherical bushy cells from the two cochlear nuclei, does an interesting comparison and helps us localize the sound using timing differences. There's probably a pathway in here that begins with the globular bushy cells, where you use the differences in level at the two ears to localize sounds. That's two out of maybe 10 different types of cells where we really know what they do. So the other eight or so-- the pyramidal cells, the small cells, the octopus cells, the stellate cells-- we have no idea what they do. OK? So for all the rest of these other parallel pathways, the functions are unknown. We think they do something in the sense of hearing, but we're not sure what they do. So there's a lot of interesting information left to be gleaned from these parallel pathways coming out of the cochlear nucleus. OK? So that's what I want to end up with. If there are any questions, I'll take them now. I also want to remind you guys that next week, Monday is a holiday, so there won't be any class. So the next time we meet is a week from today, on Wednesday. And in that lecture, we're going to talk about deafness and hearing loss, and toward the end we'll have our demonstrator come in to demonstrate her cochlear implants. So that's an important class, not to be missed. OK, have a good mini-vacation.
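The timing comparison the MSO performs, described above, comes down to simple geometry: under a simple two-receiver model, a source at azimuth theta produces an interaural delay of roughly ear_separation * sin(theta) / c, which can be inverted. A minimal sketch follows; the 8 mm ear separation is a hypothetical, mouse-sized value, not a measurement from the lecture.

```python
import numpy as np

def azimuth_from_itd(itd_s, ear_sep_m=0.008, c=343.0):
    """Invert the simple two-receiver model: interaural delay
    ~ ear_sep * sin(theta) / c. Returns azimuth in degrees
    (0 = straight ahead; negative = source to the left)."""
    arg = np.clip(c * itd_s / ear_sep_m, -1.0, 1.0)
    return np.degrees(np.arcsin(arg))

# A sound leading at the left ear by 20 microseconds:
print(azimuth_from_itd(-20e-6))  # about -59 degrees, i.e. well to the left
```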
MIT_904_Sensory_Systems_Fall_2013
23_Auditory_cortex_2_Language_bats_and_echolocation.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. I guess we'll get started. Last time, we were talking about auditory cortex, and the tonotopic fields in auditory cortex, the non-tonotopic fields. Any questions about that first lecture on auditory cortex? We're going to continue on cortex today, and talk about some areas of cortex in a specialized mammal, the bat. It's where a lot of excellent work has been done on auditory cortex that really shows-- very nicely-- how neurons respond, at least, to the selective stimuli that are emitted and listened to by the bat. So we'll be talking about bat echolocation. We'll start out by defining the different groups of bats, and talk about who discovered bat echolocation. The discovery was made just a few miles from here. We'll talk about what a bat's signals look like on a spectrogram. Then we'll talk about the specializations for processing the emitted pulse and the return echo in several bat cortical fields. In the second half of today's lecture, we'll be revisiting speech sounds. We had a little bit of that at the very beginning of my lectures. We'll talk about speech spectrograms. And then we'll talk about cortical processing of speech and language, especially in the human, where a lot is known about processing of language. OK. So we'll start out with bat echolocation. These are some pretty pictures of bats. Oh, I also have some announcements now that everybody's here. So, for Wednesday's class, meet at the Massachusetts Eye and Ear Infirmary, if you haven't already gotten an email. So, for a lab tour, we meet at the Massachusetts Eye and Ear Infirmary. And there are directions to get there from here. You just get on the Red Line, going inbound, toward Boston. Get off one stop later, at the Charles stop. And then you're going to the Massachusetts Eye and Ear Infirmary. So a lot of people get that confused, of course, with the big behemoth right next door, Mass General. So Mass. Eye and Ear's a different building. It's a different hospital. But it's clearly marked. The directions are on the website, so just follow them. So the lab tour will be within the regular class period, so 2:35 to 4:00. We're not going to go beyond that because people have commitments after that. And, depending on how many people show up, it's likely we'll divide into groups and cycle through several demonstrations that I have prepared for you there. OK. So questions about that? So we meet at Mass. Eye and Ear on Wednesday. At that time, the assignments are due. So we talked a little bit about the assignment a few weeks ago, when we talked about the Jeffress model. And you can send me the assignments by email or give me a typed, printed version. And the idea is that I'll look them over and then hand them back to you at the review session. And we'll talk about the assignments, and what I consider the right answers. So we did a little switch for the review sessions. So we should put this on the website, if it isn't already there. But next week, Monday, we have two review sessions scheduled. The one on Monday will now be the one on vision. And so Doctor Schiller is going to come and review the vision part of the course on Monday.
And then I'll be back a week from Wednesday. And we'll do the audition review. And we'll return your assignments then, OK? So that's what's happening next week. Any questions on that? OK. So here are the nice pictures of bats. They're beautiful animals. They have specializations, of course, for hearing. They have large pinnae, right? Much, much larger than other animals', especially for their size. They have very small eyes. And their visual systems are not well developed. They of course have wings. So these are flying animals. And many of them have noseleaves. So here's a nose cartilage that's very well developed, because these animals emit sound. Their echolocation pulse is emitted. And some of the sound comes out of the mouth. But some of it comes out of the nose. And this noseleaf tends to focus the sound forward, because that's where the bat is interested in detecting some kind of a target, like the insect prey that most of these bats eat. So I should backtrack and say that we're really talking about three types of bats. We're talking about echolocating bats, of which there are two varieties that I'll tell you about in a minute. And we're also not going to talk about non-echolocating bats. And sometimes, these non-echolocating bats are called fruit-eating bats. They are also flying mammals. But they have big eyes. They have relatively small pinnae. And they navigate around like birds and other mammals, using their visual system. So they don't echolocate. So you have non-echolocating bats that we're not going to talk about. We have echolocating bats that we will talk about. It's starting to get confusing with all these groups. In fact, bats are a very successful group of mammals. Supposedly, there are more species of bats than all other mammals combined. It's an amazingly successful group of mammals. And mostly because echolocation has opened up a whole new vista for bats. Not only can they fly around, but they can do so at night-- in total darkness-- and find prey, their targets, their insects. So instead of being fruit eating, these echolocating bats are carnivorous. Most of them eat insects that they catch on the wing. So flying insects. But there are gleaning bats that eat insects on the forest floor. There are vampire bats that cut little holes in the top of mammals and lap up the blood that comes out. There are fish-eating bats that eat fish. There is a whole variety of types of bats. But most of them eat insects that they catch on the wing. And we'll have a demonstration of that. So these are insect-eating bats here. This one I want to point out the name of. This one is called-- in the middle left, right here-- Megaderma lyra. So mega means big. Derma means skin, right? It's so named because of its big skin here. And lyra refers to lyrical, or musical, or something that sings, OK? So these bats are singing. Let's look at the types of singing that they do. And this display shows the two types of signals that are emitted by the two big groups of echolocating bats. The first I want to start with is the simplest. It's called an FM bat. FM-- as in your FM radio-- stands for frequency modulated. OK? And this graph of the FM bat's echolocating signal is called a spectrogram. It plots frequency on the y-axis as a function of time on the x-axis. And this echolocating pulse is the thing that the bat is emitting. It's producing and emitting.
And the reason it's frequency modulated is that it starts at a high frequency and modulates down to a lower frequency. And here's another one. And here's another one. Now, if there's a target out there, some distance from the bat, this pulse that goes out will be reflected off the target. And then it will come back to the bat in the form of an echo sometime later. OK? So the bat-- in this case the FM bat-- gets information: number one, if there's an echo, there's a target out there. And number two, the time between the pulse and the echo is an indication of the distance the target is from the bat, right? Because the sound has to go from the bat to the target, and then from the target back to the bat. And we know that the sound velocity in air is about 340 meters per second. So knowing that velocity, and knowing the time between the pulse and the echo, we-- and the bat-- can get information about how far away the target is from the bat. Now, a couple of things I want to comment on here on this spectrogram. Number one, the frequency axis. You might not be able to see it, but it starts at zero and then quickly jumps to 30 kilohertz. And then there's 60, 90, and 120. So those are very high sound frequencies. And so, in the early days and to a certain extent still, those frequencies would be called ultrasonic. And there's no real good reason for that, except that we're humans. And everything is important with respect to humans. And our upper limit of frequency, as you well know-- if you're a very young human-- ends at about 20 kilohertz. And most of us, who are in our middle or older ages, aren't hearing anything above 10 kilohertz. So all these frequencies emitted by the bat, and the echoes coming back, are beyond the range of human hearing. And in that sense, they're ultrasonic. So you can't go out and say, oh, I heard a bat-- at least in terms of the echolocating signals that the bats are emitting. There are some sounds that bats emit that are communication sounds. And those are in the human frequency range to a certain extent. But almost all the echolocating signals are well above the upper limit of our hearing range. Now, another thing that you can see from this spectrogram is that there's a big delay between this pulse and this echo, OK? The target is pretty far away. Generally, bats head toward targets. And this has been shown many times in behavioral experiments, especially if they're hungry. OK? And as the bat gets closer and closer to the target, obviously the time between the pulse and the returning echo gets shorter. And, as you can probably see from this spectrogram, there are a lot more pulses emitted per unit time when the bat gets close to the target, because the bat is interested in getting a lot of information when it gets close. Another reason that the bat doesn't emit very many pulses when it's far away from the target is that, if you emitted a pulse before the echo returned, you could get confused between the outgoing pulse and the returning echo. So typically, bats tend to increase their pulse rate a lot as they get closer to their target. OK? And I'm going to show you a demonstration of that. I'm sure it will convince you of that. So this is the type of bat we have here in New England. Examples of this are the little brown bat or the big brown bat. And if anybody has seen bats-- have you guys seen bats flying around at night? Yeah? Where do you see them? In your bedroom? Underneath-- [LAUGHTER] That's where I've seen some recently, which is a little scary.
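That pulse-to-echo arithmetic is worth making concrete. A minimal sketch using the 340 m/s figure quoted in the lecture; the factor of two is there because the sound travels out to the target and back.

```python
C_AIR = 340.0  # speed of sound in air, m/s, as quoted in the lecture

def target_range_m(pulse_echo_delay_ms):
    """Distance to the target from the pulse-to-echo delay.
    The sound travels out AND back, hence the divide-by-two."""
    return C_AIR * (pulse_echo_delay_ms / 1000.0) / 2.0

print(target_range_m(10.0))  # a 10 ms round trip puts the target 1.7 m away
```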
Well, this viewgraph says it hunts in open air. Where has anybody seen them? I sometimes see them if I'm out canoeing on a lake at night, or in the evening. Any other places? On golf courses, for example. And those are all sensible, if you will, places for the bat to hunt, because they're very open situations. And there isn't a whole bunch of clutter, if you will, that will return echoes to the bat. If there's a moth or a mosquito out there-- it might be the only thing out there above the surface of the lake, and that's very interesting to the bat because it has one target. It doesn't have a million leaves of the forest, if you will, to get confused by. It gets one echo, it knows there's one target out there. It goes and swoops out there. And it possibly eats the target, if it's an insect. OK. Let me give you some other examples of some information on FM bats. First, I want to point your attention to who discovered echolocating bats. And I have his book here. This is Donald Griffin's book. It's called Listening in the Dark. And I'll write his name on the board. And I'd like to do just a short reading from his book. Then I'll pass it around. So this is the section where he talks about the discovery of bats' ultrasonic sounds. So he writes, "During my undergraduate years at Harvard College, when I was actively engaged in banding bats to study their migration--" So bats here, about this time of year, start flying south like birds do. He says, "I was familiar only with the generally held view that bats felt with their wings the proximity of obstacles." That was how people thought they navigated around, by touch. "Several friends suggested that I experiment with the ability of my bats to avoid obstacles." Blah, blah, blah. "I decided that I should contact a professor in the Harvard physics department. That was Professor G.W. Pierce, inventor of the Pierce circuit for the stabilization of radio frequency oscillators. Pierce had developed almost the only apparatus then in existence that could detect and generate a wide range of sounds lying above the audio range. That is, from 20,000 to almost 100,000 cycles per second. With some trepidation, I approached Professor Pierce in the winter of 1938 with the suggestion that we use his apparatus to listen to my bats. I found him eager to try the experiment, particularly since he was already engaged in extensive studies of the high frequency sounds of insects. When I first brought a cage full of bats, Myotis lucifugus--" OK, that's the little brown bat-- "to Pierce's lab and held the cage in front of the parabolic horn, we were surprised and delighted to hear a medley of raucous noises from the loudspeaker." So Griffin, as an undergraduate, discovered that bats emitted ultrasonic stimuli, OK? And he went on to pursue a lifetime of research on bats. While he was still an undergraduate, he designed some experiments to see if the bats could use this echolocation to avoid objects. And so he took a room and turned out all the lights, so the bats could only use other senses. And he noticed that they would fly around. And he didn't have a very extensive equipment budget, so for objects, he went to the store that sold piano wire. And he strung piano wire from the ceiling to the floor of the room. And he let the bats fly around. And he knew that if something touched the piano wire, he could hear a little sound of the wire vibrating.
And he had to go down in diameter, to piano wire that was the thickness of a human hair, before the bats finally started touching the wire. They could detect objects even sub-millimeter in size. So their sense of echolocation was really well developed, and very good. It could detect very tiny targets. So that was one of his first, and foremost, experiments. Now, I have some demonstrations. And some of them are from Griffin's original work. And so, in these demonstrations, you have a movie of the bat flying. And the target, this time, is a small food item that's thrown up. I think it's a mealworm. So the investigator throws up the mealworm. And the bat catches it. And on the audio part of the track, you'll hear some popping. And I think that's a stroboscope that's illuminating the image. You will also hear little chirps. And they're pretty high frequency. But that is the bat echolocating pulse going out. It's detected by a microphone, and transformed from the high frequencies down into lower frequencies in your audio range, so you can hear it. OK? And notice that, number one, when the bat gets close to the target, the chirps come more and more frequently. And number two, when the bat eats the target, the chirps stop, right? Because unlike you or me, they can't talk and eat at the same time. So those chirps are the bat echolocating pulse. Here's the bat. Here's the target coming up. In that case, as sometimes happens, the bat missed the target. So it's falling away. Here's the target coming up. In this case, the bat caught it in the tip of its wing. And it brings the wing in toward its mouth. And it eats the target. And it starts pulsing again after it's swallowed the target. And there, I think it caught it right in its mouth, without having to use its wings. OK. So these are some other films that I won't go through. But they were some experiments that Griffin did, testing the ability of bats-- in certain species-- to actually catch fish. And he was mystified about this because-- as we've talked about-- sound in air, when it comes to a fluid boundary, mostly reflects off. So it seemed unimaginable that the bat echolocating pulse could go under the water and be reflected off the fish. So what he figured out later is that, when there was a smooth surface on the water, the bats did not seem interested. But when the fish came up and rippled the surface of the water, that was what the bats were actually detecting-- the little ripples on the surface of the water-- just as you see them visually. There's some videos of bats catching fish here. Bats hanging out. Bats flying in rooms. Sorry about that. Now this last part of the demo is from more modern experiments in which the bat approaches a target. In this case, the target is somewhere here. And it's tethered. The target is fixed. This is a spectrogram of the bat echolocating pulse. And this is the bat flying, in slow motion, to catch the target. And you can see the tremendous increase in pulse repetition rate. As it eats the target, it stops vocalizing. And now it starts again. And this'll be repeated at least once. So here's another run. Here's the bat spectrogram down here. And here's the bat coming in. Here is the target, right there. I guess I'd better stop that. OK. And so that second demo is from Doctor Cynthia Moss, who used to be at Harvard. And now she's at the University of Maryland. And she does extensive work on bat echolocation.
And so I think it clearly shows the increase in pulse repetition rate as the bat gets close to the target. Any questions on that? OK. So those were all the first kind of bat I talked about, the FM bat. And now, let's get into the second group of bats. And this group is called CFFM bats. OK. And these are completely different species of bats. There are new world CFFM bats and old world CFFM bats. It probably has evolved several times. The echolocating pulse and echo are completely different, compared to the FM bat. So in the case of the mustache bat, which is an example of a CFFM bat, this is the spectrogram. Again, frequency on the y-axis and time on the x-axis. But in this case, instead of the pulse sweeping downward, frequency modulated downward-- almost like a chirp-- the pulse is a constant frequency. So the CF stands for constant frequency. That just means the frequency is staying constant as a function of time. And that's this flat section here. In the case of bat sounds, just like human speech sounds, there are many harmonics. There's the first harmonic. That's called CF1. There's the second harmonic, an octave above, CF2. There's a third harmonic, CF3. And there's a fourth harmonic, CF4. And it's conventional on this kind of spectrogram display to illustrate the sounds that have the most energy with the boldest marking here. So CF2 is the one that has the highest sound pressure level. And so that's the darkest here. These other ones, especially CF1 and CF4, are lower in level. They're not as intense. And so they're not as black on this display. And as in the display of the FM bat, the pulse is very intense. And the echo is, of course, much reduced. Echolocating pulses can be 110 dB, if you measure them right at the bat's mouth. They can be very intense. And, of course, the bat contracts its middle ear muscles to prevent those kinds of intense stimuli from damaging its own ears. And then it relaxes the muscles when the echo comes by, and its hearing is fine. OK. So at the end of the CF portion of this echolocating pulse, there is a little FM sweep. OK. So you can appreciate, maybe, that there's an FM1, a little FM2 sweep, a little FM3 sweep, and a little FM4 sweep. And it's thought that this bat uses the FM of the pulse, and the FM of the return echo, to get a measure of the distance that the target is from the bat. OK. If it comes back in 10 milliseconds, and you know the velocity of sound is 340 meters per second, you can figure out how far away that is. And this bat can do that, as well. Now, what's going on with this CF part? Well, as you can see on the spectrogram, the CF of the pulse is not exactly the same as the CF of the echo. OK. The echo has shifted up to be a little higher in frequency in each case. Now, how can frequencies be shifted? Well, it has to do with the Doppler shift. OK. So this is a shift in, in this case, sound frequency. But you can have a Doppler shift for any kind of wave. For example, you can have a Doppler shift for light waves. If you've studied the Big Bang theory of the origin of the universe, there's a big explosion, right? Everything exploded out and is moving far away from us. So you look at the light coming from a star that's moving away from you. It's actually shifted a little bit to longer wavelengths, toward the more reddish hues, because it's moving away from you. So Doppler shifts have to do with wave sources that are moving relative to the receiver, or the receiver moving relative to the emitter.
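The two cues just described, echo-delay ranging and the Doppler-shifted CF, both reduce to a couple of lines of arithmetic. Here is a minimal Python sketch for intuition only: the 340 m/s speed of sound and the roughly 61 kHz CF2 of the mustache bat come from the lecture, while the 5 m/s flight speed is an assumed number for illustration; the race-car discussion below unpacks where the Doppler formula comes from.

import math

C = 340.0  # speed of sound in air, m/s (the value used in the lecture)

def target_range(echo_delay_s):
    # Sound travels out to the target AND back, so the one-way
    # distance is half the round-trip distance.
    return C * echo_delay_s / 2.0

def two_way_doppler(f_emit, v_closing):
    # Echo frequency for a bat closing on a target at v_closing m/s.
    # The target "hears" an upshifted pulse and re-radiates it, so the
    # shift is applied twice -- classic two-way sonar Doppler.
    return f_emit * (C + v_closing) / (C - v_closing)

# The lecture's example: an echo returning 10 ms after the pulse
print(target_range(0.010))                 # 1.7 meters

# A bat emitting CF2 at 61 kHz while flying 5 m/s toward a leaf
print(round(two_way_doppler(61e3, 5.0)))   # ~62821 Hz, a positive shift

Note the asymmetry: range comes from timing the FM sweep, while relative velocity comes from the frequency shift of the CF portion, which is why having both components in the pulse is so useful.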
Another example of a Doppler shift, this time for sound, would be if you were in the grandstand of a race track, and the race cars were going around a big oval. OK. And you hear the sound of their engines. As the race car comes toward you, along the straight away, it sounds like it's higher in pitch because it's moving toward you. As it passes you and then starts to move away from you, it sounds like it's lower in pitch. So the thing you'd hear would be [BUZZING] as each race car went by you. So, as the race car is here, and you're the observer listening here, it emits-- let's say-- a pulse of sound. OK. This might be the peak of the wave front, if it were just a sinusoid, let's say. Now, by the time a race car coming toward you has emitted the next peak, the race car's actually moved. OK. So the peak is here. And the next peak is emitted. And it's very close together. If the race car is moving away from you, it emits one peak of sound. And then by the time it emits the next peak of sound, it's moved a little away from you. The peaks are farther apart. And we know the sound source that has a quick oscillation sounds like a high frequency. And the sound source that has a very slow oscillation sounds like a low frequency. So Doppler shifts are positive, higher frequencies for objects making sound moving toward you. And Doppler shifts are low, or negative, in frequency. They make lower frequencies if the object is moving away from you. It's just the physical characteristics of sound coupled with movement. OK. In the case of these positive Doppler shifted echoes, we know either that the object that has been reflecting the echo is moving toward the bat or, conversely, that the bat has been flying toward the object that is emitting the echo. So a positive Doppler shift means things are getting closer together. And a negative Doppler shift would be things are going farther apart. So not only does the bat, from its FM sweep, get an indication of how far it is away from the target. But by its Doppler shifted CF part, it gets an idea of the relative motion of the target. Why is that important? These types of bats, instead of hunting in open air-- like the FM bats we have-- these are tropical bats. And they hunt in dense vegetation, like the tropical rain forests. And there are millions of objects around. There are leaves. There's vines. There's lots of clutter here. What the bat is interested in is not stationary clutter. Presumably, things that are Doppler shifted all the same. But something that is moving in all this clutter. It's very interested in moving objects because that's a life form, perhaps. And just imagine the kind of a Doppler shift that would be made by a moth, first beating its wing toward you, if you were the bat. And then beating its wing downward, and away, from you. OK. That's a very complicated positive and negative Doppler shift that the bat would pick up on its return echo. And that would be a very interesting target to the bat. Yeah. AUDIENCE: And how do they tell that the difference is when they're moving versus if the object is [INAUDIBLE]? PROFESSOR: They don't care. All they care about is that they might be getting thousands of Doppler shifted echoes from the targets in front of them. Let's say they're flying toward a whole bunch of tropical rain forest vegetation. There are going to be thousands of positive Doppler shifted echoes. And then something is moving away from them, or toward them, faster than the background. All they care is that. 
It's Doppler shifted differently relative to the background. They're just looking for something special. That is, something that's moving relative to the background. Any other questions on that? So you can, of course, design sonar systems. Submarine sonar systems work by sending out a pulse of sound, and listening for the echo. And most of the kinds of sonar systems that we have send out a ping, which is a frequency swept signal, and listen for the echo, because they're mostly interested in the distance from a target, and whether there's a target out there. This is a very unusual type of echolocation, or sonar, if you will. Now why am I bringing this up? Well, it's very interesting because the bat gets two cues instead of just one. Also because a lot of work on bat cortex has used CFFM bats. And one of the most popular has been the so-called mustache bat, I believe because it has a noseleaf that's between its upper lip-- it looks like a mustache-- and its nostrils. So it's called the mustache bat. And a lot of this work has been done by a researcher, who is still active, at Washington University in Saint Louis. And his name is Nobuo Suga. And he was the first, really, to work successfully on the bat auditory cortex. A lot of his work comes from the 1970s, '80s, and '90s. And before I explain this bottom part of this slide, let me just go on and show you the kinds of experiments that Suga did. And here's one from one of his publications in the 1980s. So Suga's work was innovative because he played around-- well, first he rationalized, OK, I could try any sound stimulus I'd like to. I could try clicks. I could try pure tones. I could try noise. I could try speech. I could try-- but why don't I try what the bat listens to over and over? It listens to a pulse. And then a little bit later, an echo. And this turned out to be a very, very wise choice, as we'll see in a minute. Secondly, about the time of the 1970s and '80s, speech researchers in human speech were using synthesized speech. Of course, we all know what synthesized speech is now. But back then, it was very novel. And Suga says, well, I'm going to use synthesized bat echolocating calls. And here is an example of a synthesized pulse. So this is for the mustache bat. And this looks like CF1-FM1, CF2-FM2, and CF3-FM3. So he's just using three harmonics. So one thing about synthesized calls is you can do things like dispense with one harmonic, if you want to, easily take it out, and put it in an echo. So here's a pulse. And here's an echo. It's a little bit Doppler shifted. You can look at the response to the pulse, and to the echo. And this lower trace is a histogram from a single neuron in the auditory cortex of the echolocating bat to a pulse, and to an echo. There's not much response. Suga found that when you play a pulse and, a short time later, an echo, you get a huge response. And that's what's indicated here. So this is the pulse-echo combination. And these are synthesized stimuli. And you can read in the original paper, it looks like it's not given here, exactly what the delay is between the pulse and the echo. But Suga tried various pulse-echo delays. And he found that cortical neurons, in many cases, were very sensitive to the exact delay between the pulse and the echo. So they were, if you will, delay tuned. And that's indicated here, in the first bullet. "The neuron pictured above responds little to--" blah, blah, blah. "--a pulse or an echo alone. But vigorously to a pulse followed by an echo at a certain delay.
In this case, 9.3 milliseconds is the best delay for this neuron." So this is a delay tuned neuron. Once Suga did recordings from different parts of the bat cortex, he found that the best delay was actually mapped along the surface of the cortex. So here is some of his work from the 1990s, showing you maps of best delays, and other properties, in the bat auditory cortex. So here is a side view of the bat cortex. This is looking at the left side. This is the front, where the olfactory areas are. Way in the back would be the occipital cortex. And our old friend, the auditory cortex, is in the temporal region, on the side of the brain, just like it was in the cat. And like it is in the human. And here is A1. And this rectangle here is expanded here. And some of Suga's maps are shown. This part right here-- the biggest part-- is cortical field A1, which, as we've seen in other animals, is the tonotopically organized field. And it's tonotopically organized in this bat. And these numbers and lines are the iso-frequency laminae. So remember, the experiment here is to go in and sample at a specific place in the cortex, and find the characteristic frequency for neurons in that column. And then move the electrode a little bit. Do the same. Get the tuning curve. Get the CF for those neurons. And so on and so forth. And build up a map here. So just like in the cat, posterior areas are tuned to low CF. Low CFs in the bat are 20 kilohertz. We're dealing with very, very high frequencies. As you go more rostrally, the CFs get increasingly high. And at the very rostral end of A1, the CF is 100 kilohertz. Extremely high. And everything looks exactly like other mammals, except for this huge area right in the middle of A1. And almost all of the neurons here are tuned to between 61 and 66 kilohertz. And you should perk up your ears a little bit because that is where the most intense harmonic of the echolocating pulse is, right around 61 kilohertz. And a lot of the Doppler shifted echoes are just going to be a little bit above that. If the bat is flying toward the target, this region goes up then to 66 kilohertz. So there is an expanded region of A1 in the bat that's tuned to a very important frequency for the echolocating signal. And, at first, this was called an acoustic fovea. Because if you go down into lower nuclei of the bat pathway, and if you actually go into the cochlea, you find an expanded region of the cochlea devoted to these same frequencies. That is, you go along the basilar membrane, starting at the most apical regions, and you march down it. When you get to the 61 kilohertz place, there's a lot of cochlea devoted to processing that area. And so it's like the fovea of the eye, where you have lots of receptor cells packed into a certain part. Here you have a lot of hair cells packed in-- an expanded region of the basilar membrane-- processing this small range of frequencies. So this is very much different from other mammals. And the cochlea of this CFFM bat is also very different. Now, we were talking about delay tuned neurons. And Suga found a very interesting area near A1 in which there is a mapping for best delay. And that's indicated here. And the best delays are marching from short to longer best delays, as these arrows go along here. Now they're marked FM1, FM2, FM3. So what does all that mean? So Suga found that with his synthesized pulses and echoes, he could dissect this rather complicated pulse-echo constellation into smaller parts.
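Before the stimulus examples, here is a toy sketch in Python of the delay tuning just described. This is not Suga's model: the Gaussian tuning shape and the 1 ms tuning width are assumptions purely for illustration; the 9.3 ms best delay and the 340 m/s speed of sound are from the lecture.

import math

C = 340.0  # speed of sound, m/s

def delay_tuned_response(echo_delay_ms, best_delay_ms=9.3, width_ms=1.0):
    # Toy facilitation: response peaks when the pulse-echo delay
    # matches the neuron's best delay, and falls off around it.
    return math.exp(-((echo_delay_ms - best_delay_ms) ** 2)
                    / (2 * width_ms ** 2))

def best_delay_to_range_cm(best_delay_ms):
    # Convert a best delay into the target range it corresponds to.
    return 100.0 * C * (best_delay_ms / 1000.0) / 2.0

print(delay_tuned_response(9.3))    # 1.0 -> maximal at the best delay
print(delay_tuned_response(5.0))    # ~0.0 -> little response off-delay
print(best_delay_to_range_cm(9.3))  # ~158 cm: this neuron's target range

With that toy picture in mind, back to Suga's actual synthesized stimuli.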
And here's an example of a stimulus where you have, it looks like, CF2-FM2, CF3-FM3. And you have the echo for CF1-FM1, and the echo for CF3-FM3. And that hardly gave any response to the neurons. Here's an example where you have only CF1-FM1 for the pulse, and only CF2-FM2 for the echo. And it gave a big response. And you could do even more. You can strip off everything except the FM1 and the echo FM2. And you get a big response from the neuron. So this type of neuron, then, would be called an FM1-FM2 best delay neuron. FM1 is the pulse. FM2 is the echo. And with a certain delay between those two, you get as big a response as with the whole constellation of pulse and echo. And those neurons were located in this specific region, called FM1-FM2. And their delays were mapped along this axis, with delays going from 0.4 to 18 milliseconds. And knowing the velocity of sound, you can convert that to a target range of between 7 and 310 centimeters. That's how far the target was from the bat at that specific best delay. Suga also found some other best delay regions. For example, FM2, FM3, so on and so forth in other adjacent parts of cortex. And this is really beautiful work, showing specializations for cortical neuron responses, and showing mappings for those specializations in the bat cortex. We don't really have any data anywhere near as beautiful on non-echolocating mammalian cortex that show specialization for specific features of sound stimuli as we do like this in the bat cortex. And this is really beautiful work. This is Nobel Prize deserving work because it really shows us what this bat cortex is doing. It's responding to specific features of the pulse, and the return echo arriving a specific delay later. So it's very beautiful work. Let me just mention one other region here. Suga showed nearby a region where the neurons are specialized for certain combinations of the constant frequency of the pulse and the Doppler shifted constant frequency of the echo. So this is the CFCF region right over here. Yeah, question. AUDIENCE: In the last diagram-- PROFESSOR: This one or the one before? AUDIENCE: This one right here. PROFESSOR: OK. AUDIENCE: The last graph, like the middle bottom. PROFESSOR: This guy? AUDIENCE: Why is it that the neuron is responding, like, before the onset frequency? PROFESSOR: I don't know why that is. There's some clue to that, as to why this stimulus starts here. I don't know. I don't know the answer to that. Let's see if it says anything in the caption. I don't know. I don't know why that is. Sorry. I'll have to dig out the paper and figure that out. Any other questions? So one thing that's gone on after Suga's early work on these specializations is to ask the question, is this really happening in the auditory cortex? Or is the cortex merely a reflection of some beautiful processing at a lower level of the pathway? And to a certain extent, best delay sensitivity is found at lower levels. For example, the inferior colliculus has some best delay tuned neurons in the echolocating bat. So there are probably more in the auditory cortex. But they can arise at the inferior colliculus. OK. So that's what I wanted to say about bat echolocation. And now I'm going to move on and spend the last part of today's class talking about speech sounds. So we had this particular slide in an earlier lecture, I think the very first lecture that I gave, talking about what speech sounds are.
So speech sounds, obviously, are formed in humans by the vocal cords, or vocal folds, closing and opening during airflow from the lungs to the upper vocal tract. And this closing and opening of the glottis gives rise to the so-called glottal pulses. When the vocal cords are closed, there's no airflow coming out. But when they open, there's turbulent airflow and it makes a sound. So these are pulses. And they have a whole bunch of different frequencies. So this is the wave form as a function of time. Sound pressure as a function of time. And this is the spectrum showing the different frequencies that are formed. There's a whole bunch of different frequencies in your glottal pulses. It's very complicated. To form different speech sounds, you do things with your upper vocal tract. In the case of vowels, you position the muscles so that your upper vocal tract forms filters that enhance and decrease some of these frequencies. And after you apply the filter function of the vocal tract to this glottal pulse spectrum, you get this type of spectrum where there are certain peaks. And in the production of a vowel, these peaks are at different frequencies. So, for this example, the vowel "eh" in hit, you have a very low peak, and a couple of high peaks up here. These peaks are called formants. And they're labeled by Fs. So we went over this before. So there's F1 here, F2, and F3 here. And certain cochlear implant processors, of course, try to look at the acoustic spectrum and pick off these formants. And they present a lot of electrical stimuli to the electrodes that correspond to them in the cochlea. For this one, we have a lot of stimulation at these low frequency electrodes, which would be apical in the cochlear implant. And then in the intermediate electrodes, they completely shut them down, even if there's a little bit of background noise. And then they would present a lot of stimuli at the position corresponding to F2 and F3. And that's an effort to decrease background noise, which is always a big problem in listening to any kind of acoustic wave form, but especially if you have a cochlear implant. So this vowel, "ah," in the word "call" has two formants very low in frequency. And one in the middle frequency. It sounds very different, of course. And your vocal tract position is very different. And this vowel, which is "oo," as in the word cool, has three fairly evenly spaced formants here. Your vocal tract is yet in a different position. And you interpret this as yet a different vowel. Now that's a display that's not very conventional. Much more conventional is to look at a speech spectrogram. So this is very similar to what we've just looked at for bat echolocating pulses. This spectrogram is a graph on the y-axis of frequencies. And now these are more normal sonic, if you will, frequencies. These are well within the human range, of course, going from 0 to 7 kilohertz on this axis. This is a time axis here. And again, the higher in level, the darker the display in the spectrogram. So there's some really dark bands here. And there's some very light stuff here, and here, and here. And this is the utterance-- "Joe took father's shoe bench out." OK, and that's what the sound looks like when you make that utterance. So the spectrogram plots the frequencies of speech sounds over time. What we've talked about-- up until now-- are voiced segments, which are mostly vowels. So in this utterance, you have a bunch of vowels here.
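Before picking individual vowels out of that spectrogram, here is a minimal Python sketch of the source-filter idea just described: an impulse train stands in for the glottal pulses, and a cascade of simple resonators stands in for the vocal tract filter. The formant frequencies and bandwidths are rough textbook-style values I've assumed for an "ah"-like vowel, not numbers from the lecture.

import numpy as np
from scipy.signal import lfilter

fs = 16000                    # sample rate, Hz
f0 = 100                      # glottal pulse rate, Hz
source = np.zeros(fs // 2)    # half a second of "airflow"
source[::fs // f0] = 1.0      # impulse train = idealized glottal pulses

def resonator(x, freq, bw, fs):
    # Two-pole resonance: boosts energy near one formant frequency.
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r * r]
    return lfilter([1.0 - r], a, x)    # (1 - r) is a rough gain term

vowel = source
for freq, bw in [(700, 80), (1100, 90), (2600, 120)]:  # assumed F1-F3
    vowel = resonator(vowel, freq, bw, fs)
# 'vowel' now has spectral peaks -- formants -- near 700, 1100, and
# 2600 Hz; changing those three numbers changes which vowel you'd hear.

The design point is that the source and the filter are independent: the pulse rate sets the pitch, while the resonator frequencies set the vowel identity.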
Here's a nice one, "ah" in the word father's. And you can quite clearly see there's a nice band here that would be F1. That would be at about 500 hertz. And F2 is about at 1 kilohertz. F3 would be about at 2 kilohertz. And there's a fourth formant about 3 kilohertz. And that's the very beautiful vowel, "ah" as in father. Here's another one. "Joe." So "oh." "Oh" is a vowel where, in this case, there's a beautiful stable formant here. But here is a formant that's transitioning from higher frequency, maybe about 1.5 kilohertz, down to below 1 kilohertz. So it's Joe. And there's a higher formant here. OK. So those are the vowels, or the so-called voiced segments. And voicing just means that there's a constant outflow of sound coming through the vocal tract. And you can make these sounds forever. You can say, ahhhhhhh. And you can just keep going if you want to. Of course, you don't in normal speech. Now for consonants, there are several types. And these are generally called unvoiced segments. Mostly consonants are intervals containing bands of frequencies, swept frequencies, and silent intervals. So for example, one of the consonants here is F in the word fathers. And right before, at the beginning of the sound F, you go, which is no sound, right? You close your lips. You keep sound from coming out. And you finally go, father, right? And that's what's happening right here. So that's a stop consonant. You stop the vocal tract before you let it go. And the consonant T, as in took, is another stop consonant. So you're not doing anything at the beginning. And finally, you go, took. Right? When you let go, and emit the sound of took, you have a very complex frequency band that's generally high frequencies. 2 kilohertz in this case, up beyond 7 kilohertz. There's that explosion of sound right at the beginning of the consonant T, as in took. OK. Now there have been a lot of studies, of course, on speech coding in the auditory nerve and the cochlear nucleus. And one of the findings shouldn't be too surprising at all. You have auditory nerve fibers that have tuning curves, right? We've been over tuning curves many times before. So a tuning curve is a graph of the sound pressure level needed for a response, as a function of sound frequency here. And here's a 1 kilohertz CF. The CF for this tuning curve would be 1 kilohertz, let's say. And a 10 kilohertz CF might look like that. OK. So you can explore the responses of 1 kilohertz auditory nerve fibers and 10 kilohertz auditory nerve fibers to this type of stimulus. And obviously, the 1 kilohertz fibers are going to be very active during portions of this utterance. For example, there's a lot of 1 kilohertz in this "oh" second formant here. So the 1 kilohertz fiber's going to respond like crazy there. In the vowel "ah," there's a big 1 kilohertz band there. The 1 kilohertz fiber is going to respond a lot there. It's not going to respond here or here. But it's going to respond a lot at the end of the "out" sound. OK. The 10 kilohertz fiber is kind of out of luck, right? It's way up here. It's off the axis. But notice the tail of this 10 kilohertz fiber. If I had drawn it a little bit further it would be extending past 1 kilohertz. So it's certainly going to respond, right, in here, as long as the sound level is high enough. If you keep the sound level of this utterance low, down here, then the level is obviously down here. That 10 kilohertz fiber is not going to respond. But if you boost the sound level, such that you're in the tail of the tuning curve, this 10 kilohertz CF fiber is going to start to respond.
For example, it will respond to these frequencies here, this 4, 5, and 6 kilohertz, maybe the 7 kilohertz. It can respond to things like the consonants, if the sound level is high enough so that they're within its response area. So there's clear CF processing of this type of speech signal at the auditory nerve and in the cochlear nucleus. There's also phase locking. For example, these lower frequencies are within the frequency range where there's really good phase locking for the auditory nerve. Remember, phase locking falls off above 1 kilohertz. And by about 3 kilohertz, there's not much phase locking at all. But many of these voiced, or vowel, segments are going to have low frequencies. And there's going to be good phase locking in the auditory nerve or cochlear nucleus. So that's just sort of a review. Think about the auditory nerve response to these speech signals because clearly, the auditory nerve is going to respond very nicely, in terms of what its CFs tell it to. Now, there's been a lot of interesting work. Of course, I don't have time to get into much speech processing and language representation. But I just wanted to show you some things that relate quite nicely to what we've just gone through for echolocation. Here are some synthesized speech stimuli. OK. And you can do this very nicely on your computer. This is a spectrogram of the synthesized sound. Frequency is on the y-axis and time, in this case in milliseconds-- it's a very quick stimulus-- is on the x-axis. And there are several harmonics, very much like we had for the CF echolocating bat sound. And there's some regions of constant frequency. And there are clearly some regions of frequency modulation, very much like the bat echolocating signal that we just went over, except that now the modulated part of the signal is in the front instead of at the back like it was for the bat. And so right here, on the third formant, is a very interesting transition that's not shown in black. It's shown in white. Because in the work of Liberman and Mattingly from the 1980s, they studied this so-called formant transition. So the formant pattern here is the vowel "ah." And you have these three formants, 1, 2, and 3. And the transition leading up into that is either the consonant D or the consonant G. When it's coming down, it's the combination "da." But when this third formant transition is instead going up, it's the consonant "gah." Completely different speech sounds, "da" versus "gah." No one would ever mistake them. Right. And so what Liberman and Mattingly did was they varied this transition into the third formant. Instead of just having one like that, or one like this, they sloped it any number of degrees. All right. And when it goes-- I think when it's falling, it's "da." And when it's rising, it's "gah," if I'm not mistaken. So what would you expect if it was right in the middle? Well, you could expect anything. But actually, the observation is, as you move this formant transition over, subjects do not report something that's in between "gah" and "da." Instead, all of a sudden, they quickly shift from "gah" to "da." All of a sudden. And there's a very sharp boundary in the shift. And the subjects never report something that's intermediate. So this is an example of putting the speech sound, which can be modulating continuously, into two sharply defined perceptual categories. Either "gah" or "da." But nothing in between. No gradual slope in between. It's just one or the other. This gave rise to the idea of categorical perception of speech sounds.
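A sharp identification function like that is easy to sketch. Here is a minimal Python illustration, not a model of real data: the steepness and the boundary location are assumed values, and the point is only that reports jump from "gah" to "da" in a narrow band instead of grading smoothly with the stimulus.

import math

def prob_hear_da(transition_slope, boundary=0.0, steepness=25.0):
    # P(subject reports "da") as the F3 transition slope is varied
    # from rising (negative, "gah") to falling (positive, "da").
    return 1.0 / (1.0 + math.exp(-steepness * (transition_slope - boundary)))

for s in [-1.0, -0.25, -0.05, 0.05, 0.25, 1.0]:
    print(s, round(prob_hear_da(s), 3))
# Output hugs 0 or 1 except in a narrow band near the boundary --
# the sharp categorical switch described above, with nothing
# reported as "in between."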
The other thing they could do is do things like this. Present the black stimuli to one ear and the white stimuli to the other ear. And you get the perception, then, of a speech sound. If you don't present any formant transition at all, what would you expect to happen? Well, what actually happens is you do hear something ambiguous if there's actually no formant transition at all. If there's a formant transition in one ear and the rest of the sound in the other ear, you hear the complete speech sound. If you just present this formant transition and nothing else, you just hear a little chirp, not a speech sound. But you add that to the rest, and you get an unambiguous or categorical "da" or "gah." OK. This is a beautiful series of experiments by Alvin Liberman in the 1980s. Now, in cortex, the interesting question, then, if you draw an analogy between bat signals and human signals-- we've had spectrograms from the two which are not that much different-- the question is, do we have specialized neurons in our cortices that are sensitive to specific features of those signals? For example, features are things like whether these two formants are close together, whether there's a formant sweep going down or going up. Do we have specific, if you will, feature detectors in the human cortex? We don't know that. What we do know clearly is that there are areas that are very selective for language and speech stimuli in the cortex of humans. So in the cortex of humans, we've talked about there being a primary auditory cortex in the temporal lobe here. And we had the little model that showed you that there was a Heschl's gyrus. And that is the site of primary auditory cortex, or A1, in humans. All around that region is an area that's sometimes called perisylvian cortex. And it gets its name from this big sylvian fissure, if you will, that divides the temporal lobe down here from the rest of the brain, especially the parietal lobe. All of this perisylvian cortex is associated with language processing. And how do we know that? Well, of course, imaging studies. But in the beginning, the early pathologists like Broca and Wernicke, who studied patients who had lesions in the cortex, mostly from strokes, but sometimes from other injuries, showed that lesions in this region of the brain left patients with deficits in language processing, especially with a deficit called aphasia. OK? Disorders of comprehending or producing spoken language are known as aphasia. And these aphasias are often classified into types. If you're a neurology resident, or you do your medical rotation in neurology, you will see patients with so-called Broca's aphasia. And this, originally, was brought to light by Broca. He saw such patients with lesions in this part of the brain, which has come to be known as Broca's area. That's part of the frontal cortex, the lower frontal cortex, near motor areas. And the clinical manifestation is a major disturbance in speech production, with sparse or halting speech, often misarticulated, missing function words and parts of words. So this is clearly a problem with producing speech. Sometimes this is called motor aphasia. Wernicke is another early physician who saw patients with damaged cortices. He saw some of them with damage to this region, in the caudal temporal lobe and associated parietal lobe. It's an area which has become known as Wernicke's area. And here, the clinical manifestation is completely different. In this case, the production of speech is fine.
But it's a major disturbance in auditory comprehension. OK. So you ask the patient something and they cannot understand you. But they have fluent speech production. It's fluent speech with maybe disturbances of the sounds and structures of the words. But the major deficit is in the auditory comprehension, language comprehension. So the question, then, has always been, is this Broca's area the motor area for production of speech and language? And is this Wernicke's area the area for comprehension of speech? And clearly, this is a very simplistic idea, what would be called the localizationist idea-- I'm not sure if that's quite the word. But if you're a so-called localization proponent, you would say each little part of the cortex has its own function. And they do that independently of all the other areas. So Broca's area is involved in producing speech. And Wernicke's area is responsible for comprehending speech. And from imaging studies we now know this is clearly a very simplistic view, and probably an incorrect view. It's more likely that this whole perisylvian cortex contributes to language processing. And so people who subscribe to that theory would be called holistic. They'd have the holistic view of processing in cortex. And we'll go over the imaging studies in just a minute. One thing I want to make sure to say is that language processing is clearly a cortical phenomenon. That is, if you have injury to the cortex in these specific areas, you're likely to have aphasia. If you have injuries to the brain stem, to the thalamus, you are much less likely to have any kind of aphasia. So clearly, the cortex is the place where language is processed. Another thing about cortex and language processing is that it's usually lateralized into one hemisphere or another, OK? So if you're right handed, usually your language is processed in the opposite hemisphere, in the left cortex. And so how is that known? Well, if you have a stroke patient who's right handed, they have a lesion in the left cortex, they show up with aphasia. If they have a lesion in the right cortex, there's minimal effect on their language functions. Another way is by the so-called Wada test. So people who are getting ready to have cortical neurosurgery-- so why would you ever want to have cortical neurosurgery? Anybody? So another big disease in the cortex is epilepsy, right? Epilepsy is uncontrolled activity, usually starting in the cortex. Of course, the first line of attack is by medication. But some epileptic patients have epilepsy that is not controlled by medication. And they have seizures every half hour. And it's pretty much intractable. So the last line of attack, then, by the neurosurgeons is to try to go into the cortex and find the part of the cortex where the epileptic focus begins. And then they'd lesion that. And this is a successful treatment. But if the surgeon goes in and lesions part of the language areas, you have a patient that wakes up as an aphasic. That's not a happy patient. So the surgeons do lots of tests before such surgery. And one is to try to figure out which hemisphere is processing the language. So they do the Wada test. Has anybody heard of the Wada test? OK. They take the patient, of course, and-- if they're smart and they plan ahead-- the patient is seated. OK? And they have a carotid artery on the left side, and a carotid artery on the right side. And into the carotid artery is injected a quick acting barbiturate anesthetic.
On one side, that carotid artery feeds one hemisphere of the cortex and not the other. So the patient is seated because the patient is likely to slump-- they're going to have some motor problems-- and the patient might fall over if they were standing up. But in the test, the patient is supposed to recite from reading or from memory. And the anesthetic is injected. And as soon as that anesthetic hits the hemisphere that's processing language, the patient stops reciting. That's if they injected on the correct side. If they injected on the other side, the patient keeps reading, keeps reciting. So that's the Wada test. Language function is mostly in one hemisphere in right handed individuals. In left handers, things are a little bit different. Left handed individuals sometimes have the opposite hemispheric dominance. Sometimes they have language function distributed bilaterally. Sometimes they have language function in the same side as their handedness. OK. But for this, I'm talking about right handers, OK? Here's some very interesting work looking at the language areas in people, in postmortem material, done by Al Galaburda. So he's at Beth Israel Deaconess Hospital now. And he looked at the left and the right hemispheres. And he looked at the very closely associated language areas, of course, especially right near A1. And so here-- you can't see them on a side view like this-- but if you cut off the top of the cortex, which you can do with postmortem material, and look down on the superior surface of the temporal lobe, you see these views here. And the area that's just caudal to the primary auditory cortex is called the planum temporale. And there are clearly some left-right asymmetries in that region of the brain that almost certainly relate to language processing in that area. OK? So there are anatomical asymmetries in the perisylvian regions. And so this is the first time in our course, now, where right and left makes a big difference. All along, we said it doesn't matter if we stimulated the right ear or the left ear. And here, clearly, there is a dominant hemisphere for language. Now finally, imaging studies have shown us a great deal about the cortical processing of language. And here's data from a PET study, in which the subjects are listening to language stimuli. And these happen to be French speaking subjects. So the last condition is a story in French. And obviously, the subjects understood the story. They could tell you what was going on. And these are, by the way, right handed subjects. And here's the imaging of the areas in the left hemisphere, which would be expected to be the dominant hemisphere for language. And this is the other hemisphere, which shows a lot less activation. The activation in the areas where the subjects were listening to language they understood is in the superior temporal area, the superior temporal gyrus, including the temporal pole here, in purple. You can't see the yellow very well. But believe me, there's a lot of activation here. This blue area, labeled IFG, inferior frontal gyrus-- this is Broca's area. And why does it light up if Broca's area is only a motor area? It's clearly involved in motor functions. But here are imaging results from subjects just listening, not producing language, where Broca's area lights up. It's activated on the dominant side, just in the listening task. Contrast that to when the subjects were listening to a story in a language that they did not understand. This language is called Tamil.
And none of the subjects could speak it. There's hardly any activation in the language areas. There's a little bit of yellow activation near the primary auditory cortex. It's pretty symmetric, left to right. And that's just what you'd expect if you were, for example, given pure tones or noise. This is a nice control because, presumably, this language has about the same frequency content. And other factors are fairly similar between these two languages. The one difference is the subjects were not perceptually aware of what the story was about, in the case of the unfamiliar language. These intermediate conditions, some of them having pseudo words and anomalous sentences, didn't light up the language areas to a great degree. But listening to a list, in this case, of French words-- with which the subjects were familiar-- again showed activation, I'll point out to you, in Broca's area in the dominant hemisphere, in these right handed subjects. So again, clearly, just a listening task can light up Broca's area. And so that is a very clear example showing you that these so-called motor areas, like Broca's area, are involved in listening and cortical processing of language stimuli. It's not just involved in motor production of speech, even though what we call clinically Broca's aphasia has a major disturbance in speech production. OK. And one final thing to leave you with. Broca's area tends to light up, in this case, with fairly simple stimuli. But it tends to light up in other studies, like in cases where the sentences have difficult grammar or complex meaning. And so the subjects, you can imagine, are really listening hard and trying to figure out the meaning of a sentence that has a complicated grammar. I think we've all written such sentences. We've all tried to read them from other writers. And it takes a lot of brain power, then, to decode that, and figure out the meaning. And maybe that's what happens here. Broca's area's called in when the task gets more difficult than just a simple list of words. In this case, it lit up. And in other studies, it's clearly showing more activation when the task gets more difficult. OK. So we're out of time. I can take a question or two. And just a reminder, class meets at Mass. Eye and Ear on Wednesday for the lab tour. So I'll see you over there.
MIT_904_Sensory_Systems_Fall_2013
11_The_neural_control_of_visually_guided_eye_movements_2.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu PROFESSOR: All right good afternoon everyone. What I have on the board here is what we're going to cover today and what we had covered the last time. Those in blue here are the ones we have covered the last time and the rest of them, five through nine, are the ones we are going to cover today. Actually the last time I didn't have this last topic here, dreaming and rapid eye-movement sleep; if there is time I want to say a few words about that because that's a very interesting topic and brings us to make a few comments about Freudian theory. All right then, so we are going to start with a description of the cortical structures that are involved in eye-movement control. Now the first thing I want to do is to show you once again the monkey brain. You're already familiar with those items which are on here. Here's V1, here's V4, here's the superior temporal sulcus that contains areas MT and MST and of course here's the central sulcus, principalis, and the arcuate. Another structure involved in eye-movement control is the lateral intraparietal sulcus which is in here, and then another one is the medial intraparietal sulcus, but those two perform similar tasks and so I'm not going to talk about the MIP separately from LIP. And then here in the frontal lobe we have the frontal eye fields which, as the name implies of course, have a lot to do with eye-movement control, and then here very close to the midline we have the medial eye fields which also play a role in eye-movement control. So now we have all these structures, and this is not by any means totally complete, because these deal mostly with saccadic eye movements and to a lesser degree with pursuit eye movements, but the fact is that all these areas play a significant role in eye-movement control and of course, those people who want to understand eye-movement control need to figure out what these various structures do for you to enable you to look around with the great ease that you can actually look around with. So anyway, let's try a number of ways of going about finding out the operational characteristics of these areas and furthermore also finding out how they interconnect. Now one approach that has been very useful in delineating the areas in the visual cortex and even in subcortical areas is to use electrical stimulation. Because it has been found, as we had discussed the last time, that when you electrically stimulate structures heavily involved in eye-movement control, like the colliculus, at low current levels electrical stimulation can elicit a saccade, and by looking at the characteristics of that, as we had in the colliculus, you can gain further insight about what the roles are of these various areas. And of course people have done electrical stimulation all over creation in the visual cortex as well as in other cortical areas, thereby trying to determine whether or not the electrical stimulation elicits a motor response. And of course if you do this in motor cortex then you get a motor response, and if you do it in areas which are specifically connected with eye-movement control you get eye movements. So here is an example of the kinds of things we can do.
Here is a monkey brain and of course as we have discussed already, from the brain stem the signal is sent to the eye muscles, which provides the so-called rate code, and then above that via the superior colliculus, and then in the back we have V1 that you're already familiar with, V2, LIP, the frontal eye fields, and the medial eye fields. So now let's ask the question, what happens when you electrically stimulate these cortical areas? And also to compare that with what happens when you stimulate in the superior colliculus. So we already know that when you stimulate the colliculus, wherever you put the electrodes in the colliculus, you find out where the receptive field is of the cells that you will be stimulating, and when you then go ahead and stimulate you get a saccade that brings the fovea into the center of the receptive fields of the neurons that you are stimulating, and that is laid out in a nice topographic fashion and that is shown here in a schematic fashion. So that no matter where the eye starts, at any given point, if you electrically stimulate you get the same vector saccade in the superior colliculus. So you have what is called the vector code which, if you remember, is quite different from the code that you have in the brain stem where you have a rate code. So now the question is, suppose we now start stimulating these other cortical areas that I had designated in the previous slide and ask the question what happens in those places? All right so if you stimulate the visual cortex, in this case V1 for example, you get the same kind of coding operation, you get vector saccades OK, constant vector saccades. Then if you do the same thing in LIP you also get a constant vector saccade and if you do that in the frontal eye fields you still get the same thing. So all of these areas seem to be coding saccadic vectors, but now when you stimulate in the medial eye fields you get a very different kind of effect. Very interesting: what you get here is what is called a place code, meaning that when you stimulate in various regions of the medial eye fields, the eye, wherever it starts, converges on a particular point which we'll call the motor field, OK? And different regions in the medial eye fields have motor fields at different locations to which the eye will saccade when you stimulate there. So that's a very different kind of code from all the others. So now the next question you're going to pose is, how do the signals from all these cortical areas get down to the brain stem? So what kind of experiment do you think you would want to do to get some easy answers to that? To perhaps highlight that, some people thought that the signals from all of these areas go down to the colliculus and then the colliculus sends its signals down to the brain stem. So if that's the case, what experiment would you do? Well the experiment you would do is you would remove the superior colliculus OK? And then again stimulate all of these areas. If that hypothesis, that all these areas send their signals through the superior colliculus to the brain stem, is correct, then you would no longer get any saccades when you electrically stimulate in the cortex at any of these sites. Got it? All right now, let's see if an experiment like that had been done and yes, yes it has been. So here it is, here we're going to remove the superior colliculus. All right think about it for a minute, think what you would hypothesize will happen? Well what happens is quite dramatic.
When you stimulate in V1, V2, and LIP you no longer get a saccade. Somehow the signals to generate the saccade from these areas seem to be going through the colliculus because once the colliculus is not there those signals are ineffective. Now what happens in the frontal lobe? All right what happens there is quite interesting. You still get saccades and you still get the same coding operation. You get a constant vector code when you stimulate the frontal eye fields and you get your place code in the medial eye fields and you get that at the same old threshold. So that discovery then resulted in the hypothesis, first of all, that these posterior areas send their signals to the brain stem through the colliculus-- we can call them the posterior system-- and the ones in the anterior portions of the brain, the medial and frontal eye fields, seem to be able to gain direct access to the brain stem, bypassing the colliculus, because they're still effective when you stimulate there, and so we can call that the anterior system. All right, now of course these two systems need to talk to each other, which they do, there are plenty of connections there because of course, if you will, the left hand, it's got to know what the right hand is doing. So anyway this is then a very summary arrangement and now we can proceed to ask some questions about just what do these various areas do? And to understand that data we need to look in more detail at the nature of electrical stimulation and compare that with the nature of eye movements made to visual targets. So let's look at that next. We can look at the effects of paired electrical and visual stimulation. All right so, the first thing we're going to look at is what happens when you stimulate two sites, say in the colliculus, at the same time? And if you remember, medial is up and lateral is down. So if you stimulate each of those alone, here in number one you get an upward saccade, you stimulate number two you get a downward saccade. Now the question comes up, what happens when you stimulate both at the same time? Well, a number of hypotheses have been proposed, and by now it's been well established that what happens is that you get a vector average saccade-- not vector summation, but vector averaging OK? Now, to prove that that is indeed vector averaging, you take the same experiment but you put one electrode in the anterior portion and the other electrode in the posterior portion of the colliculus; this one of course generates a small saccade and this one a large saccade. So if it vector averages, you should get an in-between saccade. If it vector sums, then you should get a bigger saccade than either of those, and you indeed get a vector average saccade. So depending on this arrangement, or this arrangement, you get the same vector average saccade because the only thing that's happening here is you excite the neurons in the colliculus by virtue of this electrical stimulation. Now think about that for a minute. If this were the case with visual stimuli, suppose two visual stimuli come up, it would be a total disaster if you made an eye movement between the two of them wouldn't it? So somehow there are some mechanisms in the brain that force you-- force you, I guess, may not be the right word-- that give you the ability to select one or the other of the visual targets accurately and not vector average them.
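The three stimulation-evoked coding schemes described so far reduce to simple vector arithmetic. Here is a minimal Python sketch; the eye positions, site vectors, and motor-field coordinates are made-up 2D gaze coordinates in degrees for illustration, not data from the studies.

import numpy as np

def vector_code(eye_pos, site_vector):
    # V1 / LIP / frontal eye field / colliculus style:
    # the same fixed vector is added no matter where the eye starts.
    return np.asarray(eye_pos) + np.asarray(site_vector)

def place_code(eye_pos, motor_field):
    # Medial eye field style: the eye converges on the site's motor
    # field, so the start position is irrelevant.
    return np.asarray(motor_field)

def two_site_stimulation(eye_pos, vec1, vec2):
    # Simultaneous stimulation of two collicular sites: the evoked
    # saccade is the vector AVERAGE of the two, not the sum.
    return np.asarray(eye_pos) + (np.asarray(vec1) + np.asarray(vec2)) / 2.0

print(vector_code([5, 0], [0, 10]))                   # [ 5 10]
print(place_code([5, 0], [-10, 3]))                   # [-10   3], any start
print(two_site_stimulation([0, 0], [2, 0], [10, 0]))  # [6. 0.], in between

The last line is the anterior-plus-posterior electrode case: a small-saccade site paired with a large-saccade site yields an intermediate saccade, which is the signature of averaging rather than summation.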
Now to accomplish that, logically speaking, what you need is some inhibitory circuits and a mechanism for selection and then a decision as to where to look. All right so to highlight that let me just say one more thing about electrical stimulation. What I told you about vector averaging with electrical stimulation is true even when you stimulate the two superior colliculi, two different locations in the brain; each of those gives you an eye movement into the left or right hemifield, and yet they also vector average. So now let's look at what happens when a similar situation is used in a real experiment where the monkey looks at visual targets, and so what you do here is you present these two targets at the same time. And so the monkey in a sense has to make a decision, or you have to make a decision, as to where to look. And as long as there's a nice big separation here, which is 90 degrees in this case, what you can see here are the real eye movements: half the time the monkey looks to the left, half the time to the right, and it makes very very few in-between saccades. Whereas if you electrically stimulated at these sites in the colliculus you'd get a vector average saccade just like what I've shown you here OK? All right so now that being the case, we can move on and ask some additional questions about what happens when you bring the two stimuli closer together, all right? The closer together they get, the more difficult it is to make an independent decision as to whether to look to the left or the right target that comes up above the fixation spot. So here's an example of that; actually what I'm going to do is this. Let me delay for a little bit what happens with that, and first I want to tell you about the effects that various kinds of lesions have on eye movements, and then we'll talk about this question of the angular separation between the two visual targets. The reason I want to do this first is to give you sort of a real sense of what you see when you do a very informal kind of testing, and by informal testing what I mean is that you're going to see a monkey actually perform eye movements OK? So here we are, here's a monkey, here's his brain so to speak, and this is what we call of course an intact animal, and now what I'm going to do is start the movie for you to see what kinds of eye movements he makes when we present some apple pieces for the monkey to eat. OK are you ready? Here it is. Can everybody see this OK? So you can see that when an apple piece appears the monkey makes a saccade to it and grabs it and stuffs it in his mouth and eats it. OK so that's what a normal monkey does with his fully intact, functional brain. So now we're going to ask the question, what happens if you take out the colliculus on one side, meaning a unilateral lesion? What happens to the eye movements the monkey makes subsequent to such a lesion? Are you ready for that? Here we are. It's the same monkey. You can see that his eye movements may be a little bit slower perhaps but he still looks to the left and to the right quite well. He tends to sort of look towards the side of the lesion when there's no stimulus OK, but other than that he seems to make really rather, rather good eye movements in spite of the fact that he has a colliculus missing on one side.
So what that means is that just looking at what the monkey does in this qualitative manner is not going to tell you too much about what these various structures do, and so consequently you have to carry out some more refined experiments to determine what kinds of deficits do arise when you take out the colliculus or when you take out other cortical structures involved in the generation of eye movements. So let me first of all tell you about a really interesting finding that was made, that had nothing to do with the colliculus at the time it was made; it was a strictly behavioral study, much of it done on humans. What was done is first of all a fixation spot came on and then a single target appeared, and often what was done is that the fixation spot was turned off just a few milliseconds before the target came on, and on each trial it appeared someplace else OK? So then you've collected a lot of data to see what is the nature of the monkey's eye movements, and in particular that initial study, by Fischer and Boch, examined what the latency distribution was of the eye movements made, and they made an incredible discovery that subsequently generated hundreds of studies published in numerous journals, and this is the discovery. Here we have the latency of the saccades made and here's the number of saccades, and what you get is a bimodal distribution of saccadic latencies, amazing. They called the first mode, which has an average latency for a bright stimulus of about 100 milliseconds, express saccades. The second mode they called regular saccades and that took about, I don't know, 135, 140 milliseconds on the average. So they got this bimodal distribution and so people said, my goodness what could this be due to? How do we explain this effect? What are the cortical or subcortical mechanisms that give rise to a bimodal distribution of saccadic latencies? As I've said, a huge number of experiments have been carried out to find this out. So one group of investigators saw this phenomenon and they did experiments with monkeys and they found this is a real effect everywhere; these findings were initially made in Germany, and even in the United States you get a bimodal distribution of saccades in both humans and monkeys. So once it was established as a really solid effect people began to speculate as to what are the underlying neural mechanisms? And as I've told you earlier it was noted that, based on those lesion studies of the colliculus which eliminated saccades elicited from the posterior portion of the cortex but not from the anterior, there was a reasonable hypothesis in proposing that you have a posterior system and an anterior system. And so when people saw this they said aha, now we know why we have these two systems. One is for making rapid saccades and the other for making regular saccades. And so they proposed that the posterior system does one of these and the anterior system does the other. Well that was a nice hypothesis, but so often when it comes to hypotheses as to how the brain works, most of the time the hypotheses end up being wrong, and so what you need to do rather than just hypothesize is to actually carry out experiments to test the hypothesis. So what is the test of this hypothesis? What would you do as an experimentalist? Well, yes. AUDIENCE: You could ablate either the posterior or the anterior pathways. PROFESSOR: Very good, you could ablate.
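Before the lesion results, here is a minimal Python sketch of the bimodal latency distribution just described: a mixture of an express mode near 100 ms and a regular mode near 140 ms. The mode widths and the 50/50 mixing proportion are assumptions for illustration, not Fischer and Boch's numbers.

import numpy as np

rng = np.random.default_rng(0)

express = rng.normal(100, 8, 500)    # express saccades, ~100 ms mode
regular = rng.normal(140, 12, 500)   # regular saccades, ~135-140 ms mode
latencies = np.concatenate([express, regular])

# Crude text histogram of latency (ms) versus count
counts, edges = np.histogram(latencies, bins=np.arange(70, 200, 5))
for c, e in zip(counts, edges):
    print(f"{int(e):3d} ms {'#' * (c // 5)}")
# Two clear peaks appear -- the bimodal signature. In terms of the lesion
# result coming up: removing one colliculus empties the 100 ms mode for
# saccades into the affected field, leaving a unimodal distribution.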
All right, so one thing you can do, first of all, is ablate the superior colliculus, even though when you saw that regular, easy test, just filming it, you didn't see much of a deficit; you sort of sensed that maybe the monkey's a bit slower, but other than that it wasn't that clear. But if you take out the colliculus, then you eliminate the posterior system, in essence, all right? So the question is, what happens when you do that? And then you can ask what happens when you take out some other cortical areas. So let's look at this. Here we have a monkey 10 weeks after the colliculus has been ablated, and we're talking about a major effect here that doesn't recover. You took out the colliculus on one side, the left, OK, so that controls rightward saccades. Therefore when you look at the leftward saccades, those to the intact side of the brain, you get the usual bimodal distribution of saccadic latencies. But when you look at rightward saccades, lo and behold, you don't see a single express saccade, and even the regular saccades have a longer latency than the ones to the intact side, and these are collected at the same time. The monkey is sitting there with his head fixed, sometimes a target appears on the left, sometimes on the right, and you collect hundreds and hundreds of trials that way, and you test this over various time periods. This is at least 10 weeks, two and a half months, after the lesion, and even if you test the monkey a year later you still get the same effect. So this clearly points out that your ability to make these rapid, reflex-like saccades is something that has to go through the superior colliculus, OK? So now we can ask the question: what about if one makes lesions in the frontal and medial eye fields? Think about that for a minute; what would you predict? All right, here we go. We take out the frontal eye fields in this case. Ready? Oops, let me go back. We take out the frontal eye fields, but to give you a sense of the overall effect, first let's just do the same informal testing that I've shown you before with the colliculus and see what the movie looks like, OK? Here's a monkey, not as handsome as the other one, and what happens here is a person behind there sometimes just presents a target and sometimes moves it around so the monkey tracks it. Can you see that? And notice that I haven't told you which side has been ablated. Look at how nicely he tracks on both sides, grabs the piece, stuffs it in his mouth. So the monkey seems to be perfectly fine making saccades to either side and perfectly fine making pursuit eye movements in either direction, indicating that this informal testing doesn't reveal anything truly obvious about the deficit in eye movement control that you have when you take out the frontal eye fields. So once again you need to go on and carry out some more careful experiments to obtain detailed quantitative data. So let's look at that. First of all, let's go back and look at express saccades. You take out the frontal eye fields and, lo and behold, you still get express saccades, indicating that the express saccade is definitely specific to the posterior system. Even further, if you take out both the medial and the frontal eye fields, you still get your express saccades.
So clearly these two areas are not directly involved in generating quick, rapid saccadic eye movements, and the distribution is still bimodal, so that initial hypothesis -- that the first mode reflects the posterior system through the colliculus and the second mode the anterior system -- is clearly totally wrong. All right, so that's what happens there, and once this has been done, you say, well, you've got to find something that the frontal eye fields and the medial eye fields are doing. So let's come up with some other experiments to see whether the monkey shows a selective deficit in anything else when you make lesions of the frontal and the medial eye fields. One thing that had been proposed is that maybe the important factor is some high-level eye movement activity, such as making saccades in quick succession to successive targets out there, OK? The way that can be done quantitatively is to have the monkey first fixate, OK, and after he fixates you present one target and then a second target, so the monkey has to make two successive saccades, and you can vary the temporal delay between their onsets. When the temporal interval is short, both stimuli appear before the monkey starts his initial saccade, so the monkey has to make a plan to execute two saccades, even though he starts the first saccade only after both targets have come on. That's what happens, and of course on different trials you have different kinds of pairings like that. If you do that, what you find is very interesting. Here are the four conditions; this shows the monkey's performance 18 weeks after a left frontal eye field lesion, and this shows it 60 weeks after, both for the intact side and for the side where the frontal eye fields are missing. What you can see very dramatically is that the monkey really has difficulties making a plan to execute two saccades in a row. A quite dramatic effect, significant at the [INAUDIBLE] level, indicating that the frontal eye fields play a role in planning sequences of eye movements. Now, this I want to show you: what happens when you compare the effects of frontal eye field lesions and medial eye field lesions, done over various sequence durations and over several weeks. What you find is that in both cases there is a recovery, but the effect is much, much more dramatic with the frontal eye field lesion than with the medial eye field lesion. So indeed we can say that the frontal eye fields play an important role in planning sequences of eye movements, which is sort of a high-level activity in executing saccadic eye movements. Now what we can do is examine the making of a decision about where to look when more than one target comes up, and the simplest form of that is to present two targets like that, OK? The monkey has to make a decision: am I going to look to the left or to the right? And then you can vary the temporal delay between the two, like that, or like that, OK? And you can do that either in intact parts of the visual field or in those where the frontal eye fields or some other structure is missing. All right, so let's look at what happens. Here is the intact monkey; this part I've shown you before.
All right, this is when the two targets come on simultaneously; in this case the left target comes on 34 milliseconds before the right, and here's the reverse. What you can see is that even a 34 millisecond delay causes the monkey to very much prefer to make a saccade to the target that appeared first. So that's what happens in a normal, intact monkey. Now we can ask the next, related question, still in the normal intact monkey: what happens when you put the two stimuli closer together? OK, here's an example; now they're separated by only 40 degrees, and the data again show that when the targets are, in this case, 67 milliseconds apart, the monkey chooses almost exclusively the target that comes on first. When they're simultaneous, something interesting happens: you get some of the so-called vector-average saccades that you always get with electrical stimulation. You get those, but they're still a minority of the saccades. And the closer you bring the two together, the more frequent the vector-average saccades become. When they're separated by only, say, 10 degrees, they will all be vector-average saccades. So this is what happens in the normal monkey: you get this nice bimodal distribution, and vector-average saccades when the two are simultaneous. Now let's ask what happens when you take out a cortical structure, in this case the frontal eye fields; here we take out the left frontal eye fields. The data above are the same as what I just showed you, with a bigger delay added, just to make it even clearer that by that time the monkey never chooses the target that comes on second. And here we have the monkey after the left frontal eye field lesion, and look at what happens. What happens is that at zero delay the monkey chooses the intact side 100% of the time, OK. There seems to be a little bit of a shift here, and then when the targets are 100 milliseconds apart, you have an equal distribution. So to equalize the choice that the brain makes, you now have to, because of the missing frontal eye fields, present the target on the affected side 100 milliseconds earlier to get the same kind of distribution that you get in the intact monkey. So this further highlights the fact that the frontal eye fields play a significant role in making decisions about the selection of visual targets. All right, now we can look at this more quantitatively. This shows the distribution of choices to the left target; this is the intact monkey preoperatively, and when you take out the left frontal eye fields there is a huge shift: for the choices to be equal, the target on the affected side has to appear about 130 milliseconds earlier. If you keep testing, there is some recovery, but even four years later you still have a huge effect. Now you can ask: what if you do the same experiment in the medial eye fields? If you do that, you get a small effect to begin with, and after just 16 weeks there's full recovery. So obviously the medial eye fields do not seem to play a central role in making decisions as to which target to look at when more than one target appears in the visual field. All right, so we've now done the quantitative work indicating that the frontal eye fields play an important role in target selection and in the sequencing of eye movements.
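The choice curves just described are psychometric functions of the onset asynchrony, and the lesion effect shows up as a horizontal shift of the curve. Here is a hedged sketch of how one might quantify that shift with a logistic fit; the choice proportions below are invented to mimic the pattern reported in the lecture, roughly a 100 to 130 millisecond shift, and are not real data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(delay, mu, sigma):
    # P(choose the affected-side target) as a function of how many
    # milliseconds earlier that target appears; mu is the point of
    # equal choice.
    return 1.0 / (1.0 + np.exp(-(delay - mu) / sigma))

delays = np.array([-100, -50, 0, 50, 100, 150, 200], dtype=float)
p_intact = np.array([0.02, 0.15, 0.50, 0.85, 0.97, 0.99, 1.00])
p_lesion = np.array([0.00, 0.01, 0.05, 0.20, 0.50, 0.80, 0.95])

for label, p in (("intact", p_intact), ("FEF lesion", p_lesion)):
    (mu, sigma), _ = curve_fit(logistic, delays, p, p0=(0.0, 30.0))
    print(f"{label}: equal choice when target leads by {mu:.0f} ms")
```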
We can now move on and ask: we've talked about the posterior system and the anterior system, but is it true that there are just these two systems, or are there many more systems that we're not aware of? This should remind you of what had been proposed when we talked about extrastriate cortex: that area V4 is essential for processing high-level attributes including color, whereas areas MT and MST play an important role in motion, and so it was proposed that these are the two major systems in the posterior cortex, the medial and the lateral, if you will. So the question was raised: if you remove both of these areas, meaning the gateways to these systems, V4 and MT, what happens? And when you do that, the monkey is still able to do a lot of things, indicating that we have more pathways from V1 than just these two, the medial and the lateral. So now the question is, what about eye movement control? We should do the same kind of experiment. We should remove the colliculus, which supposedly eliminates the posterior system, and we should remove the frontal eye fields bilaterally, which eliminates the anterior system. So what happens when you do that? Again we're going to turn to the informal test, meaning we're going to take a movie of the monkey who has these areas removed. Think about it for a minute: what do you think is going to happen? OK, so here's the monkey. Ready? The monkey sees well, he makes his movements with his hands quite accurately, but what happens to the eyes? You're watching the eyes, right? No eye movement. Everybody see this? Should I show it again? The monkey makes no eye movements. His natural tendency, just as you've seen in the previous movies, would be to make eye movements to the apple pieces so he can grab them. He still pays attention to them, everything is fine, except he doesn't move his eyes, because he can't move his eyes as a result of having eliminated these two systems. And because of that we can say with confidence that, as far as the oculomotor system is concerned, we indeed have these two major pathways, which, when they're eliminated, eliminate the ability to make saccadic eye movements. OK, now let's go back to the effects of electrical stimulation to gain further insight into what these various areas do. What you can do in these experiments is not only stimulate at a high level to elicit a saccade, but also stimulate at a lower level and pair that with the appearance of a visual target. This then enables you to see whether there's summation or interference. So let's look at that. Here again is a monkey brain, and what I'm going to tell you is what happens when you do this experiment in V1, in LIP, and in the frontal eye fields; I think I may also have something in the medial eye fields. All right, let me describe the experimental procedure for you so that you understand how these kinds of experiments are conducted. You put an electrode into any of these structures and you find the receptive field first, all right? Once you've found the receptive field, you electrically stimulate, and you confirm that the electrical stimulation brings the fovea into the receptive field of the stimulated neurons; you do this in all the areas except the medial eye fields. Then what you do is actually present the visual target there, OK? And lastly, you present two visual targets and electrically stimulate to see how it biases the choice, in most cases with sub-threshold electrical stimulation.
So if you do that, what you find is the following. First of all, if you do this experiment in an intact monkey without stimulation, you find, say, a receptive field here, and then you simply see what the monkey's choices are between left and right. What you plot here is the saccades made to the target in the receptive field, and what you can see is what I've shown you before, namely that when the targets are simultaneous, the monkey chooses left or right with equal probability. Now we're going to add the electrical stimulation and ask the question: can we shift the curve to the left? If the curve shifts to the left, that means the electrical stimulation facilitated the choice; if it shifts to the right, it means the stimulation caused interference; it lessened the chances of the monkey making a saccade into that area, meaning the stimulation created inhibition. Got it? All right, so here's an example in the lower layers of V1. Remember, layer five of area V1 is where you have the complex cells that project down to the superior colliculus. Here, if you stimulate at sub-threshold levels -- and some of these are very low levels, only 7 1/2 and 10 microamps, really very fine currents that by themselves are not going to elicit an eye movement -- the stimulation created a highly significant facilitatory effect. That's in the lower layers. Now, if you do the same experiment in the upper layers, you get the opposite effect: look at this, five and 10 microamps; even at that incredibly low level you get a gigantic interference effect. So that says there's a complex interplay in V1 in the decision process that arises as to whether you're going to look at a target or not. Now we can do the same experiment in LIP, and in some regions you get this huge facilitatory effect and in other regions you get an inhibitory effect. So this structure also plays a significant role in deciding whether or not to look at a visual stimulus, and here it shows that in LIP, as you increase the current in these inhibitory regions, you get a gigantic increase in the latency with which a saccade can be generated, again indicating that LIP plays an important role in whether you're going to look at a target or not. OK, and then if you do the frontal eye fields, everywhere in the frontal eye fields you get a huge facilitatory effect, and in the medial eye fields you also get a facilitatory effect as long as the visual target appears in the motor field. But remember what I told you about the medial eye fields: they have a place code. So one can do a different experiment in which, instead of presenting the target in the motor field, you present the fixation spot there. Here again, to remind you, we have this place code. So now we do this experiment just like what I've shown you before: in one case the target appears in the motor field and that causes facilitation; in the other case we put the fixation spot in there and the location of the target is displaced. When you do that, you get a huge inhibitory effect, because somehow the electrical stimulation forces the animal to keep the eye at the location represented by the motor field in the medial eye fields. This is a very important point, and it means that the medial eye fields play a significant role in deciding how long to look at a target before making the next saccade.
OK, so that was a lot of facts, and you're going to get some more. Let me now summarize what I told you about the effects of sub-threshold electrical stimulation, OK? If you stimulate in the upper layers of V1 and V2 -- V2 I should add to this -- you get interference; in the lower layers you get facilitation; in V4 there was no effect, which I didn't talk about before; in LIP you can get facilitation, interference, and also increased fixation. In the frontal eye fields you get facilitation, and in the medial eye fields, depending on how you set it up, you can get facilitation if the target appears in the motor field and inhibition when the fixation spot appears in it. So that's the basic summary of these effects. Now, what these findings indicate is that inhibitory circuits are somehow essential to our ability to make saccades to selected visual targets, and therefore what we want to do is examine what happens when you use various kinds of pharmacological agents that either decrease or increase inhibition. To explain that, let me first point out that if you take the whole brain again and you look at the colliculus, V1, LIP, and the frontal eye fields, you can study these areas by injecting two kinds of pharmacological agents: bicuculline and muscimol -- people pronounce the second one in different ways. At any rate, bicuculline is a GABA antagonist, meaning that if you inject it, OK, it blocks the effectiveness of inhibition by GABA. Muscimol, on the other hand, is a GABA agonist, meaning that if you inject it, you increase inhibition, all right? I'm sure all of you have had enough background in biochemistry to know what these two agents are. So now what we can do is inject one or the other of these agents and assess two things. First, we assess eye movements, and second, we also obviously have to assess how it affects visual ability. So let's move on and do this. The first set of experiments done with this approach was by Hikosaka and Wurtz, many years ago, in the colliculus. What they did -- a very clever, beautiful experiment -- is that they would put a microelectrode into the colliculus and initially just stimulate to see what kind of eye movement you get, and then they would inject either muscimol or -- what's the other agent? AUDIENCE: [INAUDIBLE] PROFESSOR: Very good -- and see what happens. Now, which of those causes more inhibition? AUDIENCE: [INAUDIBLE] PROFESSOR: OK, so let's look at that then. If you electrically stimulate, just as I've shown you before, you get a constant-vector saccade: no matter where the eye starts, you get the same vector. Now the question is, what happens to the spontaneous eye movements of the monkey when you first inject muscimol, OK, the agent that mimics your GABA, meaning it increases inhibition? What happens is that the monkey hardly ever makes a saccade with the vector represented by the area that has been injected. By contrast, if you inject bicuculline, the monkey keeps making saccades with that vector even when there's nothing out there. It's as if the area has been released from inhibition, and the signal sent down to the brain stem says move your eye, move your eye, move your eye. That's what happens.
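Before going on to the behavioral tasks, here is the sub-threshold stimulation summary from above as a lookup table. The table structure is my own choice; the entries simply restate the lecture's summary.

```python
# Effects of sub-threshold electrical stimulation, by area.
stimulation_effects = {
    "V1/V2 upper layers": "interference",
    "V1/V2 lower layers": "facilitation",
    "V4": "no effect",
    "LIP": "facilitation, interference, or increased fixation",
    "frontal eye fields": "facilitation",
    "medial eye fields": ("facilitation (target in motor field); "
                          "inhibition (fixation spot in motor field)"),
}

for area, effect in stimulation_effects.items():
    print(f"{area:20s} -> {effect}")
```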
So to show this in more detail, we can ask what happens when you use two behavioral tasks, OK: the so-called paired-target task that we've already talked about quite a bit, and a visual discrimination task you're already familiar with, the so-called oddity task, in which you present several stimuli, one of which is different from the others. All right, so here is an example of the paired-target task. Again we vary the temporal asynchrony between the two targets, just as I showed you with the electrical stimulation, and this is the monkey's normal behavior: when the two targets are simultaneous, the monkey chooses each randomly. Now, again to remind you: if the curve goes this way, it's facilitation; if the curve goes that way, we get interference. And let me also show you the oddity task, just to make sure that you have it. All right, that's the oddity task; you know that already. So we can now go on and first examine what happens with muscimol, or however you prefer to pronounce it, OK? Again, what does this agent do? Does it produce excitation or inhibition? AUDIENCE: Inhibition. PROFESSOR: Very good, inhibition, because it mimics GABA, all right? So let's look at what happens. Here we have a normal monkey, the same experiment as before. This is V1; here's a receptive field. You present one visual target there, the other there, then you vary the temporal asynchrony between them, and this is what happens in the normal case, OK? Now you inject the muscimol and ask yourself the question: we increased inhibition, so what do you think is going to happen? Well, it should be pretty obvious. What happens is you get gigantic interference: the monkey practically never looks there, because that area is no longer being activated by the visual stimulus, because of the inhibition. If you follow this over time, even four hours later there's a huge, huge effect, but by the next day the monkey recovers -- luckily, so one can do this experiment several times, because this agent washes out of the brain. All right, so now let's look at what happens in the frontal eye fields. Same experiment, just a different location for the electrode. This is the pre-injection curve, and once again, after the injection, you get a big interference effect, which recovers by the next day. So that is what you get with that. Then let's examine what happens in LIP. Curiously, in LIP there was no effect. All right. So now let's turn to the so-called oddity task, meaning several stimuli, one of which is different from the others, and ask about the monkey's ability to choose the different stimulus. If you do this with a muscimol injection, there's a huge deficit for V1, which you would expect, because it destroyed the monkey's ability to analyze the visual stimulus that appeared in the receptive field. If you do the same thing in the frontal eye fields, you find a mild deficit, and if you do the same thing in LIP, you get no deficit at all. So that's what happens with the oddity task, and now we can ask what happens when, instead of muscimol, we inject bicuculline.
We're going to go through the same procedure I've just shown you, and we start in V1. When you do that, think about it for a minute: now you're supposedly facilitating, because you're eliminating inhibition, so what do you think you would get? Well, you'll be in for a surprise. What you get is gigantic interference again, because putting in bicuculline also disrupted the visual cortex's ability to analyze the visual percept. This again recovers over time; by the next day it's back to normal. Then if you do the same experiment in the frontal eye fields, what do you think is going to happen? Can you predict what's going to happen just by looking at this slide? Why do you think there's such a big empty space here, huh? OK, well, look at that: when you put in bicuculline, we get this incredible facilitation. Just as in the colliculus, the monkey can barely keep himself from making a saccade into the field that has been disinhibited, OK? And that again recovers; bicuculline washes out more rapidly than muscimol, and by the next day it's certainly back to normal. Now we can do the same thing just looking at the eye movements themselves, to further highlight what I've shown you before. If you put bicuculline into the frontal eye fields, OK, the monkey cannot help but make saccades with vectors similar to those represented by the neurons at the injected site, and that's why you see this whole stack of saccades -- buh, buh, buh, buh, bang -- the monkey just can't help making saccades, because the signal to make the saccade has been disinhibited. All right, then you can ask what happens in LIP, and when you do that, you get no effect at all. And if you now do the bicuculline injection with the oddity task, we can once again ask what happens with that. Once again, as I've already indicated, both bicuculline and muscimol cause a major interference with your ability to select the odd target in V1 -- in other words, to discriminate visually -- but if you do the same thing in the frontal eye fields and in the medial eye fields and LIP, you get no effect at all. So, to summarize what this says: we have target selection, which was the two-target task, and visual discrimination, which was the oddity task, and here we have muscimol and here bicuculline. If you do this in V1, you get interference with both. If you do it in the frontal eye fields, you get interference with muscimol and great facilitation with bicuculline, and LIP shows no effect. And then, just to remind you, it had already been shown by Hikosaka and Wurtz that you get interference and facilitation, respectively, in the superior colliculus with these two agents. If you do the same thing with visual discrimination, you get a major deficit in V1 with both agents; in the frontal eye fields you get a mild effect with muscimol and really no effect with bicuculline; and in LIP you get no effect at all. So these manipulations give you a sense of what these various areas do in the generation of eye movements, which involves not just making a saccade to a target, but selecting targets in the visual field, making a decision as to where to look, and also deciding when to look.
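Here is the pharmacology summary in the same tabular spirit: a plain restatement of the results above, with the nested layout being my own choice.

```python
# Effects of GABAergic agents by task and area, as summarized above.
effects = {
    "target selection (paired targets)": {
        "V1": {"muscimol": "interference", "bicuculline": "interference"},
        "frontal eye fields": {"muscimol": "interference",
                               "bicuculline": "facilitation"},
        "LIP": {"muscimol": "no effect", "bicuculline": "no effect"},
        "superior colliculus": {"muscimol": "interference",
                                "bicuculline": "facilitation"},
    },
    "visual discrimination (oddity)": {
        "V1": {"muscimol": "major deficit", "bicuculline": "major deficit"},
        "frontal eye fields": {"muscimol": "mild deficit",
                               "bicuculline": "no effect"},
        "LIP": {"muscimol": "no effect", "bicuculline": "no effect"},
    },
}

for task, areas in effects.items():
    print(task)
    for area, by_agent in areas.items():
        print(f"  {area}: {by_agent}")
```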
Any time you make an eye movement -- and I should have mentioned this more thoroughly before -- you look at something, and how long you look at it depends on how long it takes you to analyze what you're looking at. In most cases it takes you maybe, I don't know, 200 milliseconds or less to say, oh yeah, that's the letter A, or whatever, and then your brain mechanisms tell you: OK, now you know what it is, it's OK for you to move your eye. So that's involved. And then, thirdly, you have an important task in making sequences of eye movements, so that when you look at a picture, like the movie I showed you at the beginning of the previous lecture, you look at something, then you make a decision as to where to look next, and where to look next after that, and if you keep doing this for, I don't know, 20 or 30 saccades, you get to a state where you say, oh, now I understand the picture as a whole. All right, so now we're going to summarize what the tasks are in a very simple situation and what brain structures are involved. First of all, let's imagine that you're looking at the fixation spot, designated by a. Let's assume two stimuli come on, and that means you have to make a decision as to what those two targets are; you have to identify them, because you have to select one of them, OK? Of course, in most cases there are many targets out there; this is a highly simplified version. The next step is to decide which of these two targets -- b and c in the picture here -- you should look at, and so you make a decision, all right; that generates excitation and inhibition through excitatory and inhibitory circuits. Then you decide, OK, we're going to look here, and that means you have to decide which one not to look at in addition to deciding which one to look at, because you've got to make an accurate saccade; you don't want a vector average. All right. Then what you need, of course, is a map, if you will, of the motor field, so that you can generate the appropriate direction of the saccade toward the place you've decided to look. And then lastly, as I've mentioned before, you also have to make a decision as to when to make that eye movement. So now, in very summary fashion, we can talk about the various brain areas involved. Quite a number of different areas are involved in the decision as to what the two stimuli are; these include much of the visual system, including LIP and several other areas. Then you have to make a decision as to which one to look at; again several areas are involved, notably among them the frontal eye fields, LIP, and also the medial eye fields. Then you also have to decide which one not to look at, and that's largely the same areas. Then you need, of course, a topographic arrangement to know where things are, and that you can find in many areas, including V1, V2, the frontal eye fields, and the colliculi, which are laid out in a nice topographic fashion. And lastly, LIP is important -- and I think to some degree also the medial eye fields, but I'm not sure about that -- for deciding when you should generate saccadic eye movements. Well, that's very nice, and it makes you realize that even though we never think about making eye movements, all this stuff is going on three times a second. It's amazing.
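Here is a toy sketch of the selection step just described: salience-driven excitation plus mutual inhibition settling on a single target. Every constant in it is made up; it is meant only to show the logic of excitatory and inhibitory competition, not any actual circuit in the lecture.

```python
import numpy as np

def select_target(drives, inhibition=0.6, steps=50, dt=0.1):
    # Each unit is driven by its target's salience and inhibited by
    # the activity of its rivals; after the dynamics settle, the most
    # active unit wins. All constants are arbitrary.
    a = np.zeros_like(drives, dtype=float)
    for _ in range(steps):
        lateral = inhibition * (a.sum() - a)  # inhibition from rivals
        a += dt * (drives - lateral - a)      # leaky integration
        a = np.clip(a, 0.0, None)
    return int(np.argmax(a))

# Target 0 slightly more salient (say, it appeared a bit earlier):
print(select_target(np.array([1.1, 1.0])))  # -> 0
```

One design note: with weak lateral inhibition, both units stay active and the output drifts toward a compromise, loosely analogous to the vector-average saccades; strengthening the inhibition pushes the network toward all-or-none choices.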
So now we're going to look at the various visual and oculomotor areas that play a role in this, OK, and we're going to create a summary diagram. Here we have the first element I showed you, the rate-code pathway that connects from the brain stem to the eye muscles and activates them. Then we have the superior colliculus. Now, the superior colliculus is under strong inhibitory control, all right, and there are several inhibitory circuits involved in that. Those include the substantia nigra, which sends inhibitory projections down to the colliculus and prevents the colliculus from generating an eye movement while it is under inhibition; because every time you look around, thousands and thousands of impressions impinge on the colliculus, and you only want one of those to actually get down to the deep layers of the colliculus to generate an eye movement. Now, the substantia nigra is in turn under the control of several neural structures, many of which go through the so-called basal ganglia. Now we can expand on this and look at the visual input. We've already talked about this a lot: I pointed out to you that you have many, many different types of ganglion cells, and the three major ones we talked about are the midget, the parasol, and the so-called W cells, all of which go to the cortex. The W cells also project directly to the colliculus. Then, from layer five in the visual cortex, you have the cells that project to the superior colliculus, to its intermediate layers, and that downflow is controlled predominantly by the parasol system, as we had described. Then we have all the other areas that V1 projects to -- V2, MT, V4, and so on -- some of them dominated by input from the parasol system and others getting input from both, and those in turn project to the parietal and temporal lobes, which have an important influence on the inhibitory circuits through the basal ganglia and the substantia nigra. Now let me finally come to the frontal lobe, the frontal eye fields and the medial eye fields, which have direct access to the brain stem and also connect to the superior colliculus. Now, this is still not the whole story, because in addition you have a bunch of interconnections among numerous cortical areas; they talk back and forth to each other, and that enables you to make these decisions as to where to look next. And if you think this is complete, you're still wrong, because all this circuitry also receives input from several other systems: the auditory system, which you're going to hear a lot about later in the course; the somatosensory system; the olfactory system; the smooth pursuit system, which we'll mention a little next time; the vestibular system; the accessory optic system, which we'll also talk about next time; and the vergence system. All of these feed into this already incredibly complex circuitry and are essential elements in your ability to move your eyes about. So I think you need to realize that something even as simple as moving your eyes about involves an incredibly complicated system: many structures, excitatory and inhibitory circuits, interconnections. It's almost dumbfounding. So that, then, is the essence of these connections.
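Before turning to the next topic, here is the wiring summary above rendered as a simple adjacency list. This is a drastically simplified sketch of the lecture's diagram; many areas and connections are omitted, and the grouping of the labels is my own.

```python
# A highly simplified adjacency list of the saccadic circuitry.
circuit = {
    "retina (midget/parasol/W)": ["V1", "superior colliculus (W cells)"],
    "V1 layer 5": ["superior colliculus"],
    "V1": ["V2", "MT", "V4"],
    "parietal/temporal cortex": ["basal ganglia"],
    "basal ganglia": ["substantia nigra"],
    "substantia nigra": ["superior colliculus (inhibitory)"],
    "frontal eye fields": ["brain stem", "superior colliculus"],
    "medial eye fields": ["brain stem", "superior colliculus"],
    "superior colliculus": ["brain stem"],
    "brain stem": ["eye muscles"],
}

for source, targets in circuit.items():
    print(f"{source} -> {', '.join(targets)}")
```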
What I want to turn to next -- I think we still have a little time before I summarize our results for today -- is something about dreaming and rapid eye movements. I think all of you know that every night when you sleep, you dream, and what was discovered more recently is that when you dream, you make rapid eye movements; that's why it's called REM sleep. So the question arose: why do we dream? Why do we have REM sleep? Now, the major influence on thinking about why we dream comes from the work of Sigmund Freud, who published a famous book, one of his really great works, called The Interpretation of Dreams; the original version, in German, was published in 1900, so 113 years ago. This was an incredibly influential book, central to the emergence of psychoanalysis, and it has been used extensively to interpret why we dream. In very summary fashion, the prime idea Freud expressed is that dreaming is wish fulfillment: in your dreams, you fulfill your wishes. That's why they're called wish-fulfillment dreams. Now, that's interesting, because one of the stories he tells in The Interpretation of Dreams is about a woman he was psychoanalyzing who one day, when she came to her session, said: you know, you told me the other day that dreams are wish fulfillments. I don't believe that. I had a dream last night, and it didn't go along with wish fulfillment. And Freud said, I want you to tell me what the dream was about. Well, my dream was that I went to a store to buy some food because I was going to give a dinner party, and when I got to the store it was closed, and as much as I wanted to, I couldn't buy the food for the dinner. Freud scratched his head -- you know I'm making that part up -- and said, that's right, you know what? You didn't want to give a dinner party; that's why you dreamt that. So that's famous psychoanalytic stuff: you can always twist things around so that they fit your hypotheses, and in this case Freud felt that even though she thought she'd had a contrary dream, it was indeed a wish-fulfillment dream. Well, that's one part of dreaming. The other part Freud emphasized is that so many of our wishes are actually unacceptable to ourselves, and therefore we dream them at night. In many other studies he constructed the idea that the human mind has three subdivisions: the id, the ego, and the superego -- you all know that, right? What it means is that when you dream at night, some of the unacceptable wishes in your id kind of seep through, because the superego is not in control while you're asleep. So that was his basic idea, and of course it was discovered many years later that whenever you dream, you make all kinds of eye movements; you don't make eye movements when you don't dream. Now, one of the problems with the Freudian theory is that animals also dream, and in fact, most dramatically, animals that hibernate do a lot of dreaming, and those animals also move their eyes about a lot, OK? That observation shifted the notion of why we have REM sleep, and so people thought about it. One observation that had been made is that when a person loses the ability to move the eyes -- say you somehow lose the ability to activate your eye muscles, and this can also be done in monkeys -- what happens in fairly short order is that the eye becomes ill-affected. What I mean by that is that the eye loses its perfect roundness to some degree, and even more notably, the cornea becomes uneven, becomes ridged, because you're not moving your eye.
So that discovery led to an alternate theory about why we dream, which is not nearly as romantic or intriguing as the Freudian theory: namely, that we have REM sleep at night -- and that it is especially prominent in animals that hibernate -- in order to keep the eyes in healthy condition, to keep the cornea nice and smooth and even. If you were not to move your eyes at all while you slept eight or ten hours, you would end up with an uneven cornea that would make it more difficult for you to see. The reasoning is supported by the fact that animals, which presumably don't have ids, egos, and superegos, also dream, as do animals that hibernate, and so there's a necessity to move your eyes while you are sleeping or hibernating. So that's the alternate theory, and we'll see one of these years whether it's correct. But I can tell you -- which may not be very nice -- that Freudian theory has taken quite a nosedive, and in fact today psychoanalysis has become largely dead for a number of reasons, so you don't have too many psychiatrists or psychoanalysts out there performing those psychoanalytic tasks they used to, in which a patient lies down on a couch and free-associates and sometimes even gets hypnotized to talk about some of his unconscious wishes and so on. So anyway, that's the story for today, and that brings me to the end of eye movement control. I hope you appreciate the fact that even a seemingly simple system like eye movements is unbelievably complex when it comes to the brain controlling it. Next time we're going to talk about motion, and toward the end of that I'll come back a little bit and talk about yet another aspect of eye movement control. Does anybody have any questions? Yes, please. AUDIENCE: Can you just explain again quickly why the bicuculline injection in V1 causes interference? PROFESSOR: OK, that's a very good question. Why does bicuculline in V1 cause interference? Because it disrupts the neurons' ability to analyze the visual scene. You mess up the center-surround antagonism, you mess up the orientation selectivity of these cells, or their direction selectivity, so they're no longer able to analyze the visual scene in the normal fashion. You see, V1 is really quite far removed from the generation of a motor response. If you inject GABA antagonists into areas that are closely linked to the execution of motor acts, then the effect is seen because you generate a motor response -- or, if it's muscimol, you inhibit the motor response. That's why you see this in the frontal eye fields but not in V1: V1 is predominantly a system that analyzes visual percepts, as are V2 and V4 and all those other higher cortical visual areas. Any further questions? All right, very good. I will see you then next Monday, and I think you'll find we'll have an interesting session. We're going to talk about movement, and not only regular movement; we're also going to talk about confusing kinds of movement, especially the very important case of apparent motion. And just to say one more word here: nowadays, when you go home and watch television, almost all the motion you see on the TV is apparent motion, not real motion. Keep thinking about that, and I'll tell you about it next time. AUDIENCE: Monday is actually a holiday. PROFESSOR: Oh, yeah. Sorry, it's not Monday. Our next session is next Wednesday. Sorry about that.
PROFESSOR: Today we are going to discuss color vision and adaptation. About two-thirds of it is going to be on color vision and one-third on adaptation. Now, I'm going to have several demonstrations on the screen here for you, and I would like to forewarn you that for some reason they still haven't fixed the light bulb in this projector. You see there's a bluish tinge to this -- speaking of color vision -- and actually, it's supposed to be gray. There's some loss of balance in this bulb, and they promised to replace it some 10 days ago and it still hasn't happened. So once we come to the demonstrations, they're not going to be perfect. But they will also be available on the internet, as well as on Stellar, so you can look at them there, and maybe you'll get a better picture of them there than here. But we'll do the best we can. So anyway, let me first give you a list of the things we are going to discuss and the questions we're going to pose. First of all, we're going to ask: what are the basic facts and laws of color vision? Now, one of the nice things about color vision is that there are a number of laws; it's a very basic phenomenon, and in many ways very close to physics. The second question is: what are the major theories of color vision that we are going to discuss? Then we are going to examine how color is processed in the retina and the lateral geniculate nucleus. Then we're going to move on and examine what happens in the cortex. Then we are going to discuss the nature of colorblindness, which I think will be of interest to most of you, because colorblindness is, unfortunately, not that uncommon among humans. Then we're going to look at how adaptation is achieved in the visual system; that's when we switch from color to adaptation. And lastly we are going to ask the question: what are afterimages? How are they produced? And what are their effects? So let's begin, then, with color vision. The first thing I would like to say about this is that, as so often happens in the course of history, there had been great misconceptions about color. One of the great misconceptions was that people thought that white light is the pure light. That was exemplified in the fact that before the 20th century, for example, most nuns were required to wear a white outfit; they were asked to wear the white outfit because it meant that they were pure. So then, in the 1600s, one of the greatest geniuses of all time came along: Newton. At that time -- and this is just an interesting coincidence -- it happened that the art of making chandeliers had emerged. The chandeliers in those days consisted of little pieces of glass cut in various ways, and people noticed, and so especially did Newton, that when you looked at these chandeliers you saw all sorts of colors there. And so Newton said, my god, how can that be? What's going on? And so he began to analyze color, which also came about for him because that's when, in addition to the chandeliers, they came up with prisms. So what Sir Isaac Newton did -- let me skip this for a minute and I'll come back to it -- was to put a little opening in a screen and let the light come through from the sun. White light.
And then he put that beam of light through a prism, and he discovered that he got an image much like what you see with these chandeliers, namely all sorts of colors projecting out of the prism. And then he performed yet another little experiment: he took a second prism, the same kind of prism, and passed through it just one of the separated colors, the red, and there was no further separation. That was a remarkable discovery on his part. And he became interested not only in the physics of it; he also became very interested in how we organize our color perceptions, and he was the first person to come up with what I will talk a lot about, the so-called color circle. So this is what Sir Isaac Newton came up with. He established that there is a huge range of frequencies, and a very narrow section of it, right here, is the one that falls into the visible range. If you break that up like this and enlarge it, much like the colors of a rainbow, you see all of these colors. And so the conclusion he came to was quite remarkable. He was at that time just 29 years old; that was in 1672. He concluded that white light is a mixture of all the colors: it's white because it's an equal mixture of the different wavelengths here. So rather than being pure, white light is a conglomeration of all the wavelengths in the visible range that the eye can process. That was so stunning at the time that he delayed publication of it for more than 30 years, and when he did publish it, it was extensively debated even 30 years later. One of the people who debated it a lot was the famous German poet Goethe; you probably all know who he is. And Goethe said: Newton is a charlatan, he just made this up, it can't possibly be so, white light is pure. So what he did, he took a prism just like Newton did, but instead of passing the light through it, he looked into the prism toward the light, and he didn't see any colors, and he said Newton is all full of junk. And so he asked his associate at the time, Schopenhauer, who is also a very famous philosopher, and said, why don't you do some experiments? Let us prove that Newton is all wrong. So Schopenhauer did the experiments right and said, oh my god, Newton is right. What am I going to do? How can I tell Goethe, my boss, that he's wrong and Newton is right? So that became quite a thorn in his side at the time. But of course eventually we all came to recognize that this is indeed the situation, and that white light is a mixture of all colors. So let me now go back for a minute to some of the basic facts, which I will elaborate on as we proceed. First of all, when we talk about color, there are all kinds of systematic things that we are going to discuss, and you're going to become educated about the processing of color as a result, because we know a lot of very important basic facts about it. It's solid science. So when we talk about color, we typically make a distinction between hue, brightness, and saturation. Hue means what the color is: is it red, green, or blue? Brightness is how intense the impression is. And saturation reflects the fact that every color can be kind of washed out or very sharp: do you see a bright, vivid red, or a really washed-out red that's barely different from the background? This is very important.
We have to make a clear distinction between the psychological and the physical attributes of color. So what do we mean by that? When we talk about color, that's an impression we have, our own personal psychological experience. The scientific, physical way to look at it is to call it wavelength. The same thing is true for luminance and brightness. Now, before I go on to this, let me back up for a minute and say one more thing about it, because it's an interesting way to remember it and also relevant to what you are going to hear in the second half of this course, which is going to be on audition. Here's a classic question people have often been asked, especially when you were still in grammar school or maybe even in high school: when a tree falls in the forest and there's nobody around, does it make a sound? And so people debate this, and debate this, and debate this. Well, from my point of view, there's no question there at all. When a tree falls in the forest, with all those cracks and everything, there is no sound, because sound is a psychological attribute that we hear and interpret. There is, of course, the production of pressure waves at various frequencies as a result of the fall; that's out there in the air. But you need a human being to turn those frequencies into what we call sound. So now the next thing I would like to briefly approach -- I'm going to talk about it in much more detail in a minute -- is that it's been established by now, though initially there was quite a debate, and I'll come back to it, that we have three major kinds of cone receptors in humans and in many primates; in some animals and some birds there are actually four. These three are the short, middle, and long wavelength cones. Of course, we can name them for short: by short we mean blue, by medium we mean green, and by long we mean red. And then we have the rods. The numbers here -- one of them is misspelled on the slide -- are the nanometers at which each of them peaks, and we'll come back to that in a minute. But before I go on with that, let's imagine for a minute -- and we can skip this -- suppose that you are the emperor of the universe at the beginning of time, 1,500 million years ago, and you have decided that you're going to create animals. Once you've created animals, you have decided that they're going to have to see things, so they have to have an eye with which to see. And then you have to decide, well, if that's the case, color is very important. You looked around in the world and said, oh, all these beautiful colors. How are we going to have these animals and these humans see all the colors? And then you said, well, there are hundreds of colors, so what are we going to do? Are we going to create a receptor for every one of these colors and put them all in the eye? And you said, oh dear, that's a problem, because then we would need a gigantic eye. And so the question became, of course, how else can you get around this? So here's the idea: here you have sensitivity, and here you have wavelength, and the idea was that you could create hundreds of very sharply tuned photoreceptors, like that, so you could get all the colors. Well, that did not seem to be a very good way of doing things.
And so, coming closer to the present time -- still a long way off from the present, actually -- people began to hypothesize: what can we do to minimize the number of receptors with different color sensitivities and still be able to see well? And so one theory came up, before anybody knew anything about the eye and the three cone photoreceptors: Young and Helmholtz came up with the idea that if you had just three types of cones that are broadly tuned, they could take care of most of the ability to see various colors. And so that became a very interesting, very, very powerful theory. And actually, speaking of theories and models, it is still, in my mind, probably the greatest model or theory that has ever been developed about how the brain works, because it subsequently did turn out that indeed there are three such receptors, broadly tuned, that can provide all that information for you. And that, as you will see, became a huge issue. Sorry, let me go back for a minute and just reiterate: these are the nanometers for the three types of cones, and at the end here we have the nanometers for the rods. Then, as a result of analyzing all this, people came up with all sorts of rules and laws, and that's one of the nice things about color vision. One set of these is called Grassmann's laws. He said that every color has a complementary which, when mixed properly, yields gray -- I will explain this to you in just a few minutes -- and that non-complementary colors yield intermediates. Just keep that in your head for me until I fully explain it. The other law, which I'm not going to talk too much about, is that the luminance of a mixture of differently colored lights is equal to the sum of the luminances of its components. So these are very, very basic laws. So here we go again and move on. Here is what is called the CIE chromaticity diagram, initially devised in 1931. Let me explain to you how this came about. It became an international undertaking, and it came about because it was highly desirable to be able to communicate your particular color desire or experience throughout the world. So for example, if you had bought a particular hat, say a blue hat of some sort, and then said, now I would like to get a dress that matches it, how can I do that? Well, what you can do now, as a result of this chromaticity diagram -- there's a scale here; you can see the vertical values and the horizontal values going from 0 to close to 100 -- is specify a particular color, say you want this color here, simply by giving its coordinates. Then you can send those numbers to China, or to, I don't know, South Africa or something, to a particular company, and say I want to have a dress with that color. And because this is international, they are able to produce that color exactly as you specified it on the diagram. So that was a very powerful undertaking. And the arrangement is such that the colors become more saturated as you go from the center outward -- going, say, from here to here. And the center of this, which is at about 33, 33 on the chromaticity diagram, is white, which is not that obvious here, partly because of the colors of the background. So that is the famous 1931 chromaticity diagram.
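The coordinates also make color mixing easy to compute. As a sketch: a mixture of two lights sits between their chromaticity coordinates, weighted by their luminances (the center-of-gravity rule, stated here approximately). The (x, y) values below use the standard 0-to-1 scale (the slide's axes are the same numbers times 100) and are approximate primaries chosen for illustration, with equal luminances assumed.

```python
import numpy as np

def mix(xy1, lum1, xy2, lum2):
    # Chromaticity of an additive mixture of two lights: roughly the
    # luminance-weighted average of their (x, y) coordinates.
    xy1, xy2 = np.asarray(xy1), np.asarray(xy2)
    return (lum1 * xy1 + lum2 * xy2) / (lum1 + lum2)

red = (0.64, 0.33)    # approximate red primary
green = (0.30, 0.60)  # approximate green primary

# An equal-luminance mix of red and green lands in the yellow region:
print(mix(red, 1.0, green, 1.0))  # about (0.47, 0.47)
```

The same rule shows Grassmann's first law: pick two lights whose coordinates straddle the white point, and their mixture lands on white.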
And what one can do with this is superimpose on it some facts about human, or primate, color vision abilities. The person who started this, as I've mentioned, was Newton: he came up with the famous color circle. Now, the color circle is described here, and I'll show it to you head-on in just a minute. This is green, this is red, this is yellow, this is blue. And of course, the question is, why do we have them set up in this fashion? We'll explain that in just a minute. So here is the color circle, the two-dimensional color circle, where things are pretty much equiluminant across it. Now, if you go from the center out, you increase saturation, as I have already said, and as you go around, you change hue. So those are the basic attributes. Now, to anticipate the issues here, let me tell you this: this is yellow, this is blue, this is green, and this is red. These are called the cardinal axes; as long as a line goes through the center, that's a cardinal axis, and these are the ones which are best known, the red/green and the blue/yellow. And the fascinating fact about this is that it explains a number of very interesting facts about our ability to see colors. So let me tell you this: if you mix yellow and blue, as the law we talked about says, in equal luminances, you get what's in the center here. You get white. Furthermore, and because of this, there is no such experience in our existence as yellowish blue, and there is no such thing in our minds, as far as color is concerned, as reddish green. On the other hand, if you don't go across the center -- say you have yellow here and green here -- there is yellowish green. There is yellowish red, there is bluish red, and there is bluish green. So we can process and see those in-between colors, but we cannot do that along the cardinal axes. So this incredible color circle essentially explains the very essence of how we can see color. Now, there's one more factor here: we also have to take luminance values into account. And so for some purposes people turn this color circle into a three-dimensional entity, which is shown here. Here's the color circle, and the third dimension, going up and down through the center, runs from white to black, if you will: here things are brighter and here things are darker. So that is sort of a complete arrangement for your color impressions. But we are going to concentrate on the color circle itself as we move along. So now let's turn to the outgrowth of this, starting with Newton's color circle, which has been somewhat modified in the manner that I've just shown you. As a result of all this, a number of competing theories emerged, and I'm going to talk about two of them. The first one is the famous Young-Helmholtz theory. Young initially, and Helmholtz building on it many years later, came up with the idea that you could experience colors by having just three types of cones that are broadly tuned. So the theory said: there are three types of broadly tuned color receptors, and the color experience is the product of their relative degrees of activation. Now, that's a fantastic theory, but there's a big problem with it. The big problem is that it doesn't explain Grassmann's laws. Remember what Grassmann's law is? That if you mix things along the cardinal axes, you get white, and you only get other colors when you mix them not along the axes.
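The color circle arithmetic can be put in miniature code form. Treat hue as an angle and saturation as a radius: complementaries, 180 degrees apart, cancel to the white center, while non-complementaries combine to an intermediate hue. The axis convention below (red/green at 0/180 degrees, yellow/blue at 90/270) is just an assumption for the sketch.

```python
import numpy as np

def hue_point(angle_deg, saturation=1.0):
    # A color on the color circle: angle = hue, radius = saturation.
    a = np.radians(angle_deg)
    return saturation * np.array([np.cos(a), np.sin(a)])

# Yellow (90 deg) plus blue (270 deg) cancel to the center -- white;
# hence there is no "yellowish blue":
print(hue_point(90) + hue_point(270))  # ~[0, 0]

# Yellow plus red, not complementary, give an intermediate hue:
mixture = hue_point(90) + hue_point(0)
print(np.degrees(np.arctan2(mixture[1], mixture[0])))  # 45 deg
```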
So that became a problem. And because of that, another famous person, Hering, came up with an alternative theory. His theory said that color opponency is based on the observation that red and green, as well as blue and yellow, are mutually exclusive, just as I had said. The nervous system probably treats red/green and blue/yellow as antagonistic pairs, with a third pair being black and white. That's where the third dimension comes in. So therefore he argued that we need something like color opponency to be able to see colors right. Now, the interesting thing about this is that he became very famous for coming up with this incredible theory. But then if you go back in history, you find that Leonardo da Vinci had the same idea many, many, many years before that. And this is from his writings, in a very poor translation. It says, "Of different colors equally perfect, that will appear most excellent, which is seen near its direct contrary, blue near yellow, green near red, because each color is seen, when opposed to its contrary, than any other similar to it." It's not really written in English, but you get the idea. So we have two major theories. And then numerous experiments subsequently emerged, especially once it became possible to record the neural activity of cells, to determine to what degree these theories are correct. And of course, the first correct aspect of both theories was that indeed, we have three types of cones that are selective to red, green, and blue, and that they are broadly tuned. So, to now understand better how this really happens in the nervous system, let's take a look at the basic physiology of color processing. I showed you this slide once before. I pointed out to you that in contrast with the red and green cones, blue cones are much less numerous: only one out of eight is blue. And furthermore, if you look at the retinal surface in the foveal area, there appear to be very few blue ones in the fovea itself. So the blue cones are less numerous, and consequently it became a puzzle how they contribute to color vision. Now, to exemplify this further, let me show you a slide here. Here what we vary is the spatial frequency. And what you can see is-- I think most of you probably don't see anything here, but most of you probably still see this. This is the same spatial frequency as this. But this activates mostly the blue cones, and this activates your red cones. And you can see that your acuity is much, much lower when you only have your blue cones available. And that's because only one out of eight cones in the retina is blue. So there's this very, very clear distinction. So now let's talk about the photoreceptors. Here we have an absorption spectrum, or I should say a series of absorption spectra, for the four kinds of photoreceptors. The fourth one is your rods. So what you see here is that each of them is fairly broadly tuned. Here we have nanometers. And I'm sure all of you know this already, that 1 nanometer is a billionth of a meter. So we're talking about incredibly, incredibly short wavelengths. The important thing to remember here is that each of these cones is fairly broadly tuned. And so consequently, any light that comes into the eye tends to activate all of the cones, unless it's at the very extremes. And so indeed, as Young and Helmholtz had proposed, somehow we have to derive our color experience from the relative amount of activity from these different cone types.
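That "relative amount of activity" idea is easy to demonstrate. Here is a rough Python sketch of mine: the Gaussian curves are crude stand-ins for the real absorption spectra, and the shared bandwidth is an invented simplification, though the peak wavelengths are approximately right for human S, M, and L cones.

```python
import math

# Rough Gaussian stand-ins for the cone absorption spectra. Peak
# wavelengths are approximately right; the common bandwidth is a
# made-up simplification.
CONES = {"S (blue)": 420.0, "M (green)": 530.0, "L (red)": 560.0}
BANDWIDTH = 60.0  # nm, illustrative

def cone_responses(wavelength_nm):
    """Relative activation of each broadly tuned cone class."""
    return {name: math.exp(-((wavelength_nm - peak) / BANDWIDTH) ** 2)
            for name, peak in CONES.items()}

# A 575 nm light (seen as yellow) drives L and M strongly and S barely:
for name, r in cone_responses(575).items():
    print(f"{name}: {r:.2f}")
```

Any single wavelength produces its own triplet of relative responses, so three broadly tuned receptor types are enough to tell the hues apart-- exactly the Young-Helmholtz point.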
But then, as I've mentioned to you, Hering felt that this was not sufficient to explain our color abilities. And so what happened then, especially much more recently, is that people examined more closely what the center-surround organization of the different midget cells in the retina is. And if you remember, I told you that initially the prime theory was that, in the central retina, the center is comprised of just a single cone and the surround of its color-opponent cone type. But then, when people began to study this very carefully, using a combination of recordings and anatomy, they found that the surround is actually not specific to one type of cone input. It's mixed. And so then people began to model that. And they found that this arrangement is almost as good as this arrangement. And this is the truth, actually. That's how it is. And so then, just to remind you, the parasol cells have a mixture of these inputs, both in the center and the surround. And as we had discussed, the parasol system cannot tell you about colors. This I mentioned to you before. The midget system gives a more sustained response than the parasol system. Now, I also showed you this diagram. And we established that the green and the red cones each give rise to an ON and OFF system at the level of the bipolar cells, and then give rise to the ON and OFF ganglion cells-- the red and green ON and OFF ganglion cells. Now, the blue system is more complicated, because if you had had four different kinds of cones, meaning a blue one and a yellow one, then you probably would have had a similar arrangement. But nature never created a yellow cone, presumably because it was not necessary: an impression of yellow can be created from a balanced mix of the red and green cone signals. So the argument was therefore that you must have blue ON and yellow ON ganglion cells. But whether this is really the case is still to some degree debated. People have done a lot of recordings. And much of this was done not only in the retina, but also in your lateral geniculate nucleus. So the question was, let's just find out what the color tuning of cells in the lateral geniculate nucleus is, to understand this. And to do this, we went back to the color circle and presented stimuli along the color circle through the receptive fields of these cells to see how they responded. And here's an example of a so-called blue ON cell. It shows here how the cell is tuned: sharply tuned, mostly to 90 degrees. The yellow cell is the opposite. And there's a green OFF cell here and a green ON cell here. So what happens when you take a large sample of these, a huge sample of them, is that in the lateral geniculate nucleus, you don't get any cells that are at the diagonals. All the cells fall into these four major categories along the cardinal axes. And if you take a big huge sample, you come up with the following summary-- you have red ON cells and red OFF cells. You have green ON cells and green OFF cells. And at least some people claim that you only have blue ONs and yellow ONs-- you don't have blue OFFs and yellow OFFs. That is still under debate. But certainly, if you do extensive recordings, any blue OFFs and yellow OFFs you find are extremely rare, if they exist at all.
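A hedged sketch of what such a classification amounts to: each recorded cell gets a preferred direction on the color circle, and in the LGN those preferred directions cluster at the four cardinal directions. The angle conventions and the tolerance below are my own illustrative choices, not values from the lecture.

```python
def classify_lgn_cell(preferred_angle_deg, tolerance=15):
    """Bin a cell by its preferred direction on the color circle.

    Convention assumed here: 0 = red, 90 = blue, 180 = green,
    270 = yellow. Recordings find LGN cells clustered at these
    cardinal directions; intermediate (diagonal) preferences are
    essentially absent before cortex.
    """
    cardinals = {0: "red", 90: "blue", 180: "green", 270: "yellow"}
    for angle, label in cardinals.items():
        d = abs(preferred_angle_deg - angle) % 360  # circular distance
        if min(d, 360 - d) <= tolerance:
            return label
    return "intermediate -- rare or absent in the LGN"

print(classify_lgn_cell(92))   # blue
print(classify_lgn_cell(45))   # intermediate -- rare or absent in the LGN
```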
So now, to try to gain yet further understanding of the role various areas play in color vision, one can make lesions. And I've already told you what happens when you make either a parvocellular or a magnocellular lesion: that blocks the midget or the parasol system, respectively. And the midget system is essential for color vision. And then I showed you this. Here is the geniculate. If you take out this area, you block the midget system. And if you take out this area, you block the parasol system. So if you do that-- here is an overall view of the monkey brain. Here's area V1. Here's V2. And here's area V4, which, as I've mentioned to you before, had been believed to play a central role in color. So then it becomes important to see just what happens when you make lesions in these areas. In this case, you can make a lesion in V4. And then we can examine color discrimination. I told you that in a monkey, what you do is use an oddity task. After fixation, you can present the odd stimulus either in the intact parts of the visual field or in those that had been lesioned. This is at high contrast. And if you do that experiment-- I showed you part of these data before. I showed you that after a lesion of the parvocellular geniculate, which blocks the midget system, you totally lose the ability to discriminate even these highly saturated colors-- red, green, and blue. But no deficit arises when you make a lesion in the magnocellular portion of the geniculate. And then the big question became what happens in V4, since that had for decades been declared to be a color area. What was surprising was that after a V4 lesion, there was only a small deficit in color for these high-contrast stimuli. So people had to go on and do a more careful, detailed study to see what happens if you use less saturated colors. Remember that the degree of saturation is one of the important factors in analyzing color. Here we have an example of a very low saturation. And here we have a somewhat higher saturation. And you can vary that systematically and see what happens after a V4 lesion and after other kinds of lesions. And here's an example of what happens with a V4 lesion. And this is what happens with an MT lesion. It shows no deficit at all with an MT lesion, indicating that MT does not play a crucial role in analyzing color. We do get a significant deficit here, but it's a small one. So even at low saturations, the monkey can still do colors reasonably well. So it's not as if V4 is the color area. There apparently are several areas in the brain that can process color. And V4 does that in addition to performing several other analyses that we will discuss a couple of sessions from now. So now we're going to turn to another very interesting phenomenon, which is called isoluminance. What is isoluminance? Isoluminance is the presentation of different colors that have the same luminance. So I'm going to give you a few examples of this. If you look at this-- how many of you can read those words there? Pretty tough, isn't it? This can be set up in such a way that it's even worse; because of the light bulb in there, it's not really perfect. But just to give you a sense of it-- this one, obviously, is a lot easier to read. And that's because the other one is close to isoluminance, at which our ability to see objects is much impeded. Not eliminated, but impeded. So what can you do to study this systematically? What you can do is test various capabilities.
Stereopsis, motion parallax-- just motion-- and texture. And if you do that, and you vary the red/green luminance ratio, you can see that there's a dramatic drop-off very close to a luminance ratio of one to one. So indeed, our ability to process information in the absence of luminance information is greatly compromised. So then a question came up-- I mean, we're talking here about such things as motion perception, texture perception, and stereopsis. And as we had discussed before, we have already established that texture and stereopsis are processed to a large extent by the midget system, and that motion is processed to a large extent by the parasol system. But all three of these capabilities are compromised at isoluminance. Now, a series of experiments had been carried out in which people argued that when you present stimuli at isoluminance, you render the parasol system unresponsive, because it gets equal input to the center and the surround from the red and green cones. Therefore, they concluded, if there's a deficit in performance like this, it must reflect the fact that the parasol system plays an important role in the analysis. But as I told you before, stereopsis is processed predominantly by the midget system, and so is texture. So that raised quite a problem. And so people began to run experiments in which they recorded from the parasol system to see what actually happens to unit responses at isoluminance. And that resulted in quite a surprise. Here is an example of a magnocellular cell, meaning a cell that gets input from the parasol system of the retina. You alternate between red and green here repeatedly, collect the data, and vary the red/green ratio. And you can see that throughout the whole range, the cell continues to respond. So that became quite a puzzle: we were not able to silence the magnocellular system at isoluminance. So the question is, how come? Well, the answer, as you have seen before, is that the parasol system is extremely sensitive. Each cell gets input, even in the center of its receptive field, from many-- I should maybe say several-- different cones. And because of that, there's much more information and excitation coming into those cells than into those of the midget system, most of which get input from only a single cone in the center. So that being the case, we can move on and ask what happens if you do the same kind of experiment in area MT. Now, who remembers? Area MT gets most of its input from which system? Good. Mostly from the parasol system, going through the magnocellular layers. So now you can record in area MT and do the same kind of experiment I just described for the retina and the lateral geniculate nucleus. What you do here is you have a monkey. The monkey fixates. And you move a bar of light across the receptive field, back and forth, which is isoluminant-- red/green in this case. And then you see how well the cell responds. And then, for comparison, you can use a luminance grating, bright and dark on the background, to which we know the parasol cells and the cells in area MT respond vigorously. So now we are going to compare the difference between these two conditions, meaning a luminance grating as opposed to an isoluminant color grating. If you do that experiment, you're in for a surprise. Here's an example of one cell in which there's a much more vigorous response to luminance.
Here are the various luminance contrasts. And here is a chrominance one. And the cell doesn't respond as well here, but it still responds reasonably well. Then you take another cell and you get the opposite: this particular cell, still in area MT, responds a bit more vigorously to chrominance than to luminance. And if you add this all up and record from many, many cells, you find that the cells in area MT-- just like the parasol cells in the retina and their counterparts in the lateral geniculate nucleus-- respond quite well at isoluminance. So area MT is one that responds surprisingly well to anything out there that results in a change, whether the change is produced by virtue of chrominance or by virtue of luminance. And that, in fact, is one of the very important attributes of the parasol system-- namely, and these cells are especially numerous in the periphery, to be able to detect just about anything that happens: motion, flicker, just the onset of a single stimulus, whatever. That system is very sensitive and can tell that something has happened there. That's what it is very good for. It's not that good, obviously, for seeing very, very fine detail. But it's very sensitive, and it's very, very good at detecting motion and appearances. So now we're going to move on and talk about a topic that I'm sure many of you have an interest in. And that has to do with deficiencies in color vision, often referred to as color blindness. The first fact is the incidence of color deficits in humans: 8 out of 100 males among Caucasians, 5 in 100 among Asians, and 3 in 100 among Africans. In females, it's much, much less-- ten times less frequent. But still, overall, that's quite a number of people who have some sort of color deficiency. So now that we know that, we can ask the next question-- what kinds of color deficits can we identify? The types are given fancy names: protanopes, deuteranopes, and tritanopes. And that simply refers to the fact that protanopes lack the long-wavelength cones, which are of course the red cones. Deuteranopes lack the medium-wavelength cones, the green ones. And tritanopes lack the short-wavelength cones, which are the blue ones. So those are the basic types of deficits. Now, some people have a combination of these. Some people have no color vision at all, but that's very rare. Quite common are these three types: you somehow don't have one particular kind of cone, or you have very few of them, or you have them but they don't function right. So now, how do we establish our ability to see colors and whether we have normal color vision? Now, that's very interesting. A number of tests have been developed. The most famous of those, the oldest one-- let me go back, sorry-- are the so-called Ishihara plates. The next one is the Farnsworth-Munsell hue test. And the third one I was going to tell you about is a dynamic computer test. So let's look at the Ishihara plates. If you look at that, how many of you can see what is written there? What is it? AUDIENCE: Eight. PROFESSOR: Eight. Very good. Anybody who doesn't see it? OK, you don't see that. We'll get back to you in a minute. So now another test is a dynamic one. The reason for using a dynamic test is that the so-called isoluminant point is not the same from individual to individual-- expect a lot of variation from person to person. And so this test is the dynamic one.
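Before we get to the dynamic test itself, the three deficit types just listed can be tied back to the cone sketch from earlier: a dichromat can be crudely modeled by deleting one cone class. A hedged Python sketch follows-- it reuses the hypothetical cone_responses() function defined above, and while the deficit-to-cone mapping is the standard one, everything else is illustrative.

```python
# Which cone class each dichromat type is missing (standard mapping).
MISSING = {"protanope": "L (red)",
           "deuteranope": "M (green)",
           "tritanope": "S (blue)"}

def dichromat_responses(wavelength_nm, deficit):
    """Crude dichromat model: zero out the missing cone class."""
    responses = cone_responses(wavelength_nm)  # from the earlier sketch
    responses[MISSING[deficit]] = 0.0
    return responses

# With only two cone classes left, different wavelengths can now yield
# the same response ratio, so those hues become indistinguishable.
print(dichromat_responses(575, "protanope"))
```

That loss of one dimension of comparison is exactly what the plates and hue tests described here are designed to detect.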
In this dynamic test, the background, as you can see here, consists of gray patches at different luminance levels. And when the computer starts running, these keep exchanging with one another in randomized fashion. That's the dynamic aspect. And then you have a central area here. Everybody can read this, right? What's the word? AUDIENCE: Lite. PROFESSOR: Lite. Very good. So now, instead of presenting these letters at a high overall brightness, we can present them in color. So I'm now going to show you an easy test. Can anybody read this? Anybody who cannot read it? Can you read it? AUDIENCE: Ish. PROFESSOR: Ish. So what's the word? AUDIENCE: MIT. PROFESSOR: MIT. What's that? So now I'm going to make it more difficult. You guys ready? What is this one? AUDIENCE: Fit. PROFESSOR: Are you having a-- AUDIENCE: Yeah, it's a little harder. PROFESSOR: You're having a fit, huh? Can you read that one? AUDIENCE: No, I can't. PROFESSOR: No. So now it seems we do have one person here who has perhaps a mild color deficiency. And so now the question comes up: if any of you want to do this, what can you do to test yourself? Well, let me tell you about that. There is the so-called Farnsworth-Munsell color test. So what you want to do here is-- can you read this down here? Get on Google and just type in Farnsworth-Munsell color test online. If you type that in, all kinds of things come up. Click on the topmost one. And then what you see is a set of colors. Actually, there are four sets, and each of these has 20. I just drew a few of them in. Each of these can be moved, and your task is to arrange them in order, going from this color to that color. So you do that for all four of them. And after you've done that, you can click at the bottom, and it will tell you what your score is for each of these. And so if your color vision is very good, it gives you a set of histograms. If the histogram is very, very low, then you're good. And if it's high, then you're not. What you can actually do is, when this first comes on, these are in random order. Your task is to put them in order. But at the bottom here, it says score. So if you click on score, it will give you the histograms uncorrected-- and they're all going to be high. Then you do the work. And then you click on it again and see how good your color vision is based on that. It's a bit time-consuming. But if you are interested in getting a sense of just how good your color vision is, this is a rather good test, which is readily available on the internet. Does anybody have any questions about this portion? Good. So now we're going to move on and spend the remainder of our time talking about adaptation, because that is closely relevant, as we shall see, to color vision as well. So I'm going to talk about adaptation. First of all, again, we come up with a number of basic facts. Now let me interject at this point and tell you that all the material I'm talking about today will be posted on Stellar. What's on Stellar now is not the updated version, but what I'm talking about today should be on Stellar by tomorrow. So, basic facts. We talk about overall levels of illumination. That's what's so remarkable about the visual system-- it's actually unbelievable: it operates over 10 log units of illumination overall. But the light reflected from objects varies over a much smaller range-- you don't want to look directly at light sources-- only about 20-fold. So now the question is, how do we handle this?
Well, it turns out that the pupil, which does play a role in this, can only adjust over a range of about 16 to 1. So that's a far cry from 10 log units. And so most of the adaptation that takes place occurs in your photoreceptors. So here I'm saying this again-- most light adaptation takes place in the photoreceptors. Now, how does it take place? Well, the way it takes place is that-- I mentioned this to you before-- the photoreceptor molecules, like rhodopsin in the rods, come in two basic forms. You don't need to know the chemistry of it; simply remember that they come in two forms. We can call them bleached and unbleached. Some people call them open and closed, but let's call them bleached and unbleached. This is a very dynamic process. At any level of illumination, a certain percentage of the molecules in each receptor is bleached and a certain percentage is not. And the brighter the illumination, the more are bleached. By dynamic, I mean that the individual molecules constantly keep changing state; what matters is the overall ratio between the bleached and the unbleached. So that means that any increase in the rate at which quanta are delivered to the eye results in a proportional decrease in the number of pigment molecules available to absorb those quanta, because they are bleached. Now, this arrangement happens to be extremely clever. And this is reflected in the fact that the retinal ganglion cells are sensitive to local contrast differences. Remember, I told you there's a center-surround organization. This is one of the prime reasons we have that. The overwhelming majority, something like 95% of the retinal ganglion cells, respond to contrast differences, not to absolute levels of illumination. And that's why we talk about contrast so often. If you remember the contrast formula: you take the luminance of the stimulus and the luminance of the background, subtract one from the other, divide by the sum of the two, and multiply by 100. I showed that to you before. So now, here we are. Talking about light and dark adaptation, this is the basic outline of the retinal connections that we have presented several times before. If you are light adapted, you essentially have non-functional rods, so I took them off here. But when it gets dark, the opposite happens, and the rods become active. But they all feed into the same ganglion cells. And that's why at night the receptive fields are bigger, and why you don't see color at night-- because this is what the picture is. Now let's ask the question: how do the neurons-- the retinal ganglion cells-- fire at different levels of illumination? That's quite an interesting story, and a very straightforward one. Here we have a cell that had been adapted to these different levels of background illumination. At minus 5, your rods are functional. And what is important to see here is that as you change the background level quite dramatically, the eye adapts. And I should say that it's predominantly the photoreceptors that do so. What they look at are local differences. And so at each level the cell sees afresh the contrast that is created, rather than looking at absolute illumination levels. And that's what you want, of course. You want to be able to drive well at night. You want to be able to drive well in the daytime.
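Both pieces of that story fit in a few lines of Python. The contrast function below is exactly the formula just stated; the bleaching function is a hedged sketch of the standard steady-state result (bleaching rate grows with intensity while regeneration is roughly constant, so the bleached fraction saturates), and the half-bleach constant is an arbitrary unit of my choosing.

```python
def contrast(l_stimulus, l_background):
    """The lecture's formula: difference over sum, times 100."""
    return 100 * (l_stimulus - l_background) / (l_stimulus + l_background)

def bleached_fraction(intensity, half_bleach_intensity=1.0):
    """Steady-state fraction of bleached pigment at a given light level."""
    return intensity / (intensity + half_bleach_intensity)

# The same stimulus-to-background ratio gives the same contrast at any
# absolute level -- which is why an adapted eye sees the scene the same
# way at dusk as at noon.
print(contrast(150.0, 50.0), contrast(15.0, 5.0))     # both 50.0
print(bleached_fraction(0.1), bleached_fraction(10))  # dim vs bright light
```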
And by having this system, you look predominantly at contrast differences rather than at absolute levels of illumination. So that's the arrangement here. And now what we are going to do is move on and talk about the so-called after-effects of adaptation. So let me tell you how this was initially done. People asked the question, what happens if I stabilize an image on your retina for a period of time? If that previous set of data I've shown you is correct, if I present something to the retina and leave it there, pretty soon you won't see anything. A famous series of experiments was done, by now many, many years ago-- a very clever experiment-- in which they had subjects lie down. And they put a contact lens on the eye, and on the contact lens they mounted a miniature projector, which meant that when you turned that projector light on, it went to a fixed position on the retinal surface. Why was this necessary? Well, the reason is that it was discovered that your eye is actually not stable-- on purpose. Your eye has a so-called eye tremor. And of course, you move your eyes all the time. So this procedure of having a contact lens with a projector attached to it got rid of the eye tremor, in effect. And when they did that, they found that, depending on the contrast, in a matter of a minute or less-- maybe even 30 seconds-- you would stop seeing what was presented to the eye, because you had changed the adaptation level in your photoreceptors. So then, subsequently, people had a clever idea. They said, we don't need to go through this incredible trouble of having people lie down with contact lenses and a projector-- because that's the only way it would work. We can do this much more simply. And so, to do that, I'm going to have a demonstration here. What I would like each of you to do is-- you see there's a light spot here and a dark spot there. I want you to fixate here and count to about 30. And be very relaxed about it. This is a Gaussian; therefore there's no sharp edge, and the eye tremor doesn't matter. Then, after you've counted to 30, shift your gaze to the bottom one. The first thing that happens, if you keep looking at it and fixate very tightly, is that the two spots on top disappear. And once they disappear, then you can look down. Everybody see them disappear? Good. And what happened when you looked at the bottom? AUDIENCE: [INAUDIBLE]. PROFESSOR: You got a reversal, right? You got a dark spot here and a light spot here. And you say, oh my god, what's going on here? So now let's do another experiment. I want you to do this again. But this time I want you to cover one eye and then do it again. Count to 30. And after you've done so, fixate down at the bottom, but switch eyes: cover the eye that you looked with and uncover the one that you didn't look with. And if you do that, you won't get any effect. So what does that prove? That proves that this is happening in the retina. Everybody agree? Very clear-cut. So what's going on here? Let's diagram this. Here we have the situation. You turned on these two stimuli. Initially, the sensitivity of your photoreceptors is pretty much the same everywhere, because it was a homogeneous background. Then after you looked at this for a while-- no, one more thing. In this case, the ON cells fired, and in this case, the OFF cells fired, saying, oh, dark spot; oh, light spot. Now, if you keep looking at this for a while, what happens is it begins to disappear.
When it does, you don't see anything anymore; what has also happened is that the sensitivity here has decreased for white light, and here it has increased. So therefore, once you've adapted, there is no response in the ganglion cells. Then comes the third step. When you look down, there is a homogeneous background. But this region is now less sensitive and this region more sensitive. And so the photons coming into your eye from those two regions hit the more and the less sensitive regions on the retinal surface, thereby activating the opposite cells-- here activating the OFF cells, here activating the ON cells. Sorry-- right, yeah. We get a reversal. So that explains why you see the afterimage. So now we're going to continue and get back to the color circle, to see what happens with afterimages in color. This color circle is unbelievably powerful, because it explains the afterimages you see with color. And I will come back to this circle. But what I want you to do now, again, is to fixate here, count again to about 30, and then fixate on the bottom. If you do that, I think most of you should see, again, a reversal. Here you would see something reddish; here you would see something greenish. Does everybody see that? Do you see that too? Good. So you have a very minor color deficit. So that's the case here. Now let me show you another one. Again, do the same experiment. And the afterimages you get here-- whoops. Sorry. Let me go back. The afterimages you get here are not going to be the complementaries. So what's going on? Let me explain. Here again is the color circle, and here are the prime axes. It turns out that if you adapt to this color here, the rule of the color circle says that if you go straight across, this is the afterimage you're going to see. And if you adapt to this one, this is the afterimage you're going to see. So let me draw that up. It looks like that. And that is true everywhere, as long as you go straight across the center. You could do this horizontally or even diagonally, and you get this reversal. So an afterimage can be perfectly predicted by the rules of the color circle: it lies on the opposite side of the color you looked at, going across the center. But now, if you use the diagonals, which I showed you before, where it didn't match: what we have here is this and this, and then the afterimages are this and that. So we don't get the complementaries of the cardinal colors, of course, because you didn't start on the axes that would predict those-- this one gives you that, and this one gives you that. If I had started with this and this, it would be the same as that, but rotated. So that clearly enables us to use the color circle to predict not only some of the basic effects of what colors we see, but also exactly what kinds of afterimages we get. Quite remarkable. So now, to drive this home once more. This doesn't work too well because the colors are crummy here, but we can try it. Everybody agree that this is a more or less black and white display-- a little bluish, unfortunately-- of a beautiful castle? See the fixation point here? What I want you to do is fixate here, again count to 30, and then I'll switch back. And if this works right, you will see the original black and white image in color. Keep fixating. Count to 20 more. And then I'll switch back. Did you see the colors? Yeah. So this is a very clever demo I found.
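The across-the-center rule used throughout these demos is compact enough to write down. Here is a sketch assuming the polar color-circle representation from earlier (hue as angle, saturation as radius); the claim that saturation carries over unchanged is a simplification of mine-- real afterimages are usually weaker.

```python
def predicted_afterimage(hue_deg, saturation):
    """Color-circle rule: the afterimage lies straight across the
    center, i.e. the hue rotates 180 degrees."""
    return (hue_deg + 180) % 360, saturation

print(predicted_afterimage(90, 0.8))    # adapt at 90 -> afterimage at 270
print(predicted_afterimage(210, 0.5))   # works on diagonals too: -> 30
```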
As for that castle demo, I'm afraid I don't remember the name of the person who came up with it. But the essence of it, again, is that you're creating an after-effect due to adaptation in the retina. And when you do this very cleverly, as in this particular castle picture, you can actually create an artificial impression of colors that is consistent with what the real colors of the scene would be, superimposed on a black and white picture. So that, then, is the essence of what I wanted to cover about adaptation. And we can now summarize what I covered today. First of all, I told you that there are three qualities of color-- hue, brightness, and saturation. And just to go back-- hang on for a minute. I want to go back once more to make clear something I forgot to mention. If you go around the circle, you change hue. And if you go from the periphery to the center, you change saturation. And the center here, if this color circle were 100% correct, would be a white area. Now, the basic rules of color vision are explained by the color circle, as we have amply seen. I have a color circle up on the wall in my office, because I'm always fascinated by this, even though I've been at it for years. So, the three photoreceptors we talked about in humans and in primates: the red, green, and blue cones-- which shouldn't be called that. People get mad when you do, even though that's what I call them; people want to call them the short-, medium-, and long-wavelength cones. They are broadly tuned, as I showed you in those absorption spectra. The color-opponent midget retinal ganglion cells form two cardinal axes, the red/green and the blue/yellow. And those, as I told you, at the level of the retinal ganglion cells and at the level of the lateral geniculate nucleus, fall into these two major categories. We don't have any along the diagonals. So for us to be able to see the diagonals properly, that has to be done somewhere in the cortex; it's not done at the level of the retina or the lateral geniculate nucleus. Now, I also pointed out to you several times already that the midget system is essential for color discrimination. The parasol cells can see stimuli even at isoluminance; they just cannot say what the color is. They don't, quote, "perceive" different colors. But they can detect any kind of change that occurs in the environment, even when it occurs at isoluminance. Color is processed in many cortical areas; a lesion to any single extrastriate area fails to eliminate the processing of chrominance information. It can reduce it, but it doesn't block it out. That's true for many things-- the cortical areas are very, very complex, and they do interactive analyses of many different attributes, including color. However, I should add that this does not apply to area MT, because MT does not seem to specialize in color: lesions there produce hardly any color deficit. But there are several other visual areas-- we went through that: V3, and [INAUDIBLE] cortex, and so on-- that contribute to the processing of color. Now, the deficits at isoluminance apply to all categories of vision; they are not restricted to only those attributes processed by the midget or by the parasol system. All aspects of vision-- and the three I showed you were stereopsis, motion, and what was the third one? AUDIENCE: Texture. PROFESSOR: Texture, right. So all three of those are compromised when stimuli are presented at isoluminance.
The most significant aspect of luminance adaptation occurs in the photoreceptors. And it's explainable by the relative numbers, at any given adaptation level, of bleached and unbleached photoreceptor molecules, as in the case of rhodopsin. Lastly, afterimages are a product of photoreceptor adaptation and the photoreceptors' subsequent response to the incoming light. So that, then, is the essence of what I wanted to cover today. And I hope you did find this interesting, because certainly our ability to see color is quite a remarkable thing. It's amazing to get a sense of how the nervous system does that, even though at this stage we are still at a fairly early level of gaining a full understanding of it. Next time, we are going to move on to another fascinating topic-- at least for me fascinating-- which is depth perception. As I have mentioned before, that is a remarkable achievement, because images fall on a two-dimensional retinal surface, and from that, the third dimension has to be reconstructed. How that is done, we are going to discuss next time. Now let's make sure that all of you have signed the attendance sheet. And next, if any of you have any questions, I will be happy to try to answer them. So once again, I'm crystal clear, huh? Well, thank you very much for attending. And I do hope that your knowledge of how we process color information has increased a bit.
MIT_904_Sensory_Systems_Fall_2013
9_Illusions_and_visual_prosthesis.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right. Good afternoon, everyone. Today, we are going to have two topics. In the first half we are going to discuss visual illusions, and in the second half we are going to talk about prosthetic devices for the blind. What I would like to start with is, again, to send around the attendance sheet, so please put your name on it. And then, in addition, I'm going to send around some handouts, one for each person. I'm going to hand them out in both directions. Thank you. As for the handouts, I'll refer to them specifically as we go along. And I also will show pretty much the same figures from the handouts on the screen. All right. So for the first half of the presentation today we are going to talk about visual illusions. Now, the main emphasis of what I'm going to talk about will be an attempt to convey to you what kinds of procedures one can use to get a better sense of why we see what we see in illusions. Some of these approaches are strictly behavioral. And some have involved experiments carried out on a more thorough basis, using electrophysiological procedures and so on-- but those I'm not going to talk about too much. I will, however, relate the illusions to what we know about how the visual system works. Now, there are hundreds of illusions, and I will emphasize those that allow us to think about why we see what we see. One of these illusory effects is the so-called Hermann grid illusion, and that's the one I'm going to start with, because it's a fascinating illusion. If you look at this, what you see is that at the intersections you see sort of smudges. And so the big question arose as to why we see these smudges. Now, does everybody see those smudges? If you look at the handouts-- pay attention only to the front page, the one on the left, which says the Hermann grid illusion. So if you look at that, either on your handout or on the screen here, you see these smudges at the intersections-- not at the intersection you're fixating, but at the intersections away from where you're fixating. Does everybody see those smudges? OK. So the question arose why we have these smudges. And back in the 1960s, an investigator came up with a rather clever idea of why we see them. The person who came up with this theory was a fellow called Baumgartner. And what he proposed is that this illusory effect is directly attributable to the center-surround organization of the receptive fields of retinal ganglion cells. So what he proposed is that when the receptive field of a retinal ganglion cell falls on an intersection like this, there's more inhibition from the surround than when it falls not on an intersection, but along one of the straight white streets. And because there's more inhibition here than here, that's why we see the smudges. Now, that was a very attractive theory. And in fact, it has appeared and still exists in many textbooks that deal with illusions and with vision in general. Now, OK, so here is the exact description I just gave. And of course, you would expect that if this is the case, you should also get the smudges if you reverse the contrast. And in fact, that is true.
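The center-surround argument can be made concrete with a small simulation. Below is a Python sketch of my own construction (not Baumgartner's actual figure): it builds a grid image and applies an ON-center difference-of-Gaussians receptive field at an intersection and at a point along a street. With these illustrative sizes, the surround catches more white at the intersection (four white arms instead of two), so the ON response there is weaker-- the predicted smudge.

```python
import math

SQUARE, STREET = 21, 7   # pixel sizes, illustrative only

def grid_pixel(x, y):
    """1.0 on the white streets, 0.0 on the black squares."""
    period = SQUARE + STREET
    return 1.0 if (x % period < STREET or y % period < STREET) else 0.0

def dog_response(cx, cy, sigma_c=2.0, sigma_s=6.0, half=15):
    """ON-center difference-of-Gaussians: center excitation minus
    surround inhibition, summed over a square patch of the image."""
    resp = 0.0
    for dx in range(-half, half + 1):
        for dy in range(-half, half + 1):
            r2 = dx * dx + dy * dy
            center = math.exp(-r2 / (2 * sigma_c**2)) / sigma_c**2
            surround = math.exp(-r2 / (2 * sigma_s**2)) / sigma_s**2
            resp += (center - surround) * grid_pixel(cx + dx, cy + dy)
    return resp

mid = STREET // 2  # middle of a street
# Both responses come out positive with these parameters, but the one
# at the intersection is smaller -- hence the gray smudge there.
print("at an intersection:", dog_response(mid, mid))
print("along a street:    ", dog_response(mid, mid + (STREET + SQUARE) // 2))
```

The same sketch also exposes the theory's weakness discussed next: the predicted difference depends on the receptive-field size matching the street width, whereas the perceived smudges survive large changes in display size.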
So here, once again, in the contrast-reversed version, you see the smudges-- maybe not quite as sharp. But in this case, you get whiter smudges, whereas in the previous case, this one, you have darker smudges. So that kind of fits with the basic idea. But then, subsequently, people began to play around with various conditions to see when you do and when you don't get the smudge effect, and to see whether that would fit with this theory. Now, that's the basic tenet of just about everything psychologists and neuroscientists working on the visual system do: they try to critically examine various hypotheses. And it doesn't have to be just illusions; it can be a whole bunch of other things. They do this by systematically manipulating things in experiments. One can present these things just under free viewing conditions, as I'm doing with these handouts, or under more controlled conditions-- the subjects can look through a tachistoscope or something like that. All right. So anyway, let's now look at this in a bit more detail and see what's wrong with this theory. Well, first of all, if you look at this figure, what you can see is that the smudges appear irrespective of the size of the overall display. Now, that's a problem, because, of course, the size of the receptive fields in the retina, with their center-surround arrangement, is fixed. So if the effect depended on the precise way the center and the surround are activated by the intersections and the streets in the display, it should not hold across different sizes; it should only work for a particular size that approximates the receptive-field layout. Now, I will elaborate on that in more detail in just a minute. Another problem that arose is that if you turn this display just 45 degrees, so that it's a diamond rather than a square, the illusory effect declines considerably. And then, to make this even more problematic-- and this appears in your handout on the back of the first page-- when you arrange the lines at slight angles to each other, you don't see the illusory effect at all. Here we see it a little bit less, but it's still slightly visible; but here it's not visible at all. And you can compare that even better on the handout. So the question is, why would that be? Well, let me give you another example. If you make serrated edges-- I don't know if the people in the back can see the serrations; those of you in front can see them-- when you do that, you also fail to see the effect. And the problem here, of course, is that if you make these slight changes in the angles of the intersecting lines, or you make them serrated, the center-surround antagonism should be the same. And yet such a small change causes the illusory effect to disappear. So that, in itself, calls the Baumgartner theory into question. Now, there are several other factors that enter into this. This is a very subtle one. If you look at this side, we have different shades for the lines. On this side, we have the darker lines in front; and in this case, they're in the back. And here you see the illusory effect, and here you don't. Once again, how can that be, if the illusory effect is due to the center-surround antagonism? Even more dramatic is when you do this with colors. Here you have, again, the colored lines in front, and here you have them in the back. And here you can see smudges in the matching shades, all right?
This is darker yellow, and so on-- the same color, but a little darker at the intersections. So here you get a pretty good illusory effect. But once those white lines are in front, you get very little effect at all. So that points up a number of problems with the theory. And then one more fact: when you present the stimuli at isoluminance, you don't get an effect at all. So that raised some serious questions about the explanation of the illusory effect. And then a quantitative study was undertaken to see just what the layout of the receptive fields is in this display. So here what we have is a display. This square here is exactly five degrees from your fixation, and you can calculate the number of midget and parasol cells that are activated by that square area. You can see that, in total, more than 360 cells are being activated in that little region. And here is a display, in this case, of only the ON cells-- and you can double that by adding the OFF cells in the same location. This shows that there's just no good correspondence whatsoever between the number of ganglion cells being activated, and how their centers and surrounds are truly laid out, at a point where you get a very good smudge effect with this illusion. So all of this, taken together, raises the issue: if this is indeed a retinal phenomenon, why is there no fit here? And so it looks unlikely that this illusion can be explained by the center-surround antagonism theory. So then another theory was advanced, claiming that this doesn't take place in the retina, but instead in the cortex. This theory says it happens in the cortex due to the fact that you have these simple cells-- we talked about those before-- that have elongated, supposedly inhibitory, surrounds. And if one calculates this, it can explain the illusion a little better, especially since these cells, even at any given point in the visual cortex, come in several different sizes. So that's an alternative theory. But further research still needs to be done to verify whether this theory is more valid than the retinal center-surround theory-- which is certainly wrong. So I think maybe in another 100 years, they're going to drop the Baumgartner theory from textbooks and general reviews of illusions and vision. OK. So now I want to show you one more illusory effect, which is a variant of the illusion I have just shown you. And this one is something that can drive you nuts, I suppose. It appears on the front of the second sheet of the handout. You can look at it there, or you can look at it here. And you can see this scintillating effect-- and therefore it has been called the scintillating grid illusion. It was invented by a fellow called Lingelbach. That's a very strong, very dramatic illusory effect. And whether it can be explained by any particular theory about the visual cortex still remains to be settled. Right now I don't think there's a specific theory that can explain it. Now I will move on and talk about yet another illusory effect-- or actually a set of them-- that has a much better explanation. This one we don't have a handout for. If you look at this display here, you'll agree that the left and right bars are identical, right, going in shade from light to dark. So now, what I want to do is present these two images here.
I want you to fixate on the fixation spot, and we'll count to about, I don't know, 20, say. Keep fixating. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. 1, 2, 3, 4, 5, 6, 7, 8, 9, 20. And then I'm going to go back here, and you can see that this side is much darker than that one. Everybody got that effect? AUDIENCE: Yeah. PROFESSOR: I'll do it once more. Pretty dramatic, huh? OK. Now I'll tell you what I'm going to do. What I want you to do is to look at this with one eye. OK? Don't look at it yet. Wait a while. Just let your eye get settled. You're going to look at it with one eye. And then when I switch over, I want you to switch eyes. OK? All right, ready? Go ahead now and fixate, and we'll count to 20 again. 5, 6, 7, 8, 9, 10. 1, 2, 3, 4, 5, 6, 7, 8, 9, 20. Switch. And if you do that, you don't get an effect. OK? So what does that prove? Well, that obviously proves-- and that's why it's nice to do these little, itty-bitty experiments one can do in minutes-- it proves that this is a retinal effect. And the effect is actually due to adaptation, which we talked about a couple of sessions ago. Now, I want to show you yet another effect that has a similar arrangement. What I'm going to do here is put this display on. And then I'm going to turn these off successively as we go around the clock. OK? And what I want you to do, again, is to fixate here on the cross, and then I'm going to start running it. And what are you going to see? A pretty amazing thing. What do you see? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah? You see a green spot going around? Everybody see that? OK. So now the big question comes up: why on earth do we see this green spot? What is the explanation? Well, this again is a retinal effect. And to examine it in more detail, we have to go back to what I told you about color vision. When I told you about color vision-- if you remember, we did that experiment with the Gaussians-- I said that if you look at the color circle and you adapt to a stimulus here, which is pretty similar to what was on the screen, then the afterimage is on the opposite side, cutting across the color circle. So the afterimages that you see in color can be perfectly explained by the color circle. If you adapt to this, this is the afterimage. You adapt to this, this is the afterimage. It's always straight across the center line-- center point, I should say-- where you have the white center, and that perfectly explains your afterimage. So I'll show this to you just once more. OK, here we go, ready? You can experience it again, just to get a sense of it. It's amazing how the green circle becomes more and more and more vivid. Now, one more thing I'm going to tell you: when I turn this off-- keep fixating-- you're going to see a green afterimage, right? So that proves that, indeed, it's the afterimage that builds up over a period of time with that rotating circle, and it's fully explained by the color circle. All right. So now we're going to move on and look at yet another illusory effect. If you look at this display, what do you see here? You see a light square and a dark square, right? Everybody see that? Now, what if I told you that what you're seeing is a total misconception? So I'll tell you what I'm going to do. I'm going to leave these two squares exactly as they are, but I'm going to change the background. Ready? What's the difference here?
The difference is that now the background is uniform, right? And you can see that these two squares are indeed identical in contrast, right? But now, if I go back to the previous one and you look at it more carefully, you can see that the background goes gradually from light to dark. Of course, the projector makes it look bluish-- it shouldn't be, but that's OK. And it's that gradual shift in the illumination level that fools you into thinking that one square is light and the other is dark. And that has to do heavily with the fact that the receptive fields of cells in the retina have a center-surround organization and are sensitive to local contrast differences. That's their make-up. And so until I point it out to you, you can't even tell that this is a gradual shift. And if it did not also change in color, it would be harder to tell. It has to do with the fact that you're making local comparisons; because of that, when you make a distant comparison, you are inaccurate about it. So this illusory effect, again, can be readily explained on the basis of what we know about the center-surround organization of receptive fields in the retina. OK. Now, if you look at the same thing in color-- which doesn't come out too well here because of the projector-- you can still tell that these do not look the same color, again because of the gradual color shift in the background. But they are exactly identical, just as before. So this works not only for contrast, but also for color. Now, some illusory effects don't work for color and some do. All right. So now yet another related effect, which I'm sure some of you have seen before: if you look at this circle, we agree that it's uniform in color. Now, if I put dark behind half of it and light behind the other half, it still looks pretty much the same-- maybe a little darker here than here. But then if I put a partition here, as you can see, then this side definitely looks darker than that side. OK? So this, again, involves contrast effects, and we'll talk about these contrast effects in a bit more detail. Next, let me turn to another illusory effect that you can see in many, many textbooks, which is called the wallpaper illusion. If you look at this-- I'll come to it in a second in your handout, where it appears on the back of the second page-- it looks as though these are not parallel rows. I shouldn't say lines, but parallel rows of black and white squares. It looks as though they're tilting back and forth like that, and it's quite dramatic. And so now we can ask the question: why do we see this? What is it due to? It could be due to the way the squares are aligned. It could be due to the fact that there is space between the rows-- that gray space. And so one can do experiments like the one you have on your next page there. First, let me prove that we do indeed have parallel rows here. These rows are fully parallel, even though they appear to tilt back and forth. To convince you: you can see that the lines are identical, and yet they do not look parallel at all, because of this illusory effect. OK. So now we can fiddle around with this a bit. That's the kind of thing you guys would probably enjoy doing yourselves-- looking at various illusions and playing around with them. This is an example. This is the basic wallpaper illusion. And now you turn this into an even checkerboard while keeping the lines in between, and you get no effect whatsoever. And then here we eliminate those gray bars, and the illusory effect is greatly reduced.
So consequently, the illusory effect has to do with both facts: that you have space between the alternating black and white rows-- meaning these gray lines here-- and that the rows are offset. When they're not offset, you don't get it. And when you don't have the gray bars, you don't get it. So these two factors play an important role in giving rise to the illusory effect. Now, we can also examine what happens at isoluminance. And in this case, once again, the illusory effect is still there, but it is reduced under isoluminant conditions. This isn't actually isoluminant here, because of the projector; but when it's perfectly isoluminant, the illusory effect is largely lost. All right. So now we have a variant of this illusion which is fascinating. This one I will show you on the next page. If you look at just this display, you're convinced that we have alternating rows that are tilted back and forth, right? Well, is that really true? It turns out that if you take a bunch of red lines that are perfectly parallel and superimpose them on the display, you can see that the display is actually parallel. The reason it doesn't look parallel is the way these individual little pieces-- which you're not even conscious of-- are arranged. So let's look at that in more detail. Let me add one more thing here: at isoluminance, you get practically no effect. So these illusions, this one and the previous one, depend heavily on contrast differences. OK. So here we go. Here's the basic illusory effect. And the way this was constructed is shown enlarged on top. Alternate lines run 1, 2, 1, 2, 1, 2, 1. What you have in the top line, and the third, and so on, is in this orientation; here you have them reversed. And that vague cue is what somehow produces the illusory effect for you. Now, if you make all the lines similar-- in other words, if you don't reverse the alternate lines-- then what you see is that the bars all appear tilted slightly upward. Because of that, the illusory effect is not as dramatic as this one, but you still have it. All right? So what you can then do with this is enlarge on the effect, as shown here. It looks as though you have a very unequal set of lines, as a result of which you do not see a set of clear-cut squares. And if you again superimpose truly parallel lines on it, you see that the rows are actually parallel. All right. So that explains that illusory effect in terms of how we integrate information about shapes we are not even aware of into the perception of the whole. Now, that's not much of an explanation-- it doesn't tell you what neurons are involved or anything like that-- but it gives you a general concept of how the illusory effect is created. Now we come to yet another fascinating illusory effect, and that one is on the next few pages of the handout. If you take the back side of page 3, you will see this illusory effect here. Does everybody see that? Now, this illusory effect gives you the sense that you're seeing a whole bunch of swirls. All right? Does everybody see those swirls? Instead of seeing 1, 2, 3, 4 circles, you see swirls. And you say, what on earth is giving us that sensation? So what you want to do is look at it analytically. And what you see here, first of all, is that you have alternating black and white squares. And you ask, well, are the alternating black and white squares the central point?
But then if you look at it more closely, you can see-- I don't know how they came up with this one-- that if you look at each of these, there's a gradual shift in the orientation of the squares as you go around the circle. So this square becomes progressively, very slowly, more rotated. OK? And that goes around the clock, and that's true for every one of those circles. And you can say, well, is it due to the black and white squares? Is it due to the change in the orientation of the squares that you're not even aware of? Or what? So what kind of experiment do you do to check this out, to come up with some understanding? What kind of game can you play? Well, the first game you can play is to say, let's eliminate these slight changes in orientation. OK? And so what you can do instead, as shown on the next page, is have a bunch of circles of black and white squares which are all in the same orientation. And as you can see here, there are no swirls. You have four circles, very clear. So therefore, obviously, the squares changing their orientation in a subtle fashion going around the clock is very important in producing this illusory effect. Well, then you can say, OK, but is that all of it? And so what we can do, as shown on the next page, is maintain the orientation shift but make them all the same contrast, as we have here. And when you do that, you have a rather weak illusory effect. There's still a little bit left maybe, but it's kind of weak. And then if you instead make it isoluminant, you get some in-between effect. So on the basis of that, we can conclude that what is very important in this display is that you have a gradual shift in the orientations of these squares. And that, in turn, suggests that somehow the way these squares activate orientation-selective cells in V1 plays an important role in giving rise to this strange illusory effect. OK. So now what we are going to do is look at yet another illusory effect, which has a different source, and that one is the famous Muller-Lyer illusion. The fact is that these two bars here are identical in size-- do I have the picture-- like that. OK? But when you put these inducing elements, as we will call them, on there, you'll swear that this one is longer than that one. OK? So now we can ask the question, what kind of manipulations can we undertake to see what that is all about? And let me come back to that in a minute. What you can do here is make the inducing elements and the bars different in color or in contrast, and you still get a strong illusory effect. OK? So making them isoluminant or making them opposite in contrast does not eliminate this illusion. So the theory that somebody has come up with is shown here-- and I'm not sure I buy it, but it's nevertheless a theory. It claims that the brain interprets these two displays as differences in depth, as if one were the protruding corner of a building seen from outside and the other a receding corner seen from inside, as shown here. And that difference, which we almost unconsciously, if you will, interpret as being at different depths, is what gives rise to the illusion according to this theory. Of course, there's a simpler theory that says we just integrate these two bits of information. All right. So now we can ask the question, if you have illusory effects of this sort, is this something that really takes place higher up in the brain, or does it take place in the retina, where we cannot separate these inducing and testing elements?
Well, the way you can test this-- I'm going to show you another illusion which is similar. Here we have the so-called Ebbinghaus illusion. This is a very similar kind of illusion, with a test element and an inducing element. The test elements here are these two center circles, and you swear that this one is a smaller circle than that one, right? Here we have a bunch of small circles, and here we have a bunch of large circles. So now we can ask, well, is this something that takes place in the retina or what? So what kind of experiment can you do? Well, the experiment that you can do is to present these displays interocularly. What does that mean? What that means is that you present this to the left eye and this to the right eye. And you use a device for this kind of thing that's used by psychologists; it's called a tachistoscope. You look through it. It can present things separately to the two eyes, and you can flash them on briefly so you don't get any binocular rivalry. And when you do that, you can systematically vary the size of these to make sure that you get a quantitative measure of it. And when you do that, you find that the illusory effect is just as good under interocular conditions as it is under binocular viewing conditions. This has been done with many kinds of illusions, including the Muller-Lyer one. And when you do that, you show, in essence, that this is an effect that most likely takes place in the cortex, certainly not in the retina or the lateral geniculate nucleus. It takes place someplace where the images from the left and right eyes converge. All right. So now yet another set of illusions that I want to show you. I think we're done with these illusory effects; you can take this home and enjoy it or show it to your friends. Now I'm going to show you just two kinds of motion illusions. OK, here's the first one. I want you to look at the screen, and I'm going to flash something on briefly, and I'm going to do it repeatedly. Ready? What do you see? Do you see something moving? A very brief sense of motion that you see. This twirls a little bit. And this is called the snake illusion, which is really curious. This has, again, appeared in quite a number of textbooks, and we do not have a really good explanation for it. But we have another one, which I think is related. OK. The next one is called the waterfall illusion. What we are going to do here is I'm going to set this in motion. In this case, I don't want you to fixate. What I want you to do is just keep looking at it in various places. You can look here, here, here, and here, and so on. OK? And I'm going to let it run for a while, and then I'm going to stop it. OK? And I want you to tell me what your aftereffect is. Ready? OK. I'm about to stop it. Are you ready? Pay close attention to what you're going to perceive once it stops. Anybody see what happened? What did you see? AUDIENCE: [INAUDIBLE]. PROFESSOR: You see the water running up here, right? OK. I'll do it once more so you can appreciate it. Pretty nice, huh? OK. So this illusory effect is one that also occurs interocularly. And it has been shown in recordings from area MT-- we talked about the fact that area MT is a region where cells respond selectively to direction of motion.
Well, people have shown that if you stimulate cells in MT with this kind of a display for an extended period of time-- a minute, maybe-- then after you stop it, the cells tuned to the opposite direction of motion will discharge for a while, and that is what the upward sensation you experience is attributed to. So it's clear enough that this is something that takes place predominantly by virtue of direction-selective cells, and probably direction-selective cells in area MT. All right. So then what we are going to do here is look at some other limitations and ambiguities in perception. And in particular, we're going to examine some figure/ground relationships. The first one I'm going to show you is this one. If you look at these two diagonals, a and b, they certainly look very, very different, don't they? OK? One looks shorter and at a different angle. But now if I eliminate the trapezoid in the surround, you can see that they're actually identical. OK? I'll show it to you again. Pretty dramatic. So that's an interesting illusory effect. And it's clear enough that this is something that takes place in higher cortical areas. Yet another one here that I'm going to show you also has something to do with figure/ground relationships, which have been extensively studied, especially by Gestalt psychologists. And I spoke about this a little bit last time. What you see here is a bunch of vases, so to speak, right? That's good enough. So what I'm going to do now is show you another set of vases. Are you ready? There is another set of vases. Everybody see the vases? Now, what if I tell you that there's much more to this than just vases? If you look really closely, you're going to see something else there. What else do you see there? AUDIENCE: Faces. PROFESSOR: Oh, OK, two faces, two guys facing each other. Now, if I change the relative number of elements in here, which changes the figure/ground relationships, you get a different effect. Here you predominantly see the faces rather than the vases. OK? And that's a cute way of putting it: I see the faces, not the vases. And then you switch it, and you say, I see the vases, not the faces. All right? So that then is a way you can play around, again, with various illusory effects by fiddling around on your computer and creating things like that. OK. Now, yet another strange effect is one where you have difficulty telling what's what in this kind of a display. You say, well, what's happening here? Are the arrows pointing to the right, or are they pointing to the left? And that reminds one of a fellow called Yogi Berra. How many of you have ever heard of Yogi Berra? Oh, my goodness, very good. OK. So he was a famous catcher for the New York Yankees, right? Well, he was a very interesting guy who came up with all kinds of interesting statements. One of those many curious statements-- it doesn't relate to this, it just comes into my head-- is, "It isn't over until it's over." That's one of his, OK? The one that's relevant here is, "When you get to a fork in the road, take it." Yeah? That's a Yogi Berra quote. And so what we have here is a sign that you come to a fork in the road, and you have to decide to take it. What are you going to do? Are you going to go this way, or are you going to go that way? So that's the kind of game you can play with this kind of strange effect that you can create to confuse your ability to organize your visual percepts. Another famous one-- maybe not that famous, actually.
Maybe some of you have seen this. This, again, appeared many, many years ago in The New Yorker. It says, "I'm turning into my mother." Now, keep looking at it and tell me, why do you think that comment is being made? What we have here is a young woman looking at another young woman, right? But now if you keep looking at this display, look: this is a nose here, and this is a mouth. Do you see an old lady now? OK. As you play around with this, it gives you a confusing percept. And ultimately, you can see the young person looking that way or an old lady looking the other way-- I mean looking not backwards but across. And that's why the statement says, "I'm turning into my mother." So the essence here is that playing around with these kinds of illusory effects is a lot of fun, and we all very much enjoy looking at illusions. And some of us like to play around with them to try to figure out the reasons for their existence. Now, I want to show you one more, which I think is very appropriate at this time. This is the famous Greek key motif. Everybody's seen this, I presume. Does everybody know where this is displayed, actually, at several places? In the Senate. OK? Whenever you have a picture of the Senate Building, you will see this. All right? Now, that's a little bit amusing, actually, if you think about it. So this has become symbolic of democracy. And considering just what happened most recently, you could almost say that what's happening here is that you cannot tell whether you see this guy or this guy going around. OK? And you could say, oh, these are the Democrats, and these are the Republicans. And you have sort of a conflict as to which ones you see. All right? And that is, in a way, symbolic of what's going on today. But to educate you a little bit-- and I suspect most of you probably know this already anyway-- this Greek key motif actually has its source in the so-called myth of the Labyrinth that had imprisoned the Minotaur. Who knows who the Minotaur is? Oh, very good. Many of you do. All right. So the Minotaur was a creature that was half bull and half human. All right? And what happened, according to this myth-- and let me tell you, it's a myth, and it's a bit of a risqué thing-- there was a king called King Minos, whose wife Pasiphae-- I'm not sure how you pronounce it properly-- had intercourse with a bull and, as a result of that, gave birth to the Minotaur, a half-bull, half-human monster, if you will. All right? And that became so threatening and so irritating, I guess, to King Minos that he asked one of his subjects, Daedalus, to create a huge maze, and he put the Minotaur in it. It was such a maze that the Minotaur just couldn't get out of it. All right? But then things got progressively worse, and King Minos decided that he had better get this Minotaur killed. And so he asked a person whose name is Theseus, OK, to go into the maze and kill the Minotaur. Now, the king also had another offspring, whose name is Ariadne-- nice name, Ariadne. And she said to Theseus that if you take a ball of thread and unwind it as you walk through the maze, then once you've done what you have to do, you can follow the thread back, and that way you can get out of the maze. So that's what Theseus did, and then he went on and killed the Minotaur.
Now, the story didn't end there, because somehow-- for reasons I'm not sure I remember the details of anymore-- King Minos had a falling-out with Daedalus, and he jailed Daedalus and his son Icarus. And while they were in jail, they tried to figure out how they could escape. So Daedalus fashioned a pair of wings made out of birds' feathers and wax and put them on Icarus. With these, Icarus could fly, and he flew and escaped from the prison. But unfortunately, he flew fairly high up, where the sun was very hot. That melted the wax, and Icarus, sadly, fell to his death as a result. So that's quite the story. And that's something that obviously some of you already know. And those of you who don't will probably remember it, as so many Greek myths are fascinating, ancient stories. All right. So here is actually a depiction of the killing of the Minotaur by Theseus. This is a sculpture that exists in Greece. Pretty nice, huh? OK. Now, the last confusing image that I wanted to show you is this one, which appeared-- I think this one also appeared in The New Yorker, but I'm not 100% sure. What you have here is a confusion: do we have three prongs here or two? And the reason I sometimes show this in class is that when our department was originally formed by Hans-Lukas Teuber, he created what were called three prongs. OK? And then it became sort of an issue: well, what are the three prongs? Are there two prongs? Are there three prongs? Who belongs in which prong? And so somebody came up with this picture that was put up as a joke in the department, back when the department was still in one of the old buildings that doesn't even exist anymore. All right. So that then brings me to the end of the illusory effects that I thought we'd talk about briefly. And I will now move on and talk about visual prostheses. All right. Now, as far as prosthetics are concerned, you're going to hear quite a bit about that from Chris Brown, because in audition, an auditory prosthetic device, the cochlear implant, has become an incredible success. There are well over 50,000 people in the country now who have a prosthetic device like that that enables them to hear, to converse. It's really incredible. Unfortunately, when it comes to visual prostheses, we have nothing comparable at this stage. It's a much-needed device that will take many, many more years to create. But I thought I'd give you a crude sense of how that kind of work progresses, because there are many laboratories that are trying to come up with something that will provide an ability for people to see. Now, I can tell you that in the world there are more than 40 million people who are blind. In the United States, there are more than a million individuals who are blind. That's a huge number, so of course it is extremely desirable to come up with some device to do this for you. Now, this has a very, very long history, and people have tried all sorts of things. One of those is to try to induce the ability to see something in the world that you normally see with your eye by putting it through some other sense. Now, a simple example of that is Braille. All right? Everybody knows what Braille is, right? You can use your hand to feel something that has protruding elements in it that stand for letters. And so if you get into an elevator and you can't see, you can feel it, and then you can push the right button to get to the right floor. OK?
Now, that's one way it has been done. Another approach has been to present stimulation to your somatosensory system, either by putting something on your back or on your fingertip, so that an image is converted into activation of your somatosensory system equivalent to something that you would perceive if you could see. So that's the basic idea. And a tremendous amount of effort has gone into trying to come up with a workable prosthetic device for the blind. Now, there's a lot of disagreement as to where such a device should be placed in the body and also a lot of debate about what kind of procedures should be used. Some people advocate that it should be done in the retina. Some people advocate that it should be done in the lateral geniculate nucleus-- actually, some work is being done at Harvard trying to come up with something for the lateral geniculate nucleus. And there are some people who think it would be best to do it in the visual cortex. So let me go into this in a bit more detail. And let me, first of all, raise the question of what kinds of devices we need, and what the requirements are, for converting vision into something that enables blind people to see. Well, first of all, it's very important to be able to see basic patterns; very important to see motion; and very important, and often ignored, is that you have to be able to see something in the third dimension, because a person who is blind should be able to walk around in the world. And if you're going to walk around, you should be able to see where things are in depth. So those are the basic requirements. You're certainly not required to process color information, so that's not important. And there are several other things that are not important. But these are the three most important capabilities that you would like to have if you are going to create a prosthetic device for the blind. Now, the next question is, how are we going to deal with this, and what kinds of issues and problems are we going to encounter? First of all, the big issue is what kind of prosthetic device we should use. One approach, which I've already mentioned, is to try to convert visual impressions into other modalities, such as your somatosensory system. All right? The other is to create some sort of stimulating device, of which there are various kinds. One of those would be, for example, to put a bunch of electrodes into the brain, or into the retina, or wherever, and then selectively stimulate through this series of electrodes to mimic the visual field. Now, the other big issue is what brain areas should be considered. And because we're at an early stage in this, I think it is desirable for people to pursue just about any area they think might work. Now, there are two problems with a retinal implant. One is that the retina is extremely small and very confined, and it has these millions of receptors and ganglion cells-- how can you selectively activate them? The biggest problem, though, is that in most blind people the retina becomes non-functional. And even worse, it degenerates in a relatively short time, so that it is no longer useful for electrical stimulation or for whatever kind of stimulation you wish to engage in. So because of that, some people have moved on to the lateral geniculate nucleus. That area has some problems because it's deep in the brain, and it's quite small. OK?
And also, it tends to degenerate over time, although it takes a lot longer to degenerate than the retina does. Thirdly, people have considered stimulating the visual cortex, and that's what I'm going to talk about in a bit more detail just now. But there is a third big issue that's very important here-- let me go back. Ah, sorry. The third big issue is, can you create a device that has longevity? As soon as you put something into the brain or into the eye, there's going to be some sort of adverse reaction that may kill the cells at the tips of the electrodes or something like that. So all kinds of approaches have been devised to try to increase the longevity of the device. Now, I'm going to look next at the visual cortex, which is a promising site, at least I think it is, where you could electrically stimulate and use that to create a visual impression. All right. Here is a monkey brain. The monkey is very well suited for this because, as I have mentioned before, the posterior part of the cortex-- see, this is area V1-- is lissencephalic. And so it's relatively easy to place electrodes into it accurately. OK? Here, just as a reminder, is the central sulcus. Here's the lunate sulcus. And here, of course, is area V1. The contralateral hemifield projects into this region. OK? This is shown out to about five degrees; that will map to around here, like that. OK? So now the next big question-- and this was initially studied in humans-- is, what do you see when you insert a microelectrode into the brain and electrically stimulate? This work was done by Brindley many years ago. And he discovered that when you electrically stimulate the human primary visual cortex, what you get is a small, star-like image. All right? So that's what he found. And that then triggered a great deal of research, including work in monkeys, to determine what could be done to create a prosthetic device that might just work. So the first step was to try to determine what happens when you put a microelectrode in here. This is, by the way, the layout of the visual field in the posterior cortex, with much more area devoted to the fovea than to the periphery, because, of course, there's a much higher packing density of receptor cells, as well as retinal ganglion cells, in the foveal representation than further out. In the cortex, the thickness of the gray matter is constant-- it's roughly 2 millimeters thick-- and the packing density of the cells is pretty constant as well. So therefore, you need to allocate more cortical space to the huge number of retinal ganglion cells projecting from the fovea through the geniculate to the cortex than you do for the periphery. And that's why you have this kind of layout, with much more space allocated to central than to peripheral vision. So that's a very important factor in considering how to create a prosthetic device. So now the first step in doing this kind of work, if you're going to do serious experiments on monkeys, is to determine: what does the monkey see when you electrically stimulate the visual cortex? And so the experiment that's been done here is one in which a monkey has been trained to make an eye movement to the bigger of two visual stimuli or the brighter of two stimuli. In this case, we have two visual targets.
By the way, one of these is going to appear where the receptive field of the cells that you're going to stimulate in the visual cortex is. So then you may sometimes make this one brighter, sometimes that one. The monkey makes a saccade to it, and he gets a drop of apple juice as a reward. And then you can systematically vary the relative size of the two stimuli to see what kind of a function you get, and you can do the same thing for their relative brightness. And then on some trials, interspersed with those trials, you pair electrical stimulation with a visual target and see which one the monkey goes to. And then you systematically vary, as I said, the brightness or the size of those stimuli. OK? We vary the contrast in this case, and in this case, we vary the size. OK? So that's the basic experiment. And if you do that systematically, you can tell exactly what the size and the contrast are of the visual stimulus created by electrical stimulation. Here's an example. When you present two visual stimuli and you vary the percent contrast difference between them, you can see that the choices cross over, indeed, when they're identical. So that tells you that this system works: it says these are identical. And the same thing is true for the size series. So in this way, if you do the same experiment with electrical stimulation, you can find out where this 50% crossover point is. This is for contrast, and this is for size. And then you can establish this at slightly different locations in the visual field. And if you do that overall, you come up with a summary statement, which says that when you use currents between 20 and 120 microamps, at eccentricities between 2.5 and 3.5 degrees, the contrast of the visual percept created in monkeys is 6% to 12%, and the size is between 15 and 20 minutes of visual angle. That's quite small, fortunately. So it's a local little spot, like a star-like image. Now, what do we mean by 6% to 12% contrast? Let me tell you, because that's a useful thing for you to memorize. When we talk about percent contrast, what you do is you measure, for example in candelas per square meter, the luminance of the target and the luminance of the background. Then you subtract one from the other, divide by the sum of the two, and multiply by 100. So when you do that, this is roughly 8% contrast, and this is roughly 75% contrast. That gives you a sense of what contrast means. All right? So what that means, in essence, is that the visual stimulus created by the electrical stimulation is fairly low contrast. All right. So now, once one knows what the visual percept is that is created by electrical stimulation in the monkey, the next important question is, what kind of electrode array should one put together? Well, the big issue here has to do with the so-called magnification factor that I just talked about. So let's look at this analytically. First of all, I already mentioned this: we have a very high packing density of photoreceptors in the fovea and a much lower density further out. And this pathway goes through the optic nerve to the lateral geniculate nucleus and then up to the visual cortex. So that's the basic arrangement, meaning that there are many, many more retinal ganglion cells per unit area that impinge on the visual cortex near the fovea than in the periphery. To examine this, we have another picture.
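As a quick aside, the percent-contrast computation just described is easy to pin down in code. This is a minimal sketch; the luminance values below are made up so the results land near the lecture's approximate 8% and 75% figures, not actual measurements.

```python
# Percent (Michelson) contrast: difference over sum, times 100.
# Luminances are assumed to be in candelas per square meter; the example
# values are illustrative.
def percent_contrast(l_target, l_background):
    return 100.0 * abs(l_target - l_background) / (l_target + l_background)

print(percent_contrast(54.0, 46.0))  # 8.0  -- a faint, low-contrast spot
print(percent_contrast(87.5, 12.5))  # 75.0 -- a high-contrast spot
```

With that formula in hand, back to the magnification factor.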
It shows the geniculate again; the fact that the cortex is constant in thickness; and the layout of the visual field on the posterior cortex, with more area allocated to the fovea than to the periphery. All right. Now, I've shown you a variant of this before. If you present an arrow across the visual field here, this is the impression it makes, with much more activation in central vision than in peripheral vision. You don't have to worry about the details; it simply tells you, as we had discussed before, that the images created depend heavily on where you stimulate in the visual field. OK. So now, to look at this quantitatively, we put a bunch of dots up in the visual field. This is the central four degrees, and this is the actual activation in the visual cortex. OK? Meaning that in the fovea, two elements-- this one and this one, here and here-- take up a lot of space, whereas in the periphery they're very close to each other. That's due to the so-called magnification factor. And so one of the basic rules is that we've got to stimulate the visual cortex taking the magnification factor into account. So if you have this kind of spatial arrangement, and we then were to put in a bunch of electrodes like this, which we will call proportional electrodes, then if you stimulated all of them, you would get a similar square, with one proviso: namely, that the little dots would be smaller in the fovea, obviously, than in the periphery. So if you have this kind of arrangement, what you can do is take a camera that has a fixed number of elements, and each of those elements you hook up to one of the electrodes. If you put the words "fiat lux" there, this is what you hypothetically would create, like that. OK? That's what it would look like. By contrast with this arrangement, if we were to use an electrode array that you can readily buy-- you cannot buy proportional arrays; you have to create them yourself-- a fixed array in which the elements are equidistant, then if you stimulated all of those, you would get a butterfly image. I will show a movie of that as well. And that means that if you do the same conversion as before, this is what you would see, in contrast to this. So in other words, it's a greatly distorted image, because you didn't take the magnification factor into account. All right. So then what you can do, actually, is increase the number of electrodes in the center here. OK? And that would mean-- in this case we have 428 elements-- that you can recreate "fiat lux" pretty well with this increased number of elements, where with the original set you couldn't make anything out. Now, the alternative to this would be to take a whole bunch of equally spaced electrodes, which you can buy commercially, and put them in. But then to get the same acuity, you would need something like over 600 electrodes, whereas in this case you would only need about 420. So because we obviously want to minimize the number of things you put into the brain, it's desirable to use a proportional device. So that's the first big step. Now, let's assume that you have created a proportional device. The next thing would be to see how you stimulate these through a camera. So what you do then is an experiment in which you take a human subject, and you put a camera on a helmet, and the person looks at a monitor.
And otherwise, the rest of the world is excluded by this bezel here. And then the camera looks at something, and it can display the image here. Well, one thing it can do is display the image as it is. But the other thing, which is much more important, is that it can display that image as if electrical stimulation had taken place. Now, this becomes complicated, because you have to do a lot of conversion of the input from the camera in the computer that's going to drive the electrodes. Now, the first important point-- I mentioned this before-- is that if you present a small spot-- this is actual real data here-- in the center of the receptive fields of a small array of cells in the visual cortex, you get a vigorous response, whether it's a light increment or a light decrement. But if you use a larger spot, you get no response, meaning that the cortical cells look at local differences. That has a lot to do with what we talked about with the illusions. And so what you have to create is an algorithm that mimics that. The way you do that is you have an individual element for each of the electrodes. If the whole element in the computer that gets the input from the camera is activated uniformly, there's no activation of the electrode. But if an edge appears, like here, here, or here, then the electrode is activated. OK? So you create local differences by this conversion system-- by this algorithm that, when the camera looks at the world, will respond whenever there's an edge, an illumination difference. OK. So now the way this looks is like this. Again, as I've told you, the dots are smaller in the fovea and larger further out, but that's OK. So that's the basic activation system. And so now, having created such an algorithm, we're going to examine what the person who carries this camera on his head can actually see. Now, the way this can be done is that you either physically move the stimuli in the world, or the person looks at them and slowly moves his head around. OK? So let's first look at what would happen if, instead of a proportional display, we used an equally spaced display like that butterfly display. OK? What I'm going to show you here on the left is what the camera is viewing. And here I'm going to show you what is actually perceived by the person through this algorithm, where the image is converted into what he's looking at through that bezel. OK, you ready? I mean, you don't see a square at all, right? Because you're not taking the magnification factor into account, you get a huge distortion. So now let's move on and ask, well, what happens when you use a proportional display? All right? And now we get a rather nice story. Again, the visual display is on the left, and the image created is on the right. Ready? So the rule, therefore, is that you must use a proportional display. That can be done either by having it truly proportional as you put it into the visual cortex, or by having an equally spaced array and letting the computer compute the proportional arrangement. But the disadvantage of the latter is that you need many, many more electrodes. All right. Now, what I'm going to do is show you whether you can read something when you use a proportional display like that.
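Before the reading demonstration, here is a minimal sketch of the edge-based conversion algorithm just described: each electrode's element looks at a small region of the camera image and is activated only when a luminance edge falls inside it, never under uniform illumination. The grid size and threshold are illustrative assumptions, not the actual parameters used in this work.

```python
# Minimal sketch of the camera-to-electrode conversion: an electrode fires
# only when its image patch contains a local luminance difference (an edge),
# not when the patch is uniformly light or uniformly dark.
import numpy as np

def electrode_activations(image, grid=16, threshold=0.1):
    """image: 2D array of luminances in [0, 1]; returns grid x grid booleans."""
    h, w = image.shape
    active = np.zeros((grid, grid), dtype=bool)
    for i in range(grid):
        for j in range(grid):
            patch = image[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            # Uniform patch -> no activation; an edge inside the patch
            # produces a large luminance range -> activation.
            active[i, j] = (patch.max() - patch.min()) > threshold
    return active
```

A real implementation would also space the elements according to the magnification factor, as discussed above; this sketch uses a uniform grid purely for brevity.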
And in this case, we're going to have a bunch of big letters, and either the letters are going to move slowly across the visual field, or your head is going to move slowly with the camera, like that, to view the display that's up on the wall. OK? So if we do that-- are you ready? See if you can read it. OK. Everybody able to read that one? OK. Who remembers what "fiat lux" means? I've only shown it about four or five times. Let there be light. OK. So now, just to be [INAUDIBLE] attractive about it, I'm going to show you one more display. And this one, again a series of letters and words, is a very, very famous statement that John F. Kennedy made, which I think most of you will remember. And eventually, as you see the whole thing, you will be able to put it together. Are you ready? OK. Everybody's seen that, right? People nowadays, because of what's going on, are less inclined to take his advice. All right. So that is the essence of what I was going to cover today. Let me now provide you with a brief summary. I will start with the prosthetics. OK? Research on visual prosthetics is in its infancy, as I've noted. A great deal of basic research is needed before such a device can become effective. That's very unlike what you're going to hear about the auditory system. The brain area that holds considerable promise for a prosthetic device based on electrical stimulation is area V1. Now, a prosthetic device for electrical stimulation of V1 must take the magnification factor into account. That's absolutely essential, and too many people are ignoring it. Now, in a different color, we turn to the illusions. There is no unitary explanation for the great many visual illusions extant. I pointed out to you that different illusions have different sources and different explanations. The most popular theory explaining the Hermann grid illusion, based on the center/surround organization of retinal ganglion cells, is incorrect. That's a polite way of saying it's all wrong. A more likely theory is one that assumes that area V1 cells are involved. Retinal adaptation processes can explain illusions based on aftereffects, and you've seen quite a few of those. And many illusions disappear under isoluminant conditions. And lastly, there are no viable theories that explain illusions based on figure/ground relationships. We know what they are, but we don't know how the brain resolves figure and ground effects. So that is the end of what I wanted to cover today. Next time, we are going to talk about eye movements, and I hope that you will enjoy that as well.
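As a closing aside on the magnification factor emphasized in the summary: one common textbook approximation, assumed here rather than taken from this lecture, is that cortical magnification falls off roughly inversely with eccentricity, M(E) = M0 / (1 + E/E2) millimeters of cortex per degree. A short sketch of what that implies for a proportional electrode array:

```python
# Illustrative sketch: how much cortex is devoted to the visual field out to
# a given eccentricity, under an inverse-with-eccentricity magnification
# function. M0 and E2 are made-up constants, not fitted monkey V1 values.
import numpy as np

M0, E2 = 12.0, 2.0  # mm/deg at the fovea; eccentricity constant, in degrees

def magnification(ecc):
    # Cortical magnification in mm of cortex per degree of visual field.
    return M0 / (1.0 + ecc / E2)

for e in (0.5, 1.0, 2.0, 4.0):
    xs = np.linspace(0.0, e, 200)
    mm = float(np.sum(magnification(xs)) * (e / xs.size))  # crude integral
    print(f"out to {e:3.1f} deg -> {mm:5.1f} mm of cortex")
```

A proportional array spaces electrodes evenly in cortical millimeters, which means increasingly coarse sampling of the visual field with eccentricity, unlike a commercially available equidistant grid.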
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_17_Multitask_Learning.txt
So today, we're very pleased to have as our second invited speaker Richard Socher. He is the chief scientist at Salesforce. Richard actually also has a lot more connection to this class, because for several years Richard was involved either as instructor or co-instructor in teaching this material at Stanford, so he knows the course pretty well. And so today, he's going to be talking about some of the challenges and recent work in doing multitask learning in natural language processing. So welcome, Richard. Thank you. Hello, everybody. I'm excited to be here. Yeah, I want to talk to you today about what we, in short, call decaNLP. I want to first give a big shout-out to Bryan McCann. He's the first author of this paper. I've pitched this idea to a lot of people in the last three to four years, and most people were like, "This is too much pre-processing, because you're trying to do 10 different tasks in one model"-- that's sort of where the decathlon wording comes in-- but he really stuck to it and did all the pre-processing and all the things that you now know, like tokenization. It turns out a lot of different data sets have a different conception of what a word is-- whether something is two words or one word, and things like that-- and that changes how you write all your evaluation scripts and all of that. So Bryan is a really phenomenal researcher with us in the group, and Nitish has helped us a lot on the optimization side of this, and then Caiming Xiong, the Director of Research, has done a lot of really phenomenal work that's kind of helpful in pretty much all our projects. So I'm going to tell you a couple of different lines of reasoning that led us to this idea of multitask learning. The first one was trying to take a step back and look at the field-- not to give too much of a history lesson-- but basically, pre-2010, most natural language processing had these very hand-designed features, and machine learning basically just learned weights for these human-designed features in the optimization procedure. And so in 2010, Chris and I and others started to work on deep learning for feature learning. So everything was a word vector, and now we can back-propagate into them and actually learn those representations. And I think currently, we're in a state where we do a lot of deep architecture engineering for specific tasks, and you've seen this already: you have an NER model, you have a question answering model, you have a translation model. What each of these communities has at least converged on is probably some kind of neural network, but there are still a lot of different kinds of architectures of these neural networks that you're working on for each different task. And so the question is, OK, we're probably going to do that for another couple of years, because we're making good progress, but what's next on the research side? And what I actually love about this class so much is that you go from maybe not knowing much about NLP at all to being able to basically understand state-of-the-art research papers as they come out now, and this is one of those. So why not continue to work in this multitask regime?
In some ways, I feel like the community is a little bit like this cute dog, where we kind of randomly restart after every project. And it's kind of clear to me that if you have a lot of training data, and you define a specific data set and task on that data set, and you start to architecture-engineer your model to hill-climb on a particular metric, or leaderboard, or publications, or products, or whatever it is, then as long as your data set has roughly a good representative set of 1,000 times the number of output classes that you have, you'll probably get into a regime where you're at 80 to 90 percent accuracy, or F1, where you're basically doing pretty okay. And of course, when you look at ImageNet, you have 1,000 different classes in computer vision, and each has 1,000 images. So if you have roughly a million images, you do pretty well. And in machine translation, ideally you have many more-- you have hundreds of thousands of words, so you want many millions of examples of each of the words in their context. And of course, the caveat is that machine translation doesn't work at the level of humans, but it works well enough to have it at least in products, and even the best human translators use it as a sort of pre-translation and then clean it up. And so it's also clear to me that in this regime, if we want to get to more general AI capabilities, we need to have some kind of more continuous learning of a single model. Because if we keep restarting at every project, we're never going to get to a single model that encompasses more and more of the complexity of natural language. And when I say we start from random, you of course know that that's not quite true, because we do have some things that we pre-train, namely word vectors, and in computer vision, we have even more things. And in some ways that is an aspirational ideal for NLP, because in computer vision, you would be kind of crazy not to use some kind of convolutional neural network that has been pre-trained on a task like ImageNet when you start your project and try to classify objects or do object detection and a lot of other things. And the whole community could get behind that very quickly once it worked reasonably well, because there was sort of a single blocking task in computer vision: if you can't even tell apart a dog from a cat from a house, it doesn't really make sense to think of even larger vision projects.
And in NLP, we've had a lot of success with word vectors-- you know a lot of those now. It started with just a small window-based approach, with Word2Vec and GloVe. Then we had context vectors that were trained on machine translation: basically, instead of just having a single set of word vectors, we actually pre-trained the LSTMs that came on top of those word vectors. The way we trained that was actually Bryan McCann's paper on contextual vectors with machine translation. And then ELMo kind of replaced machine translation with language modeling, which of course is even better because there's even more training data, and it still tells you a lot, and in some ways captures a more complex version of the distributional hypotheses that we had in simpler word vectors. And BERT-- not quite a language model, but also trying to predict words in their context-- pre-trains a lot more layers and much deeper networks. So we see the success of pre-training a certain set of weights. And so the question is, why not try to pre-train the entire model? As in, including your output, your softmax, your pointer mechanisms and everything, and then just taking a completely pre-trained model and trying to do something with it. And that is kind of the goal that we have. So we sort of asked ourselves, why hasn't this happened? Why are we, you know, the first to think about trying to pre-train the entirety of the model-- the encoders, and decoders, and outputs, and everything? And I think part of it is that NLP requires a lot of different kinds of reasoning. You've seen many of them already. You have some logical reasoning-- like, there are 550 people in this room, 25 leave, are there still people in the room-- and you can logically answer that question. And you have lots of different kinds of linguistic and emotional reasoning-- sentiment analysis: "this is a typical Nicolas Cage movie," and then you need to know that that's probably a negative review, unless you like Nicolas Cage movies. No judgment. And, you know, visual types of reasoning and so on. And I think partly because of that complexity, the field didn't really make much progress in the beginning, and then it kind of got separated-- in some cases, I think, artificially separated-- into all these separate tasks: you have named entity recognition, part-of-speech tagging, semantic role labeling, and so on. And it sounds kind of snarky, but, you know, it made a lot of sense at the time, and it allowed us to make a lot of progress in the community. But basically we started chasing these benchmarks, and all these different communities kind of started going off in their own ways. And we even have some communities that say, "We do general question answering," and there are literally workshops on general question answering. And when I asked the organizers, "Can I ask your model what the sentiment of this tweet is?" they're like, "No, that's sentiment analysis. Go to that different workshop. It's down the hall." But I'm like, "That's a question. Why can't you answer it in the general question answering workshop?" And so a lot of people then say, "Well, if you want to work on more general stuff, it has to be an unsupervised kind of task, and the future will not be supervised."
I don't think NLP will be completely unsupervised, and we won't solve it completely unsupervised, because in the end, language has a lot of supervision for people, and, I think, for systems also. You know, if there's a child and it's in a jungle, it will probably develop a pretty good visual cortex by itself, but it won't develop language by itself. And then also, I think if you just allow AIs to talk to one another, it makes very little sense for them to try to come up with as inefficient a communication protocol as humans have, with, you know, sequential processing of language-- because if there's no supervision from human language, algorithms and computers could just communicate in much more efficient ways with one another. So I think it's fairly clear we need a lot of supervision in NLP. And so basically, all of this has led us to trying to think about a unified multitask model for a lot of different NLP tasks. By the way, if you have any questions, just raise your hand. Okay, let's make this very interactive. Basically, we want this unified model to decide how to transfer knowledge, and not have it be manually assigned. In most cases, when you design your project, you say, "Oh, well, I know that named entity recognition and part-of-speech tagging help each other, because once you know something is a noun, it's more likely that it's also a named entity." And in this case, we want to basically allow the single unified model to figure out itself how to do domain adaptation and how to share the weights, and that will hopefully then lead to a lot of transfer learning and zero-shot learning capabilities. I also think that if we get to this hard goal of having a single unified multitask model, then we'll be able to more easily adapt it to new tasks, and we'll be able to deploy it in production more quickly. If nowadays you want to build a little squirrel detector and connect it to your sprinkler system, you can just download some off-the-shelf software, and it will basically kind of work. That is not the case if you try to do a pretty complex language project where you want to translate into some completely new language or, you know, analyze some website and then do something else afterwards. Also, when you actually try to deploy and use these kinds of tools in companies, you'll realize that there are a lot of different groups. There's the search group, and the chatbot team, and the translation team, and the social sentiment analysis team, and they all use different models, and they all deploy different models, and they all have to build a lot of overhead around that core of an AI model. So lastly, it's sort of back to what we had with this dog: I think that once we have this unified model, it will also be a first step toward being able to continually learn, and just have a single model that gets better and better over time and starts to capture more and more of the complexity of language. All right, any questions around the high-level motivation? All right. So then the question is, how do we actually make that happen?
I first sat down and looked at the general formats of all the tasks that you may experience in this class and that NLP has as a field in general, and I think they can be broadly classified into three different categories. Sequence tagging, you already know: things like NER, or aspect-specific sentiment, where in a specific context we want to classify whether a word is positive or negative. Then text classification: just a single label for the entire piece of text. And then sequence-to-sequence: a lot of different problems fall into that, and I actually personally love these three particular tasks-- machine translation, summarization, question answering-- because they are immediately useful. You don't have to explain to somebody, "Oh, but why do you need the semantic role labeler or parser?" If you're a layman on the Internet, you understand immediately why it's useful to do summarization, question answering, or translation, and an improvement in those tasks translates almost immediately into better products and into people being able to communicate better and more efficiently with language. So that kind of analysis led us to think about what I call the three equivalent supertasks of NLP. Basically, they are language modeling, question answering, and dialogue systems. Language modeling-- basically trying to predict the next word-- you've already worked on, and usually these days it's only used to rescore or basically to pre-train. But really, if you ask me a question and I then try to predict the next couple of words, that is also language modeling. And if you're able to predict the next couple of words after a question like "what were the named entities in the sentence"-- and then you just generate, you know, "Dresden was a location, Richard was a person" and whatnot-- then you can cast almost all of these tasks as language modeling. Similarly with question answering: you can ask any kind of question-- what is the translation, what's the summary, and so on. And with dialogue, right now it's kind of tricky, because there are no really good dialogue datasets out there, and a lot of times you want some interaction, so you have to run user studies, and most of the existing NLP tasks would basically be pretty short one-step dialogues-- like, "what are the named entity tags," you give them, and that's it. So it's a little bit of overkill, and because of that, we basically converged on question answering as our main formalism. And here is now an overview of the 10 different tasks that we have, all cast as question answering. These are literally the format of the training datasets, and eventually also the way we formulate the test sets. You'll see that for every single task, you have a context, which is some kind of document. It could be a Wikipedia article, it could be a tweet, it could be a longer document, whatever. You ask a question about it, and you want to generate an answer. And I'm actually curious whether you can think of any task in NLP that couldn't be formulated in this kind of structure. So, let's go over some of these. The first one is the standard task that you're all familiar with now: SQuAD, the Stanford Question Answering Dataset, where the answer is essentially a phrase somewhere in the context.
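Before going through the rest of the tasks, here is roughly what such an example looks like as data-- a minimal sketch; the dictionary field names are illustrative, not the exact decaNLP on-disk schema:

```python
# Every task reduces to the same (context, question, answer) triple.
# Field names are illustrative, not the exact decaNLP format.
squad_example = {
    "context": "Nikola Tesla moved to Colorado Springs in 1899.",
    "question": "Where did Tesla move in 1899?",
    "answer": "Colorado Springs",  # a phrase copied from the context
}
sentiment_example = {
    "context": "A typical Nicolas Cage movie.",
    "question": "Is this sentence positive or negative?",
    "answer": "negative",  # a word taken from the question itself
}
# The model never sees an explicit task ID; the question alone
# carries the task definition.
```

The second example previews a point made below: when the answer is a word from the question, zero-shot classification becomes possible.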
But then the second one is something that you would never see in most generalized question answering workshops, and that is having a context of a single sentence, asking "what is the translation from English into German," and the output is again a sequence of words. In this case-- and we color them differently here-- this is blue, because all these words are basically not in the context and not in the question, and we will just generate them with a standard softmax to answer this question. We can also ask "what is the summary," and you can see that for those two it is in some ways artificial to make them into a natural language question. You could just say "translate" or "summarize," and that would just be like one kind of task token in your network. But for the other half of these tasks it makes sense, because the question is actually different for every example. This one here is natural language inference, NLI-- which was also covered before-- where we want to ask whether two sentences entail each other, contradict each other, or have some neutral relationship between them. You've seen a lot of sentiment. And this here is kind of important: we actually ask "is this sentence positive or negative" versus just "what is the sentiment." Why that is important is that, as you see here in green, this answer actually comes from a word in the question. And if we formulate it that way, we can eventually do zero-shot learning, where we ask a new question that was never asked before, for a new set of labels, and magically, in some cases, it still actually works. We can ask questions like "is this story happy or sad," and it will still give us an answer, even though we've never given it a training dataset of a bunch of happy and sad stories. So it's a kind of zero-shot classification that you get in some cases if you formulate your questions in a way that the answer is present as a word in the question. Then we have semantic role labeling here-- "what has something experienced," kind of a random, weird question. Then we have zero-shot relation extraction: "who is the illustrator of Cycle of the Werewolf." We also have some dialogue state tracking: what is the current state in a dialogue, where the context just keeps on growing with the dialogue. And then we also have WikiSQL, a translation task, but not translating into another natural language-- translating into a SQL database query. It's actually a super-helpful task. There's a lot of data out there that is stored in databases, and if you can access it without having to ask somebody who knows how to program SQL, it will make that data available to a lot more people, so they can analyze it for things like business analytics and so on. And then here, Winograd schemas and anaphora resolution. Some people call this kind of thing common sense reasoning, but it's mostly just anaphora resolution: trying to understand, in this context, who had given help-- was it Susan or Joanne-- and based on this context, you should be able to figure that out. And again here, the question is different for every single example. All right, yeah? When you're testing it-- like when you ask, is this sentence positive or negative, does it sometimes, like, [inaudible]? Great question.
So the question is: when I ask "Is this sentence positive or negative?", will it sometimes accidentally switch to a different one of the tasks? We actually have a slide on that, and the answer is that it's surprisingly good at knowing how to go about doing the task and where to get the answer from. It'll make more sense in a couple of slides once we go over the model. Any other questions about the question answering formalism?

"Are you able to formulate text generation in the question answering format as well? Like, 'Tell me a story'?"

Good question: can we do open-ended text generation, like "Tell me a story", in this formalism? We don't have that as a task, largely because it's really hard to evaluate. The model will tell you some random stuff, and then, is that a good story or not? Is it grammatical? You'd have to come up with a lot of evaluation metrics, which we actually are doing for some of the dialogue systems. And in the case of dialogue, why are they equivalent? Because the context can just keep growing: every time the user says something, you try to predict the next answer in that dialogue. I think you could very easily use this to generate text. You basically ask it something like "What is a good ending of the story?", you maybe start the context with two or three words, and then you ask the model to generate more and more words, in the form of the network I'll describe in a second. Yeah?

"I was wondering, when you're training it and you're trying to research a new task, does it learn with less data?"

That is an amazingly thoughtful question, and it's so important that we'll have a bunch of slides on it. So we'll continue and get to that question in a lot of detail, because it's sort of why we're doing this. The short answer is yes, but we'll get to more details. All right. So these are the ten tasks, and again, this is the actual format. If you have a problem and you can cast it in this format, you can just take the open-source code, run it, and it'll work. When you analyze what we've done here: in some ways we've taken the task, which is usually in your head but not given to the model (the model is just given an input x and an output y in almost all supervised systems), and instead included the task in the set of inputs to the model. So you could call this meta-supervised learning. The question is our task definition for each of these different tasks. The model has to figure out itself what the question is asking, and that way it can also figure out itself when to transfer knowledge from the other tasks; y is again just the answer. So in some ways it's meta-supervised learning, and I'm quite excited, because once you allow the task to be given to the model as input, the model can decide itself how to go about solving that particular task, and now you can learn much more powerful models. Once we had the dataset, we thought: okay, how do we now solve this problem? The simplest way, you could just say: well, I have a big if statement. I have a classifier in the beginning, and if this is a machine translation task, then I run my machine translation model.
In general, in Python, that would still be one big model with a bunch of if statements, right? And that's not the goal, because then we wouldn't get any of the transfer learning and zero-shot capabilities we're hoping for. We wanted the model to have the capability to internally adjust to these different tasks and make these decisions itself. All of those considerations led us to this model. Before I go into more detail, I'll give you the high-level overview. Again, you start with the context, you ask a question about that context document, and then we generate the answer one word at a time, by either pointing to the context (you've had pointer networks already, right? Great), pointing to a question word, or choosing a word from an external vocabulary with a standard softmax classifier. And we'll have a pointer switch mechanism that chooses how much to weight each of these three generation mechanisms. So let's dig into this model a little. Fortunately, in some ways it just takes the best of the current state-of-the-art techniques and puts them together in a way that generalizes well enough. You can look at all the code on decanlp.com; it has thousands of stars and forks combined, and you can run all of these experiments with just one command. It will download all the datasets and run everything, so you can really explore what it looks like. In some ways it just takes all the best ingredients from deep learning for NLP, most of which you've already learned about, and puts them together in a reasonable way. We start with fixed GloVe embeddings. Eventually we updated the embeddings to CoVe embeddings, and it would probably work even better if you updated them to BERT embeddings, but at some point we had to move on and do other things. Basically, you have a fixed set of word vectors, and that's important because some of these datasets are much smaller than others. As you know from SQuAD, if you backpropagate into the word vectors, you do really, really well on your training set, but then you won't generalize, because most of the test documents will include words you've never seen before. If you change all the word vectors during training, it won't work very well at test time and won't generalize to unseen words. So: fixed GloVe embeddings, and for words that don't have word vectors, we also have character n-gram embeddings. Then we pipe them through a simple linear layer, and then we have a shared bidirectional LSTM with skip connections. It's a deep one, so you skip to higher layers, and it's shared between the context and the question, so they have the same set of weights. Then we have a coattention layer, where we basically take outer products between all the hidden states of those two sequences, and again have skip connections to circumvent those layers as well.
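As a rough sketch of the encoder front end just described (fixed pre-trained embeddings, a linear projection, one bidirectional LSTM shared between context and question), under assumed sizes. The real model at decanlp.com adds character n-grams, skip connections, coattention, and transformer layers on top:

```python
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Minimal sketch of the front end of the decaNLP encoder.
    Hyperparameters here are illustrative assumptions, not the paper's."""
    def __init__(self, embeddings, hidden=200):
        super().__init__()
        # freeze=True: never backprop into the word vectors, so unseen
        # test words keep sensible neighbors (the argument made above)
        self.embed = nn.Embedding.from_pretrained(embeddings, freeze=True)
        self.proj = nn.Linear(embeddings.size(1), hidden)
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True,
                              bidirectional=True)

    def forward(self, context_ids, question_ids):
        # the same LSTM weights encode both sequences
        c, _ = self.bilstm(self.proj(self.embed(context_ids)))
        q, _ = self.bilstm(self.proj(self.embed(question_ids)))
        return c, q  # each [batch, length, 2 * hidden]
```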
So now you have context- or question-dependent contextual representations. Then we feed those into our transformer layers. We actually tried to use transformers for everything, with no LSTMs at all. Unfortunately, transformer layers were still very finicky and hard to optimize; there's a lot of trickery with the learning rates, and we just could not get them to perform really well on these ten different tasks. Sometimes you had one transformer network that worked really well on one task, but the only other transformer network that worked well on a second task had half the layers, and once you tried to have one network with the same number of layers, it wouldn't work on either of the two tasks anymore. So unfortunately, as nice as they are (they're nicely parallelizable on GPUs), they weren't yet robust enough to be used for this, so we have these LSTMs before and after the transformer layers. Then we essentially have a standard autoregressive decoder, where given the last state we generate the next word. And then we have the three pointer mechanisms. They're very similar to the pointer mechanisms you already know, but now they sit on top of these very contextualized representations at the end of the encoder. The model learns to either point to question words or context words based on the hidden states, or use a standard softmax, and then we take a weighted, convex sum of these three different distributions over output words. All right, I think these are mostly standard components whose details you've already seen, but are there any questions about how we put it together? Yeah?

"So the output has to be a word?"

That's right. The output has to be a word, and it's always either a word from the context, a word from the question, or a word from the softmax.

"The data preprocessing, I guess, is different for each task?"

The data preprocessing is different for each task, but we had to normalize everything to have the same tokenization and all of that.

"Do the double arrows in the encoding just represent that it's bidirectional?"

Yeah, the double arrows here are just bidirectional: left-to-right and right-to-left for the LSTMs. All right. So what datasets are we using? I mentioned that was a big headache in the beginning. We definitely wanted to include a lot of the sequence-to-sequence tasks that we felt are very high-level and immediately useful. In some ways, this also shows you that nowadays you don't have to work as much on some of the intermediate representations in NLP anymore. You can directly go for the end tasks that real users might care about, and have these end-to-end trainable systems that do quite well. I've myself worked a lot on parsing, so I don't want to say we don't need it; there are certainly still tasks you do need it for. But it's surprising that you can go directly to translation or summarization without intermediate representations that were very specifically hand-designed.
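The decoder's pointer switch can be summarized in one line: a convex combination of three distributions. A sketch, assuming the two pointer distributions have already been mapped onto the same output index space as the vocabulary softmax:

```python
def output_distribution(p_vocab, p_context, p_question, gammas):
    """Convex combination of the three generation mechanisms. gammas are
    the learned switch weights, assumed to sum to 1 (e.g. a softmax over
    three logits computed from the decoder state)."""
    g_v, g_c, g_q = gammas
    return g_v * p_vocab + g_c * p_context + g_q * p_question
```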
So we had those three really interesting and hard tasks: question answering, machine translation, summarization. They also happen to have the three biggest datasets of all of these. Then we had NLI, and basically all ten datasets were publicly available. In several cases, especially for translation, you could find much larger datasets, but we tried to keep the collection at a size where normal people, who don't work in gigantic companies with huge GPU infrastructures, could still run experiments themselves, so universities and individuals can still run it. If you have just a single GPU, it'll probably take about a week to run an experiment; with multiple GPUs on one large AWS machine, you can run an experiment in a day or two. So especially for translation, you could get a lot more data than IWSLT. And each of these communities, datasets, and tasks has its own metric. In the beginning we had a lot of discussion about how to define the measure of success for this project. It doesn't make sense to have a normalized F1 score for all the different tasks; we realized these communities have different metrics for a reason. Fortunately, all of these metrics run from 0 to 100 in theory. Of course, in practice you rarely ever see a translation system with a BLEU score of 100, or even in the high 90s, or these really, really high ROUGE scores, but in theory they go from 0 to 100. So we kept the different communities' evaluation metrics intact, and we just said we're going to sum them up. When we first talked about this, we had a lot of discussion with others: "Oh, but translation is so much more important, because it's much bigger and a much more useful task than your silly pronoun-resolution Winograd schemas, which only have a couple hundred training samples; you should have weighted translation more." And then literally five questions later somebody said, "Why didn't you weight pronoun resolution more? That is a really hard task that captures common sense reasoning and the complexity of language and semantics, unlike all this statistical pattern matching that you do in translation." And I said: you should go talk to that guy. [LAUGHTER] Hopefully, in the end, we'll all agree that it's reasonable to sum them up. Of course, when you run experiments on this, you also have to tackle a lot of the complexity in machine learning that very few people talk about, like very skewed distributions. You have translation with hundreds of thousands or millions of examples, and you have Winograd schemas with only a couple hundred. How do you train so that you don't completely ignore the smaller dataset? We'll get to some of the optimization trickery that Nitish spent several months on in a bit, but first I want to show you the first set of experiments. As you can see from all the numbers, we ran a lot of experiments to even get to this, so we'll walk through it quite carefully.
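Since the decaScore is nothing more than the sum of the ten per-task metrics, it is trivial to compute; the only design decision is keeping each community's own metric intact. A sketch:

```python
def deca_score(task_scores):
    """decaScore: the sum of the ten per-task metrics (nF1, BLEU, ROUGE,
    exact match, ...). Each metric ranges from 0 to 100 in theory, which
    is what makes the unweighted sum comparable across models."""
    assert len(task_scores) == 10, "one score per decathlon task"
    return sum(task_scores.values())
```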
I think you'll hopefully also get some ideas for ablations or experiments you might want to run in your final projects. So what are we looking at here? On the left side we have single-task performance: each number comes from a different model that was trained separately on just one task, and each column is the same architecture. On the right side, each column is the same architecture and the same exact model. So here we have four different models, and here we have forty different models, and each column again is the same architecture. The simplest, first column is just a standard sequence-to-sequence model with very few bells and whistles and some pointers, but nothing major. It's pretty deep: a stacked bidirectional LSTM with skip connections, all the standard, well-tuned stuff for sequence-to-sequence models. Then we added self-attention, basically the transformer layers. Then we have the coattention layer of outer products that we mentioned in the beginning, and then we also added the question pointer: the ability to point to a word in the question. All right, any questions about this table? Okay, we'll dig into the details first, and then maybe you can think of some questions. Let's analyze what's going on in this table, because there are a lot of numbers, and you really want to analyze them carefully. My first observation was: wow, we can have a single architecture. Even though this is not quite what we want (we want a single model), even this showed us that you can have a single architecture that does really well, and somewhat randomly, in some cases, it actually got state-of-the-art results. On WikiSQL, for instance, this architecture had the best model for translating natural-language English questions into SQL queries, which was a surprise to us, because it was the ninth dataset and really not a priority when we designed the model and thought about how to generate words and pointer mechanisms and so on. We just had the standard context of SQL words, we asked "What is the translation to SQL?", and somewhat surprisingly this particular architecture achieved the state of the art on SQL generation, and a bunch of folks in that community picked it up more quickly because it was state of the art. Unfortunately, it doesn't have that many other state-of-the-art numbers; this is actually a much harder task overall. What you also observe is that in several cases, using the multitask model, meaning a single model for all ten tasks, actually hurts performance at first. And this is something you rarely read in papers, because papers have a strong selection bias to only publish positive results. When you look at most transfer learning and multitask learning papers, there's a consideration outside the actual model: let's only combine tasks that we know will work well with one another, and if they don't work and hurt performance, we just exclude them from our experiments.
So you don't see many negative task results in the literature, though there are a few papers here and there that study the opposite side of transfer learning: catastrophic interference and catastrophic forgetting. Interference is when you train two different tasks in the same model and they interfere with one another and hurt each other's performance. Catastrophic forgetting is when you train continually: you first train on one task, then on a second task, and people used to think the first task would be completely forgotten and you'd just work well on the second. Somewhat surprisingly, we found that things aren't actually catastrophically forgotten in these models. It turns out that if you train them sequentially and then add back a little bit of the original first task, performance comes back very, very quickly. So while the performance looks really bad, you can get back to really good performance in very few iterations. That's one of many interesting tidbits we found in the course of this that we haven't even published yet. All right. Focusing on the transformer layers: we find transformers do help the original sequence-to-sequence model a lot. If you tune them carefully and combine them with bidirectional LSTMs and so on, they were very helpful and improved results across a bunch of different datasets, in some cases quite significantly. Another observation: question answering and semantic role labeling can predict each other's performance quite well. If one works well, the other works well, and vice versa. That's also interesting because both of those tasks have different questions for every training example. Pointing: the question pointer is super important. In some cases we got twice the performance, even, and this was surprising to us, for a simple classification task where you could just have a standard softmax. Instead of having a softmax over entailment, contradiction, and so on, you just point to the word "entailment" in the question. That was also the case for Winograd schemas, which benefited a lot from this pointer mechanism.

"Can you explain that? Why does it help so much?"

Sure. In some ways, I think it's partly that the whole architecture has gotten better and better at pointing. Part of the reason we do very poorly in translation, the only task that was hurt a lot in our first multitask experiments, is that it's the only task that has to generate results from a completely separate softmax, whereas the rest of the architecture got really, really good at pointing to things to answer questions, any kind of question. So that's one explanation, but I don't think it's all of it; I think we still need to figure out more about why this happens. All right. Now, multitask learning is most helpful when it comes to zero-shot, and I'm actually very excited about that.
This is zero-shot relation extraction, where you have different kinds of relations that you might want to extract, and you might never have seen, say, the student-teacher relationship you're trying to identify in a certain context, or a product-company relationship. That task benefited a lot: accuracy almost doubled when it was learned together with everything else. So these were questions it had never seen before, relations it had never seen before, and it got twice as good, benefiting especially from having seen other kinds of questions. In some ways, we have to give a lot of credit to SQuAD, because SQuAD as a dataset pushed people into thinking about pointers as a mechanism to generate answers. We take pointers as a given now and they don't get much credit, but they allow you to predict answers you've never seen at training time, to generate words you've never seen at training time, which is actually quite amazing. All right. The main observation, though, is that if you had an oracle that told you exactly which task you're currently in, and you perfectly separated these into ten different models (maybe all the same architecture, but still ten different models), you would still do slightly better than the first version of this multitask learning model. That is largely because we chose to include a bunch of different tasks that have nothing to do with one another, because we wanted the community to start thinking about tackling catastrophic interference. Right? If you learn a new language, or you learn how to understand social media on Twitter, you don't replace all the language in your brain. You have one brain, it keeps getting smarter, you keep learning new skills, even when those new skills are very, very different from your old ones. So in some ways we may have made our lives too hard, and now we're thinking: okay, if you want to publish a nicer paper on multitask learning, you just look at all the tasks that do help each other, form groups of tasks, and then you can very quickly publish some nice state-of-the-art papers. But here we're still quite significantly away, in the decaScore, from matching ten different models with a single model. Now, this is of course an oracle score (that's why we put it in parentheses), because you don't actually have this oracle. In some cases it's quite easy to build an almost perfect task classifier: separating "What is the summary?" from "What is the translation from English to German?" based on the question can be done with almost 100 percent accuracy. But SQuAD question answering, zero-shot relation extraction, and semantic role labeling cast as question answering are easily confused in terms of how to generate the answers, and you wouldn't quite know to which model to route them. So in some sense this is theoretical. All right. Now, I mentioned that we have this complexity in the optimization strategy, and this is one of the many problems that don't get much coverage.
When you have a very imbalanced or skewed dataset, it's easy to lose track and basically overpower the tasks with smaller datasets. We actually tried a ton of different training strategies, but in the end this fully joint one worked quite well. But I promised to wait for questions on this table first. Any questions on these results so far? Yeah?

"Since you mentioned that with an oracle telling you which task it is, you'd do better with ten different models: did you try training a model on data labeled with which task each example is?"

We did. And it confused SQuAD and the other two types of problems that were also cast as question answering; a lot of the others it could classify almost perfectly. But as soon as you try to build the whole system that way and get a decaScore: if your classifier is even 90 percent accurate, you basically multiply the score by 0.9, and you get dinged so hard that it's not competitive anymore. So it is actually hard to build that whole system by adding if-then-else statements to make it a single system. Yeah?

"Have you tried telling the model what kind of task it's doing, just giving it an indicator of the kind of task?"

In some ways we did in this case, because we only trained each model separately on its task.

"[inaudible]"

Only through the question, yeah.

"Because I was thinking that maybe it's not that important for the model to figure out what we want it to do; in a practical application, we could just tell it what we want it to do right now."

In some cases you could tell it. So the question is: even in the multitask setting, you could have an extra token saying "Now you're doing summarization" as another input. Whether you have a summarization token or you ask "What is the summary?", I don't think it makes that big of a difference. It's just that now you can query this model in natural language, rather than having to know a special token to query the model. And we'll see in a couple of slides that the model is not confused when it comes to how to generate the answers. For every task, it knows very clearly how to generate the words to get to a reasonably accurate answer.

"In the [inaudible], does the model see all of the data, or does it only include [inaudible]?"

Oh, great question. How do we train the single-task models? They're only trained on that one dataset. So the SQuAD number here is a single model that has only seen SQuAD training data.

"On your point about the pointer: is pointing to the question generally more helpful than [inaudible]?"

Somewhat surprisingly, yes, even in the case here. This is MultiNLI, and if you just have the standard sequence-to-sequence model, it generates that label with a softmax too, so in that sense it's quite similar.
But yeah, it was actually better able to just point, which led us for a while into thinking: maybe we should have a project where we just point to all the things and get rid of softmax classifiers forever. The problem is when you then try to do translation too: okay, what do you point to? You'd have to pre-train and do some alignment, and it gets very large, and you point to a lot of different candidates; you may have tens of thousands of potential candidates. So we discarded it as a single unifying model for all the things, but for a lot of these tasks you could actually point, and I think it's another interesting side project that could spawn from this. Yeah?

"Just a quick question: how sensitive [inaudible] the individual components [inaudible] when you slightly perturbed their relative weights in the loss function?"

So the question is how sensitive the tasks were if we added weights to the different tasks. In the optimization we did a lot of trickery on how to train it, but we never said this task only matters, say, 0.5. So we didn't do that analysis. Yeah?

"Co-attention seems to be a burden a little bit."

In some cases, yeah.

"Is it [inaudible] co-attention in order but no co-attention, or is it more like, 'Oh, you already saw the test data, so you can't use these'?"

I mean, these are all dev sets. But you could definitely do even more architecture engineering. In fact, there's this whole field, which I don't think you've gotten to yet: neural architecture search. You can combine it with reinforcement learning, where the action space for the reinforcement learning agent is choosing among a couple of different neural-net modules. Maybe you want a CNN layer, then a memory layer, then an LSTM layer, maybe bidirectional, and you let a reinforcement learning agent figure out all of these decisions. I think it would be phenomenal to apply neural architecture search not to what's usually done, which is image classification we already know how to do, done slightly better with NAS, but to finding a single architecture for multitask learning, which we don't know. The problem, of course, is that getting to all these numbers already took a lot of compute time and a lot of fiddling around. I can only give you an idea of how often we'd say, "Oh man, we got this really amazing result on this task, but it needed this particular learning rate." And it turns out the same model, same set of hyperparameters, everything, but this other task needed a much higher learning rate to get to good performance. Now you try to combine just those two tasks: how do you choose your learning rate? If you choose the learning rate from the bigger task, the smaller task just doesn't work well at all, because it needed the higher learning rate. If you use the higher learning rate that the smaller task with the smaller dataset did well on, the large one just overfits and doesn't work well either. If you try the average, neither of the two works.
There's a lot of complexity in trying to do multitask learning; that's why it's such an interesting research challenge. All right, any more questions about this first set of results? They will get better; we've had some ideas on how to improve them. All right. So how did we actually train this whole thing? We tried a lot of different approaches, but in the end this very simple, fully joint training strategy worked the best: you take a mini-batch from each of the different tasks and train on that mini-batch from that task, going through all ten tasks round-robin. Now, it turns out that does not work quite as well as another training strategy. If you look into optimization strategies for neural nets, there are a couple of papers on so-called curriculum learning, where the idea is that you start training your model on simple instances of your problems. In translation, for instance, you start training with very short sentences and then move to longer and longer sentences. It turns out that for multitask learning, you actually want to do the opposite: anti-curriculum learning. You start with the hardest tasks, iterate on those for a while, and add the simple tasks later. To some degree this is intuitive: when you train this gigantic, powerful model on a very simple task like sentiment, where you just need to classify everything as positive or negative, you train all of these weights and arrive at local optima that are quite deep and very specific to generating those two words. If you then try to get out of that local optimum for the very simple task and generate all these other kinds of words, and point to words it's never seen before, as in SQuAD, it's very, very hard to come out of that local optimum. That's my intuition for why it makes more sense to say: let's start with SQuAD and machine translation and a couple of these harder tasks, and make the model very general-purpose. It has to generate a lot of different things, German words from a softmax; it has to point to all kinds of different words and be able to parse all kinds of different Wikipedia paragraphs. You do that for a while, and once you've finished this pre-training stage, this anti-curriculum, you move on and add the simpler, smaller tasks. With that relatively simple change, which did take us a lot of experiments to get to, we went closer to closing the gap, and now we're only about 14 away. But there's still a big gap, and the biggest nuisance and issue we had was with translation. If you look at all of these, most things are roughly similar or get slightly better; it's sort of a toss-up. But translation was really bad: almost only half the performance in the multitask learning setup. Part of that is because translation was the only task with a very large softmax vocabulary of words that appeared in no other task.
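The two schedules just described (fully joint round-robin, and an anti-curriculum warm-up on the hard tasks first) fit in a few lines. A sketch, with `train_step` left as a placeholder for the usual forward/backward/update:

```python
import itertools

def fully_joint_training(task_batches, model, steps, train_step):
    """Fully joint strategy: round-robin over tasks, one mini-batch from
    each task in turn. task_batches maps a task name to an (endless)
    iterator of mini-batches."""
    tasks = itertools.cycle(task_batches.items())
    for _ in range(steps):
        name, batches = next(tasks)
        train_step(model, name, next(batches))

def anti_curriculum_training(task_batches, hard_tasks, model,
                             warmup_steps, steps, train_step):
    """Anti-curriculum: iterate on the hardest tasks first (e.g. SQuAD
    and machine translation), then round-robin over all ten tasks.
    Oversampling translation in every round, described next, is a
    further refinement on top of this."""
    hard = {t: task_batches[t] for t in hard_tasks}
    fully_joint_training(hard, model, warmup_steps, train_step)
    fully_joint_training(task_batches, model, steps, train_step)
```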
Most of the other tasks were doing really well with pointing. My interpretation was that the intermediate layers, all these representations we learned with bidirectional LSTMs and transformers, got really, really good at being pointed to, at creating hidden representations that the answer module can point to very accurately. And then you have this one task that points to almost nothing and basically just generates other words from a different vocabulary, so those hidden representations became less useful for that task. That was one of the insights, and it led to one of the ways of trying to improve this. Now, one interesting issue we had: when we improved the single multitask model for all ten tasks, a lot of the time we said, well, now we also have to go back and run ten more experiments on all the single tasks to have a proper comparison, right? Because if you tune the thing you care about, and you stop tuning the thing you want to show you can beat, that's not fair. You always want to give as much TLC, focus, and experiment time to your baselines. In some cases we improved something, but it improved both the ten separate models and our model, and in some cases the ten separate models improved even more, so the gap got larger. That's the opposite of what we wanted to show, but in general it's better for the architecture overall. So basically, we started with fully joint training and this set of single models whose scores you could, in theory, with some oracle, just sum up to get a decaScore. The gap started at 23. Then we did the anti-curriculum training, which lowered the gap to 15; we were kind of excited, making good progress. Then we switched from GloVe to CoVe, the contextual vectors, which actually increased the gap again: everything got better, but the ten separate models got even better than the single model doing all ten tasks. So the gap got bigger, but everybody's performance increased; it was still overall a good thing. Then, especially because of the machine translation issue, we figured we shouldn't just pre-train on SQuAD; we should also include machine translation in the pre-training at the beginning, so the model doesn't just learn to point. That helped us reduce the gap between the ten separate oracle models and the single model to about five points. Then we said: okay, translation is still not that good, let's keep oversampling. Every time we go through one of these round-robin mini-batch sets, we always include machine translation. That allowed us to reduce the gap to just a single point. So we started several months ago at 586, and the oracle with ten different models, if you sum them up, gets 618. With better contextual vectors, tuning, and a lot more translation (translation is still not as good as we would like, but several of the other tasks benefited a bunch), we're now basically one decaScore point away from having a single model that does as well as ten different ones.
You could run even more experiments. In some ways you could burn millions of dollars in AWS costs here, because most of the time we kept the hyperparameters of these different models the same. For each of these, you could also say: maybe this multitask model needs 50 more layers, or 19 more layers, or five more layers, and maybe the hidden dimensions should be 1,000 wider. You could run a lot more experiments; hopefully the community will eventually do that jointly, and then we can move toward that. But we figured, okay, we're pretty close, so we moved on to some other things, which maybe I'll tell you about next year. [LAUGHTER] But let's do some analysis of what happened in this project, and this is something I would encourage you all to do as well. You can chase the numbers for a while, but in some ways you should always be skeptical about your evaluations. We've seen the NLP community basically just optimize BLEU scores for translation for years, and then somebody came out with a paper saying that BLEU metrics and human evaluations of how good a translation is aren't actually that correlated. And you go: ah, that sucks; we just spent years of our lives tuning that metric and publishing a bunch of papers. All of these metrics have flaws. ROUGE scores for summarization, a super subjective kind of task, for instance. And in summarization, when you analyze the errors, you often realize that word vectors have problems too. The word vectors for Jason, John, and Jeremy are all kind of the same, right? They have similar distributions, similar context windows, and so on, so word vectors of names are very similar. In summarization errors you then realize: oh, this news article talked about Jeremy being kidnapped, but the summary said Jason was kidnapped. In the evaluation metric, that's just one word off and all the rest is correct, but it's a pretty important word. So word vectors have issues for summarization that are pretty fundamental, and I don't think anybody's tackling them really well right now. All of these metrics have issues, but I would argue that combining the ten actually makes them less problematic and more meaningful than looking at each one separately, because now you can't exploit the idiosyncrasies of one particular evaluation metric just to push your score a little higher. If you tune with only that particular metric in mind, it will hurt some of the other tasks, and you won't get to the general NLP model that much more easily. All right. So now let's do some analysis of this model, and this comes back to one of the questions that was asked: is this model able to generate the right words for the right tasks? Here we looked at the distributions of how often the model generated words with each of the three mechanisms: softmax vocabulary, context pointers, or question pointers. As you can see, in the majority of cases it knows exactly how to generate.
For question answering and semantic role labeling, for SQuAD and WikiSQL and summarization, it basically uses the context pointer: it just points into the context document. We know for SQuAD that's essentially how the dataset was generated, so that's the only thing that really makes sense. What's kind of cool is that in some cases, like summarization, it sometimes creates new words that weren't in the context document it pointed to. For zero-shot relation extraction, it also sometimes uses the external vocabulary and in some cases the context pointer. So for the most part, this model is not confused about how to execute a task given the question formalism, rather than a format that says "this is the task, just do this particular task." Now, you might argue: okay, I'm not that impressed by the performance being roughly the same with one model versus ten separate models, even though it's nice if you want to deploy it; it uses less RAM and all that, being one-tenth the size, assuming the individual models are the same size. But what I'm excited about is the next couple of results: the transfer learning, domain adaptation, and zero-shot capabilities. Here we chose two datasets that weren't included in the original ten, and we trained a pre-trained model on them versus a random model. Random here means the same architecture with random initialization, and pre-trained means the entirety of the model was pre-trained: all the encoders, including the decoder, the softmax, everything. The two other tasks were another IWSLT language pair, namely translating from English to Czech, and a named entity recognition task that you all know very well. Basically, what we found is that the pre-trained model converges much more quickly in the beginning, and then there's still a significant but not gigantic gap. So pre-training on these completely separate kinds of tasks helped, and I think that's pretty exciting, especially the quicker convergence: learning more quickly on whatever new task you come up with, which also means in some cases you can get away with less training data on these new tasks. Now, domain adaptation is the simpler form of transfer learning, where you basically just have a different distribution for your words. We mentioned we have the Stanford Sentiment Treebank for sentiment analysis; we evaluated on different sentiment datasets, namely Amazon product reviews and Yelp restaurant reviews, and out of the box, without any training, the model got 80% accuracy on both of those datasets. For practitioners, that's pretty exciting, because you didn't have to train anything; it just worked out of the box: download it from GitHub and run it. SNLI was slightly different; it didn't work quite as well. It's another natural language inference dataset, but it has a very different distribution, different kinds of domains that the entailment questions are asked over. Here, out of the box, it achieved 62.
But once you fine-tuned it, similar to these experiments here, and continued to train on this dataset, it quickly converged to 87, which was still a two percent gain over a randomly initialized MQAN model. Yeah?

"In that experiment, did you evaluate how much less data you can get away with?"

We didn't. In some ways, whenever you run that experiment, you still don't do as well; all these models will still do better with more training data. So it would be a fuzzy kind of result, where you'd say, well, with one-tenth of the data we might get to 50 and the other model might only get to 40, something like that. I don't have those numbers; it would actually be a neat analysis to do. Yeah?

"So if you wanted to train on a new task [inaudible]?"

So, do we have the code to train on a new task? Yes, we do. You can just edit your data into this format, with a context, a question, and an answer, a simple CSV-type format, and then you add it. You can both train the pre-trained model yourself or download a pre-trained model and just add your task. I'll look it up, yeah.

"Do you know how this compares to using other kinds of pre-trained representations like, say, BERT?"

Great question. In some ways, people say BERT is this model that does everything, but when you actually read the paper, you realize it's a separate model for these different tasks, right? If you want a classification task, you have a little token in the beginning and a different top layer. If you want a sequence labeling task, you have a different top layer. If you want a sequence extraction task, you have a different top layer. So BERT isn't actually a single model for all of these different tasks. And then, across all the results, there's a lot of extra tuning for each of the datasets and tasks: a different learning rate for this task, different sizes of BERT, and so on. We were also super excited; we thought maybe this is it, we'll just run everything on BERT. Then we looked into all the details, and the more we dug through them, the less excited we became about this being the answer, because it is not a single model. In some ways, it's probably better for pre-training: instead of CoVe, you could have BERT at the very beginning, and my hunch is everything would get slightly better, but you still need a lot of the other modeling architecture on top of it. And the sad thing is that to really get the state-of-the-art results, there's a lot of very task-specific tuning of those last top layers; if you try to unify that task-specific tuning, you lose a lot of BERT's good performance. So unfortunately it's not quite "just use BERT and you'll have state-of-the-art numbers on all the things." I could probably talk about it a lot more, but I think it still makes sense to think about some of the ideas from BERT, like adding language modeling as one of the tasks.
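For adding your own task, here is a hypothetical sketch of the simple tabular format described above. The field names and file layout are assumptions for illustration, so check the loader in the repository at decanlp.com for the exact format it expects:

```python
import csv

# One (context, question, answer) record per row, tab-separated.
with open("my_task_train.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["context", "question", "answer"])
    writer.writerow(["The hotel was spotless but the staff was rude.",
                     "Is this review positive or negative?",
                     "negative"])
```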
That would very likely be the task that helps the most for all the other tasks, and we should include it. It would also be nice to have a faster model right now. Language modeling at very large scale is hard; it benefits even more from billions and billions of words, and it's hard to train the MQAN model, this question answering model with the coattention mechanism between the question and an increasingly large context. You'd have to split it up somehow. BERT also works reasonably well only up to, I think, at most 500 or so words, and if you wanted to do summarization, you'd basically have to cut the original document to 500 words and then try to summarize that. So there are a lot of devils in the details that they didn't have to figure out, because they said: we'll treat it like word vectors; we take them in, and then we do a lot of other task-specific stuff with those word vectors, or with the BERT architecture. BERT is obviously amazing, and we are looking into using ideas from it, but unfortunately it wasn't a silver bullet for multitask learning. Mm-hmm?

"In the pre-training process, did you consider prioritized sampling based on how much loss there is?"

So, did we consider prioritizing the sampling? In some ways, with this pre-training strategy, that's kind of what we did, by focusing on these really hard tasks. A lot of the gap in the end was closed by saving four of the tasks for the very end, until after we'd gone through oversampling all of these really hard tasks. In the last ten minutes: I've saved the most exciting thing for last, though I think you could also do a lot more work in this direction. I mentioned the question pointer and zero-shot learning in the beginning, and we played around with that a little and found that in some cases it kind of magically works. Here we tried the sentence "John had a party, but no one came, and he was all alone," and then we asked, "Is this story sad or happy?" While the model could have generated some random German words, or some random SQL words, or whatever, it actually pointed, of all the words it could have pointed to in the context or the question, to "sad", which is pretty cool. It's just one small example, and you could do a lot more. You could try to come up with a very large zero-shot classification dataset, which is actually kind of hard too; you have to be quite creative, and it's not like you can just take all these reviews and label them positive or negative. So I think we need to do more work in that direction. Somebody will hopefully create a zero-shot task dataset that is not just zero-shot for new distributions, but for completely different outputs. We tried a couple, and it doesn't always work; you can be adversarial about it. This example looks most similar to "Is this sentence positive or negative?"
That was the formalism we had for sentiment analysis, so if you make the question more and more different, eventually it'll get tripped up. It's clear that it benefited from the word vectors, "sad" being close to "negative", and from understanding, through all these correlations and deep representations, that there are other sad words in this context, or whatever it is. So it was able to point to this, but you can be adversarial, and it doesn't always work. Even so, the fact that it did zero-shot classification based on word vectors, for new kinds of questions, was very exciting to me personally. We tried a couple of other things, like "Bryan gave a talk and nobody clapped. Was Bryan happy or sad?", and it also got that right. So there were a couple of examples where at least the happy-or-sad question worked, and a couple of other adjective questions that we tried. What I would be most excited about is eventually having a zero-shot classification task that combines the different tasks too. Unfortunately, there's no dataset for that, so we didn't train on it, and so it doesn't happen with the model. But in theory, if you can summarize and you can translate from English into German, why couldn't you ask the model for a German summary? If that eventually worked, it would be even more amazing, but it doesn't work right now, because we never ask it these compositional task questions. That's yet another interesting line of research that could spawn from this. All right. So I hope I could show you that this decaNLP framework is an interesting new benchmark for generalized NLP. I do think it's a reasonably good framework for tackling a bunch of the really hard questions in the field: more general language understanding and question answering, of course; multitask learning; domain adaptation, which we analyzed a little with the sentiment datasets and SNLI versus MultiNLI; transfer learning; and weight sharing. I think it's clear everybody loves weight sharing; you want to share as many weights as possible. It started with word vectors, and ELMo, CoVe, and now BERT share more and more, deeper and deeper layers. It would be great if we could unify that last bit too, share basically the entirety of the networks, and then eventually, hopefully, get to zero-shot learning. Now, there's a bunch of related work; the original paper has over 100 citations in it to other lines of work, but these are at least some of the models and papers that influenced us the most in our thinking and modeling. One of them actually comes from the two instructors of this class. So hopefully we can think about what's next after all this architecture engineering, and I think one potential answer is a single multitask model for more generalized NLP. All right. Thank you. [APPLAUSE]
Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 14: Transformers and Self-Attention
Okay. So I'm delighted to introduce our first set of invited speakers, and we're going to have two invited speakers today. Starting off, we have Ashish Vaswani, who's going to be talking about self-attention for generative models, and in particular will introduce some of the work on transformers that he is well known for, along with his colleagues. And then, as a sort of special addition, we're also going to have Anna Huang talking about some applications of this work. There are actually at least a couple of people in the class who are interested in music applications, so this will be your one chance in the course to see music applications of deep learning. Okay, so I'll hand it over to Ashish.

Thanks, Chris, and thanks, Abi. Anna is actually here to make the class less dull, so [LAUGHTER] she's the highlight of this one. So hi, everyone. Excited to be here. This is a very large class; first invited speaker, no pressure, so hopefully this will all go well. So yes, the talk is going to be about self-attention, and the purpose is not going to be just to talk about a particular model. As empiricists, and I'm an empiricist who consumes machine learning to apply it to various tasks, the starting point is always to ask this question: what is the structure in my dataset, what are the symmetries in my dataset, and is there a model that has the inductive biases to capture the properties that exist in my dataset? Hopefully, over the course of this lecture, Anna and I will convince you that self-attention indeed has modeling abilities and inductive biases that could be useful for the problems you care about. This talk is going to be about learning representations, primarily of variable-length data; we'll have images too, but most of it is going to be variable-length data. All of us care about this problem because deep learning is all about representation learning, and building the right tools for learning representations is an important factor in achieving empirical success. Now, the models of choice, the primary workhorse up to this point, and perhaps even now, have been recurrent neural networks. How many people here are familiar with RNNs? [LAUGHTER] Okay. So definitely up to this point, the primary workhorse has been recurrent neural networks, along with gated variants that explicitly add multiplicative interactions, like LSTMs, which also have mechanisms that allow for better gradient flow, and some more recent simplifications like gated recurrent units. They dominate this recurrent landscape. And typically, how do recurrent neural networks learn or produce representations? They consume a string or a sentence, even an image if you imagine it that way, sequentially, and at each position, each timestep, they produce a continuous representation that is a summarization of everything they've crunched through so far.
Now, in the realm of large data, having parallel models is quite beneficial. In fact, I was recently reading Oliver Selfridge; he was a professor at MIT, and he wrote a precursor to deep nets called Pandemonium, which I'd recommend everybody read. He has this fascinating note to the effect that if you give him more parallel computation, he'll just add more data; parallelism lets you consume more data. And recurrence, just by construction, limits parallelization, because you have to wait until a particular time point before you can produce its representation. (If there are any questions, please raise your hands; I'll look around and try to attend to them.) And again, because we're producing these summarizing representations, if you want to pass information along, say co-reference information, you have to shove all of it inside one fixed-size vector, so it can be difficult to model. And while RNNs have been successful in language, the architecture doesn't have a very clear, explicit way to model hierarchy, which is something very important in language. Now, there has been excellent work, a precursor to self-attention, that surmounted some of these difficulties: convolutional sequence models, where you have limited-receptive-field convolutions that consume the sentence not sequentially but in depth, and produce representations of your variable-length sequences. They're trivial to parallelize, because you can apply the convolutions simultaneously at every position; each layer is trivial to parallelize, and the serial dependencies are only in the number of layers. You get local dependencies efficiently, because a single application of a convolution consumes all the information inside its local receptive field. Now, if you want really long-distance interactions, you don't have to pass through a linear number of timesteps, but because the receptive fields are local, you might still need a number of layers that is linear in the length, or logarithmic if you're doing something like dilated convolutions. So the number of layers needed is still a function of the length of your string; the back-of-the-envelope sketch below makes this concrete. But these were a great development and pushed a lot of research; WaveNet, for example, is a classic success story of convolutional sequence models, as is ByteNet. Now, so far, attention has been one of the most important components: a content-based memory-retrieval mechanism. It's content-based because your decoder attends to all this content, your encoder, and decides what information to absorb based on how similar that content is to every position in the memory. This has been a very critical mechanism in neural machine translation.
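As a hedged, back-of-the-envelope illustration of the path-length point above (not from the lecture): roughly how many sequential steps or layers are needed before two positions a distance n apart can interact, under each architecture. The formulas are approximations; a stack of kernel-size-k convolutions grows its receptive field by roughly k-1 per layer, while dilated convolutions grow it multiplicatively.

```python
import math

def steps_to_connect(n, kernel=3):
    rnn = n                                   # one timestep at a time
    conv = math.ceil(n / (kernel - 1))        # receptive field grows ~linearly with depth
    dilated = math.ceil(math.log(n, kernel))  # receptive field grows exponentially with depth
    attention = 1                             # every pair of positions interacts in one layer
    return rnn, conv, dilated, attention

print(steps_to_connect(1024))  # (1024, 512, 7, 1)
```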
So the question we asked was: why not just use attention for representations? Here's a rough picture of what this representation mechanism looks like; I'm just repeating what attention essentially is. Imagine you want to re-represent the word "representing", to construct its new representation. First you attend: you compare your content, which in the beginning could just be a word embedding, with all the other words' embeddings, and based on these compatibilities, these comparisons, you produce a weighted combination of your entire neighborhood, and with that weighted combination you summarize all that information. It's like re-expressing yourself in terms of a weighted combination of your entire neighborhood. That's what attention does, and you can add feed-forward layers to compute new features on top. Now, the first part of this talk is going to be about how some of the properties of self-attention actually help in text generation, which inductive biases are useful, and we showed empirically that they do move the needle in text generation. This is going to be about machine translation, but there was other work too that we'll talk about later. [NOISE] Now, with this attention mechanism you get a constant path length: any position can interact with every other position simultaneously, hopefully provided the number of positions is not too large. Attention, just by virtue of its construction, has a softmax and gating or multiplicative interactions. I'm not going to be able to fully explain why this matters, but it's interesting: you've seen models like PixelCNN that, when modeling images, explicitly had to add multiplicative interactions inside the model to beat RNNs, and attention gets this by construction, because you're multiplying the attention probabilities with your activations. It's trivial to parallelize; why? Because you can do attention with matmuls, especially the variant we use in our work. So now the question is: convolutional sequence models have been very successful in generative tasks for text; can we achieve the same with attention as our primary workhorse for representation learning? Just to add some context: up to the transformer, there had been a lot of great work on using self-attention, primarily for classification, and there was work on self-attention within the confines of recurrent neural networks. Perhaps the closest to us is the memory networks work by Weston and Sukhbaatar, where they had a version of recurrent attention, but they didn't show it empirically to work on conditional modeling like translation, and their mechanism used a fixed query at every step.
So it left something to be desired; there was still the question of whether this would actually work on large-scale machine translation or large-scale text generation systems. This is the culmination of our self-attention work, which we put together in the transformer model. So what does it look like? We're going to use attention primarily for computing representations of your input. Imagine you're doing English-to-German translation. You have your words, and notice that attention is permutation invariant: if you change the order of your positions, the order of your words, it's not going to affect the actual output. So in order to maintain order, we add position representations. There are two kinds we tried in the paper: these fantastic sinusoids that Noam Shazeer invented, and learned position representations, which are very plain vanilla; both work equally well (a sketch of the sinusoidal version follows below). So the encoder looks as follows: we have a self-attention layer that recomputes the representation for every position simultaneously using attention, then a feed-forward layer. We also have residual connections, and I'll give you a glimpse later of what these residual connections might be bringing: between every layer and the input we have a skip connection that just adds the activations. Then this tuple of self-attention layer and feed-forward layer essentially repeats. On the decoder side, we have a fairly standard encoder-decoder architecture. On the decoder side we mimic a language model using self-attention, and the way to do that is to impose causality by masking out the positions you can look at. So the first position can't look forward; it's illegal to look forward. It can look at itself, because we actually shift the input, so it's not copying. It's kind of surprising: with these models it's very easy to copy, when early on it was hard to even do copying with recurrent models. Now at least you can copy really well, which I think is a positive sign overall. So on the decoder side we have this causal self-attention layer, followed by encoder-decoder attention, where we attend to the last layer of the encoder, and a feed-forward layer; this triple repeats a few times, and at the end we have the standard cross-entropy loss. Now, staring at the particular variant of the attention mechanism that we use: we went for both simplicity and speed. So how do you actually compute attention? Imagine you want to re-represent the position e2. We're going to first linearly transform it into a query, and then linearly transform every position in your neighborhood, or, since this is the encoder side, every position in the input, into a key. These linear transformations can be thought of as features, and I'll talk more about that later; it's basically a bilinear form.
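As a quick aside before continuing with the attention computation: a minimal NumPy sketch of the sinusoidal position encodings mentioned above, following the published transformer formulas; variable names are my own, and d_model is assumed even.

```python
import numpy as np

def sinusoidal_positions(max_len, d_model):
    pos = np.arange(max_len)[:, None]                  # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)  # each dimension pair oscillates at a different speed
    enc = np.zeros((max_len, d_model))
    enc[:, 0::2] = np.sin(angles)                      # even dims: sine
    enc[:, 1::2] = np.cos(angles)                      # odd dims: cosine
    return enc                                         # added to the word embeddings at the input

print(sinusoidal_positions(50, 512).shape)  # (50, 512)
```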
You're projecting these vectors into a space where just a dot product is a good proxy for similarity. Okay? So now you have your logits; you do a softmax to compute a convex combination, and based on this convex combination you re-express e2 in terms of the vectors at all these positions. Before taking the convex combination, we do another linear transformation to produce values, and then a second linear transformation just to mix this information, and pass it through a feed-forward layer. All of this can be expressed in basically two matrix multiplications, and the square-root factor is just to make sure the dot products don't blow up; it's just a scaling factor. And why is this particular mechanism attractive? Well, it's just really fast: you can do it very quickly on a GPU, simultaneously for all positions, with just two matmuls and a softmax. On the decoder side it's exactly the same, except we impose causality by adding -1e9 to the logits, so you get essentially zero probabilities on those positions; we impose causality by adding these highly negative values to the attention logits. Is everything... [LAUGHTER] I thought that was a question. So, [LAUGHTER] okay, attention is cheap: because this variant of attention just involves two matrix multiplications, it's quadratic in the length of your sequence. Now, what's the computational profile of RNNs or convolutions? They're quadratic in the dimension, because you can basically think of a convolution as flattening your input and applying a linear transformation on top of it. So when does this become very attractive? When your dimension is much larger than your length, which is the case for machine translation. We'll talk about cases where this is not true and we have to make other model developments, but for short sequences, or sequences where your dimension dominates your length, attention has a very favorable computational profile; as you can see, it's about four times faster than an RNN, and faster than a convolutional model with a filter width of three. There's a minimal sketch of this attention computation below.
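Here is a minimal NumPy sketch of the scaled dot-product attention just described, including the causal mask implemented by adding a large negative value to the logits. The weight matrices are stand-ins for the learned query/key/value transformations.

```python
import numpy as np

def scaled_dot_product_attention(X, W_q, W_k, W_v, causal=False):
    Q, K, V = X @ W_q, X @ W_k, X @ W_v           # (L, d_k) each: queries, keys, values
    logits = Q @ K.T / np.sqrt(K.shape[-1])       # scale so the dot products don't blow up
    if causal:
        L = logits.shape[0]
        future = np.triu(np.ones((L, L)), k=1)    # positions to the right of the diagonal
        logits = logits - 1e9 * future            # ~zero probability on future positions
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs = probs / probs.sum(-1, keepdims=True)  # softmax over the memory
    return probs @ V                              # convex combination of the values

X = np.random.randn(7, 64)                        # 7 positions, dimension 64
W = [np.random.randn(64, 64) for _ in range(3)]
print(scaled_dot_product_attention(X, *W, causal=True).shape)  # (7, 64)
```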
So there's still one problem. In language, we typically want to know who did what to whom. Now imagine you applied a convolutional filter: because a convolution has different linear transformations at different relative distances, one linear transformation can learn the concept of "who" and pick out one kind of information from the embedding of the word "I", the red linear transformation can pick out different information from "kicked", and the blue linear transformation can pick out different information again from "ball". With a single attention layer this is difficult, because it's just a convex combination with the same linear transformation everywhere; all that's available to you is mixing proportions, so you can't pick out different pieces of information from different places. Well, what if we had one attention layer for "who"? You can think of an attention layer as something like a feature detector: because it carries a linear transformation with it, it might project into a space that cares about syntax, or a space that cares about who or what. Then we can have another attention head for "did what", and another attention head for "to whom". All of this can be done in parallel, and that's exactly what we do. For efficiency, instead of having these heads operate in a large space, we reduce the dimensionality of all the heads and run these attention layers in parallel, sort of bridging the gap. Now, here's a little quiz: is there a configuration of heads with which you can exactly simulate a convolution, probably with more parameters? I think there should be a simple way to show that if you had more heads, or heads that are a function of position, you could probably simulate a convolution, although with a lot of parameters; so in the limit, it can simulate a convolution. And we continue to enjoy the benefits of parallelism. We did increase the number of softmaxes, because each head carries a softmax with it, but the amount of FLOPs didn't change, because instead of having these heads operate in very large dimensions, they operate in very small dimensions. So, when we applied this to machine translation, we were able to dramatically outperform previous results on English-German and English-French translation. We had a pretty standard setup: 32,000-word vocabularies, WordPiece encodings, WMT 2014 as the test set and 2013 as the dev set. Some of these results were much stronger than even previous ensemble models, and on English-French we also had some very favorable results; we achieved state of the art. Now, stepping back a bit, I'm not claiming we arrived at an architecture with better expressivity than an LSTM; there are theorems saying that LSTMs can model any function. Perhaps all we did was build an architecture that was good for SGD, because stochastic gradient descent could train this architecture really well; the gradient dynamics of attention are very simple, since attention is just a linear combination. And I think that's actually favorable. But I'd also like to point out that we do explicitly model all pairwise connections, which has the advantage of modeling relationships very clearly, directly between any two words. And hopefully we'll be able to show that there are other inductive biases at work too.
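A compact sketch of the multi-head attention described above: h heads run in parallel in reduced dimensions d_model/h, so total FLOPs stay roughly flat, and a final linear layer mixes the concatenated heads. Random weights are placeholders for learned parameters; real implementations batch all the heads into single matmuls rather than looping.

```python
import numpy as np

def attention(X, W_q, W_k, W_v):
    logits = (X @ W_q) @ (X @ W_k).T / np.sqrt(W_k.shape[1])
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs = probs / probs.sum(-1, keepdims=True)
    return probs @ (X @ W_v)

def multi_head_attention(X, heads, d_model, rng=np.random.default_rng(0)):
    d_head = d_model // heads                    # shrink each head so total FLOPs stay flat
    outs = [attention(X, *(rng.normal(size=(d_model, d_head)) for _ in range(3)))
            for _ in range(heads)]               # each head gets its own q/k/v projections
    W_o = rng.normal(size=(d_model, d_model))    # final linear mixes the heads' outputs
    return np.concatenate(outs, axis=-1) @ W_o

print(multi_head_attention(np.random.randn(7, 64), heads=8, d_model=64).shape)  # (7, 64)
```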
So it's not just about building architectures whose only inductive bias is being good for SGD. As for frameworks: a lot of our work was initially pushed out in tensor2tensor; that might change in the future with the arrival of JAX. There's another framework from Amazon called Sockeye, and there's also Fairseq, the convolutional sequence-to-sequence toolkit from Facebook; I'm actually not sure if it has a transformer implementation, but they have some really good sequence-to-sequence models as well. Okay. So: the importance of residuals. We have these residual connections between every pair of layers, and it's interesting. What we do is add the position information at the input to the model; we don't inject position information at every layer. So when we severed these residual connections and stared at the attention distributions, we got the center map here, the middle one: it's been unable to pick out the diagonal. It should have a very strong diagonal focus; this is the encoder-decoder attention, which typically ends up being diagonal. What had happened was that the residuals were carrying this position information to every layer, and because the subsequent layers had no notion of position, they were finding it hard to attend. So then we continued to sever the residuals, but we injected position information back in at every layer. We didn't recover the accuracy, but we did get some of this diagonal focus back. So the residuals are doing more, but they are certainly pumping position information through the model. Okay. So now we've seen that being able to model both long- and short-distance relationships with attention is beneficial for text generation. What kinds of inductive biases, what kinds of phenomena, appear in images? Something we constantly see in images and music is this notion of repeating structure, pieces that are very similar to each other: motifs that repeat at different scales. For example, here's an artificial but beautiful example of self-similarity, a Van Gogh painting where this texture, these little objects, just repeat: different pieces of the image are very similar to each other, but they might have different scales. Again, in music, here's a motif that repeats, with various spans of time in between. So after this, we asked the question: can self-attention help us in modeling other objects, like images? The path we took was standard autoregressive, probabilistic image modeling, not GANs, partly because it was very easy: we almost had a language model already.
So this is just like language modeling on images. Also, training at maximum likelihood allows you to measure how well you're doing on your held-out set, and it gives you diversity, so you're hopefully covering all the different kinds of images. At this point we also had an advantage: there had been good work on prior autoregressive models like PixelRNN and PixelCNN, which were getting some very good compression rates. Originally, the argument was that in images you want symmetry: if you have a face, you want one ear to match the other. If you had a large receptive field, which you could potentially get with attention at a lower computational cost, that should be quite beneficial for images: you wouldn't need many layers, as you do with convolutions, to get dependencies between faraway pixels. So self-attention seemed like it would be a good computational mechanism, right? But it was also interesting to see how naturally it modeled self-similarity, and people have used self-similarity in image generation before. There's this really cool work by Efros where they ask: in the training set, which patches are really similar to me? And based on the patches that are similar, they fill in the information; so it's actually doing image generation. There's also the really classic work called non-local means, for image denoising: to denoise a patch P, you compute some function of content-based similarity against all other patches in the image, and based on that similarity you pull in information, exploiting the fact that images are very self-similar. This has also been applied in some recent work. Now, if you took this encoder self-attention mechanism and just replaced the word embeddings with patches, that's pretty much exactly what it's doing: it computes a notion of content-based similarity between elements, and based on that similarity it constructs a convex combination that brings them together. So it was very pleasant to see that this is a differentiable way of doing non-local means. And we took the transformer architecture and replaced words with pixels. There were some architectural adjustments to make, but it was very similar to the original work; the position representations, instead of being one-dimensional, were two-dimensional, because we're not dealing with sequences. Okay. As I pointed out before, attention has a very favorable computational profile if your dimension dominates your length, which is absolutely untrue for images.
It's untrue because even for 32-by-32 images, when you flatten them you get 3,072 positions; that's your standard CIFAR image. So, a simple solution: convolutions look at local windows and give you translational equivariance, so we said, okay, let's adopt the same strategy, and there's a lot of spatial locality in images anyway. But we still have a better computational profile: if your receptive field is smaller than your dimension, you can afford to do much more long-distance computation than a standard convolution, because you're quadratic in length. So as long as we didn't increase our length beyond the dimension, we still had a favorable computational profile. The way we did it was with essentially two kinds of rasterization. We had a one-dimensional rasterization, where a single query block attends to a larger memory block in rasterized fashion along the rows. Then we tried another form of rasterization, following standard two-dimensional locality, where we produced the image in blocks, with a rasterization scheme within each block. Again, the image transformer layer was very similar: two-dimensional position representations along with a very similar attention mechanism. We tried both super-resolution and unconditional and conditional image generation. This is work with Niki Parmar, me, and a few other authors from Brain, and we presented it at ICML. We were able to achieve better perplexity than existing models, although PixelSNAIL, another model that mixed both convolutions and self-attention, outperformed us on bits per dimension. We measure perplexity because these are probabilistic models; it's basically a language model of images, and the factorization of the model just depends on how you rasterize. In the one-D rasterization we went first rows and then columns; in the two-D rasterization we went blockwise, and inside each block we rasterized. On ImageNet we achieved better perplexities. So, were we at GAN level? I think probabilistic autoregressive image generation had not reached GANs by this point; at ICLR 2019 there's a paper by Nal that uses self-attention and gets very, very good quality images. But what we observed was that we were getting structured objects fairly well. Can people recognize what the second row is? Cars. [OVERLAPPING] Almost everyone said cars. I'm not going to ask who said something else, but yes, they're cars. And the last row is other vehicles. Essentially, structured objects were easy to capture, while frogs, and objects that were camouflaged, just turned into mush.
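An aside on the bits-per-dimension metric mentioned above: it is the model's total negative log-likelihood of an image, converted to base 2 and divided by the number of subpixels. A small sketch, with an illustrative NLL value:

```python
import numpy as np

def bits_per_dim(nll_nats, height, width, channels=3):
    dims = height * width * channels        # e.g. 32 * 32 * 3 = 3072 for a CIFAR image
    return nll_nats / (np.log(2) * dims)    # convert nats to bits, normalize per subpixel

print(bits_per_dim(nll_nats=6600.0, height=32, width=32))  # ~3.1 bits/dim (illustrative)
```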
Now, super-resolution is interesting, because there's a lot of conditioning information. When you have a lot of conditioning information, you actually lock down quite a few of the modes; there are only a few options you can have at the output. And our super-resolution results were much better: we were able to get better facial orientation and structure than previous work. These are samples at different temperatures, and when we quantified this with actual human evaluators, flashing an image and asking, is this real or is this fake, we were able to fool humans about four times better than previous super-resolution results. Again, I guess the latest GAN result from Nvidia makes these look like a joke, but we started later than GANs, so hopefully we'll catch up. The point here is that this is an interesting, very natural inductive bias for images, and there is hope to apply it to classification and other such tasks too. One more interesting thing, partly out of curiosity and partly to ask how well maximum likelihood does: one, does the model actually capture some interesting structure about the world? And second, do you get diversity? Maximum likelihood should give diversity, by virtue of what it does. So we did image completion. Why image completion? Because as soon as you lock down half the image to the ground truth, you shave off a lot of the possible modes, so you have a much easier time sampling. The first column is what we supply to the model; the rightmost column is gold, and we were able to generate different samples. What was really interesting is the third row, this horse: there's a sort of glimpse or suggestion of a pole, but the model hallucinated a human in some of these images, which is interesting; the data does teach it to capture at least some structure about the world. The dog is just cute, and I guess it also shows that there was this entire object, the chair, that the model completely refused to imagine. So there's still a lot of difficulty. And I guess Anna is going to talk about [NOISE] another way to exploit self-similarity. Thank you. [APPLAUSE] So thank you, Ashish, for the introduction. There's a lot of self-similarity in images; there's also a lot of self-similarity in music, so we can imagine the transformer being a good model for it. We're going to show how we can add more to the self-attention, to think more about relational information, and how that can help music generation. [NOISE] So first I want to clarify what raw representation we're working with right now. Analogous to language: there's text, and somebody reads the text aloud, adding their own intonations, and then you have sound waves coming out of that speech. For music there's a very similar line of generation: the composer has an idea, writes down the score, then a performer performs it, and then you get sound.
What we're going to focus on today is mostly, you can think of it as the score, but it's actually a performance: a symbolic representation where MIDI pianos were used, and professional and amateur musicians performed on them, so we have the recorded information of their playing. In particular, at each time step, modeling music as a sequential process, what is output is: turn this note on, advance the clock by this much, then turn this note off. There's also dynamics information: when you turn a note on, you first say how loud it's going to be. Traditionally, modeling music as a kind of language, people have used recurrent neural networks. And, as Ashish introduced and talked about, there's a lot of compression that has to happen: a long sequence has to be embedded into a fixed-length vector, and that becomes hard when, in music, you have repetition coming at a distance. So I'm first going to play you samples from an RNN, from a transformer, and then from a Music Transformer that has the relative attention, let you hear the differences, and then go into the modifications we needed to make on top of the transformer model. This task is like the image completion task: we give the model an initial motif and ask it to do continuations. This is the motif we fed it. [MUSIC] How many people recognize that? Awesome. Okay. [LAUGHTER] Yeah, this is a fragment from a Chopin etude. And we're going to ask the RNN to do a continuation. [NOISE] [MUSIC] So here, in the beginning it was trying to repeat the motif, but very quickly it wandered off into other, different ideas. That's one challenge: it's not able to directly look back at what happened in the past; it can only look at a kind of blurry version, and that blurry version becomes more and more blurry. Here's what the transformer does. One detail: these models are trained on half the length that you're hearing, so we're asking the model to generalize beyond the length it was trained on. You can see that this transformer deteriorates beyond that point, but it can hold the motif pretty consistently. [MUSIC] Okay, you get the idea. [LAUGHTER] So initially it was able to do this repetition really well; it was able to copy very well. But beyond the length it was trained on, it didn't quite know how to cope with longer contexts. The last one you'll see is from the Music Transformer, which has that relational information, and you can see visually how consistent it is, repeating these larger arcs. [MUSIC] Yeah, so that was the Music Transformer. And in music, the self-similarity we talked about: we see the motif here. There we primed the model with a motif, but this next one is actually an unconditioned sample from the model: there was no priming; the model had to create its own motif and then do continuations from there. And if we look at it and analyze it a bit, you see a lot of repetition, with gaps in between.
And if you look at the self-attention structure, we actually do see the model looking at the relevant parts, even when they don't immediately precede it. Here, what I've shaded out is where the motif occurs, and you can see the different colors, the different attention heads, focusing on those grayed-out sections. [NOISE] I'll play the sample, and we also have a visualization that shows you, as the music is being played, which notes it was attending to as it predicted each note. This was generated from scratch. The self-attention is at the note-to-note, or event-to-event, level, so it's quite low-level; when you look at it, it's a little overwhelming, with multiple heads and a lot of things moving, but there are these structural moments where you see cleaner sections of where it's attending. [MUSIC] Okay. So how did we do that? Starting from the regular attention mechanism: we know it's a weighted average of the past history, and the nice thing is that however far back something is, we have direct access to it. So if there are motifs that occurred early in the piece, we're still able to retrieve them based on the fact that they're similar. But the past also becomes a kind of bag of words: there's no structure of what came before or after. Hence the positional sinusoids that Ashish talked about, which basically index into sinusoids moving at different speeds, so nearby positions have a very similar cross-section through those multiple sinusoids. In contrast, for convolutions, you have this fixed filter moving around that captures relative distance: one before, two before. These are, in some ways, a rigid structure that lets you bring in the distance information very explicitly. You can imagine relative attention, with the multiple heads at play, as some combination of the two: on one hand, you can access the history very directly; on the other hand, you also know how you relate to that history, capturing, for example, translational invariance. And we think one of the reasons the Music Transformer, in the primed samples you heard at the beginning, was able to generate beyond the length it was trained on in a very coherent way is that it can rely on this translational invariance to carry the relational information forward. If we take a closer look at how this works: in the regular transformer, you compare all the queries and keys, so you get this square, L-by-L matrix; you can think of it as a self-similarity matrix. What relative attention does is add an additional term that thinks about, whenever you're comparing two things, how far apart you are, and, based on the content, whether I care about things two steps away or three steps away, or maybe about things recurring at a periodic distance.
With that information gathered, it influences the similarity between positions. In particular, this extra term is based on the distance: you gather the embeddings relevant to the query-key distances and add them to the logits. [NOISE] In translation, this has shown a lot of improvement, for example on English-to-German translation. But in translation the sequences are usually quite short, sentence to sentence, maybe 50 or 100 words, whereas the music samples you've heard are in the range of 2,000 time steps; that's about 2,000 tokens that need to fit in memory. This was a problem, because the original formulation relied on building a 3D tensor that's very large in memory. Why is that? Because for every pair of positions, you compute the relative distance and then look up an embedding corresponding to that distance. So for a length-by-length, L-by-L, matrix, you need to collect an embedding of depth D for each pair, and that gives you the 3D tensor. What we realized is that you can directly multiply the queries with the distance embeddings. [NOISE] The result comes out in a different order, because now you have the queries ordered by relative distance, but you need them ordered by keys, an absolute-by-absolute configuration. So what we can do is a series of skewing operations to put it into the right configuration; there's a sketch of this below. And this is just a quick contrast to show the difference in memory requirements. A lot of the time, the challenge is in being able to scale, being more memory-efficient, so that [NOISE] you can model longer sequences. With that, I can play you one more example if we have time. But if we don't have time, we can... go ahead? Okay. [LAUGHTER] So this is about a one-minute sample, and I hope you like it. Thanks. [MUSIC] Thank you for listening. [APPLAUSE] [LAUGHTER] Thanks, Anna. Great. So, relative attention has been a very powerful mechanism for music, and it has also helped in machine translation.
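Before moving on, here's a hedged sketch of the memory-saving "skewing" trick Anna described, as I understand it from the Music Transformer paper: multiply the queries directly by the relative-distance embeddings, then pad, reshape, and slice to realign the result from relative to absolute coordinates. E is assumed to hold one embedding per relative distance, ordered from -(L-1) up to 0; entries above the diagonal are meaningless but get masked by the causal attention anyway.

```python
import numpy as np

def relative_logits(Q, E):
    # Q: (L, d) queries; E: (L, d) relative-distance embeddings, distances -(L-1)..0
    rel = Q @ E.T                            # (L, L): every query vs every relative distance
    L = rel.shape[0]
    padded = np.pad(rel, ((0, 0), (1, 0)))   # dummy zero column on the left -> (L, L+1)
    skewed = padded.reshape(L + 1, L)        # reshaping shifts each row over by one
    return skewed[1:, :]                     # (L, L): logits now indexed (query, key)

# These relative logits are added to Q @ K.T before the softmax,
# avoiding the (L, L, d) tensor of the original formulation.
```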
One really interesting consequence of relative attention in images is the following. Convolutions achieve translational equivariance: if there's a feature you're computing at this red dot, it doesn't depend on where the image of the dog sits within the larger image; it doesn't depend on its absolute location, and it produces the same activation. So convolutions have this nice translational equivariance. Now, with relative positions, with relative attention, you get the same effect, because once you remove the notion of absolute position that you were injecting into the model, [NOISE] your attention computation depends only on content and relative distances. Niki, I, a couple of others, and Anna have actually been working on images, and it seems to show better results. So relative attention can achieve translation equivariance, which is a great property for images, and this might be an interesting direction to pursue if you want to push self-attention in images for self-supervised learning. On self-supervised learning: the generative modeling work I talked about before, just having probabilistic models of images, is useful in itself. I mean, I guess the best model of an image is that I go to Google search, pick up an image, and just give it to you; but generative models of images are useful if you want to do something like self-supervised learning, where you pre-train a model on a lot of unlabeled data and then transfer it. So hopefully this is going to help and be part of that machinery. Another interesting structure that relative attention allows you to model is a graph. Imagine you have a similarity graph where the red edges carry the notion of "company" and the blue edge the notion of "fruit", and "apple" takes both forms. You could imagine relative attention modeling this: you yourself being able to impose these different notions of similarity between different elements. So if you have graph problems, relative self-attention might be a good fit for you. There's also quite a nice position paper by Battaglia et al. from DeepMind that talks about relative attention and how it can be used within graphs. While we're on graphs, it might be interesting to connect some excellent work on graphs called message passing neural networks. It's quite funny: if you look at the message passing function, it says you're passing messages between pairs of nodes. You can think of self-attention as imposing a complete bipartite graph, where you're passing messages between nodes. Message passing neural networks did exactly that: they were passing messages between nodes as well. How are they different? Well, mathematically, they were only different in that message passing forces the messages to be between pairs of nodes, whereas, just because of the softmax function, where you get interaction among all the nodes, self-attention is like a message passing mechanism in which the interactions are between all nodes.
So they're not too far apart mathematically, and the message passing paper also introduces an interesting concept called multiple towers, similar to the multi-head attention that Noam invented: you run k copies of these message passing neural networks in parallel. So there's a lot of similarity; this connects to work that existed before, though the connections came in later. We have a graph library where we connected both of these strands, message passing and self-attention, and we put it out in tensor2tensor. So, to summarize, the properties that self-attention has helped us model: there's the constant path length between any two positions, which has been shown to be quite useful in sequence modeling. There's the advantage of unbounded memory, not having to pack information into a fixed amount of space, since in our case the memory essentially grows with the sequence. It's trivial to parallelize, so you can crunch a lot of data, which is useful if you have large datasets. We found that it can model self-similarity, which seems to be a very natural phenomenon if you're dealing with images or music. And relative attention gives you this added dimension of being able to model expressive timing in music, plus translational equivariance, and it extends naturally to graphs. Now, everything I've talked about so far was about parallel training. There's a very active area of research now using self-attention models for less autoregressive generation. Notice that at generation time the decoder mask was causal: we couldn't look into the future, so when we're generating, we still generate sequentially, left to right, on the target side. And why is generation hard? Because your outputs are multi-modal. If you want to translate English to German, there are multiple ways, and the second word you translate will depend on the first. For example, if the first word you predicted was "danke", that's going to change the second word you predict; if you predicted them independently, you can imagine getting all sorts of permutations of these, which would be incorrect. The way we actually break modes, the way we make decisions, is sequential generation: once we commit to a word, that makes a decision, and that nails down the next word you're going to predict. So there's been some work in this active research area, and you can categorize some of these papers: the non-autoregressive transformer, the third paper on fast decoding, and the fourth paper, towards a better understanding of vector-quantized autoencoders, form a group where the decision making happens in a latent space that's learned using word alignments, fertilities, or autoencoders.
So you do the decision making in latent space, and once you've made those decisions, you assume all your outputs are conditionally independent given them; that's how they speed up. There's another paper, the second one, that does iterative refinement. There's also a blockwise parallel decoding paper by Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit, where they essentially run multiple models: decode with a faster model and score with the more expensive model; that's how it speeds things up. [NOISE] Self-attention has also been beneficial in transfer learning; GPT from OpenAI and BERT are two classic examples. There's been some work on scaling this up, like Adafactor, an efficient optimizer; there's also a recent paper by Rohan Anil and Yoram Singer. And there's Mesh-TensorFlow, with which people have been able to train models several orders of magnitude larger than the original models. When you're working in this large-data regime, you probably want to memorize a lot of things inside your parameters and train a larger model; Mesh-TensorFlow can let you do that. There's been a lot of other interesting work. Universal Transformers: recurrent neural networks can actually count very nicely; there are these cute papers by Schmidhuber showing that the cell mechanism just learns a nice counter, so you can learn something like a^n b^n with an LSTM. Universal Transformers bring back recurrence in depth inside the transformer. There's a really cool Wikipedia paper, simultaneous with the image transformer paper, that also uses local attention. The Transformer-XL paper combines recurrence with self-attention: they do self-attention in chunks, but they summarize history using recurrence, which is kind of cute. Self-attention has been used in speech, but I don't know of any fairly big success stories there yet; again, you hit similar issues, where you have very many positions to do self-attention over. So yeah: self-supervision, if it works, would be very beneficial; we wouldn't need large labeled datasets. Transfer is becoming a reality in NLP with BERT and some of these other models, so understanding what's actually happening there is an interesting area of ongoing research for me and a few of my collaborators. And multitask learning, and surmounting this quadratic problem with self-attention, are interesting areas of research that I'd like to pursue. Thank you.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_11_Convolutional_Networks_for_NLP.txt
The plan for today is that I'm going to talk about the topic of convolutional neural networks. There's actually quite a lot of content in this lecture, different things that are good to know about, since essentially this is learning about convolutional neural networks for NLP in one large bite. So: a bit on announcements, then the general idea of convolutional neural networks, and then for quite a bit of it I want to go through, in some detail, two particular papers that made use of convolutional neural networks for text classification, for sentence classification tasks. The first is a pretty simple CNN from 2014, and the second is a way more complex CNN done much more recently, in 2017. Okay. But first, a couple of announcements. Firstly, the last reminder on the mid-quarter feedback survey. Tons of you have done this already; thank you very much. But if you've been putting it off until the very last minute, tonight at midnight is your last chance to fill in the mid-quarter survey, to give us feedback and to get your half a point. Okay. And then the other thing you should be thinking about, and I know lots of you are thinking about, since I spent three hours talking to people yesterday, is final projects. Make sure you've got some plans for that in place by 4:30 p.m. Thursday. In particular, as we've discussed, part of what you're meant to do is to have found some research paper, have read it, and have a summary and thoughts as to how it can inform your work. And make sure you have in your calendars the final project poster session for CS224n, which is going to be in the evening of Wednesday, March 20th; we're holding it at the Alumni Center. Okay. One more announcement, or just general stuff to cogitate on: we're now officially in the second half of the class. Congratulations. There are still a few things we want to teach you that are basic, and convolutional neural networks is actually one of them. But nevertheless, in the second half of the class, things start to change, and we're hoping to prepare you much more for being real deep learning NLP researchers or practitioners. What does that mean concretely? Well, the lectures start to be less about giving every detail of how to build a very basic thing, and more about giving you some ideas from the work that's been done in different areas, to the extent that something is of interest or relevant to a project or the like. The hope is that you can take some initiative to find out more about the things being talked about, and I'd also really welcome any questions about things people want to know more about. The other thing you should know about deep learning is that once we get past the fundamentals, a lot of the stuff we teach just isn't really settled science or things that people are sure of. Most of what I'm teaching in the second half of the course is pretty much what people think is good practice in 2019, but the fact of the matter is that what people think is good practice in deep learning has been changing really rapidly. So if you go back even two years, or definitely if you go back four years:
there are just a lot of different things that people used to believe, and now people have different ideas as to what works best. And it's perfectly clear that come 2021 or 2023, there will again be some different ideas as to what people think is best. So you just have to accept that this is a nascent, rapidly emerging field; it's good to understand the fundamentals and how things fit together, but beyond that, quite a bit of the knowledge is "this is what people think is good at the moment", and it keeps evolving over time. If you want to stay in the field, or keep doing things with deep learning, you have to keep up with how it changes. It's called lifelong learning these days; it's a very trendy concept. As well as the lectures, this is also true for the assignments. We've been trying to make the assignments start off very introductory and gradually use less scaffolding, and we're going to continue that, with less hand-holding in assignment five. What we're hoping to do is prepare you both for the final project and for real life. I was making an analogy this morning comparing this to the intro CS sequence: CS106A and B have tons of scaffolding, and then in CS107 you're meant to learn how to diagnose and solve problems for yourself in a debugger. It's kind of the same for neural networks: in the early assignments, we've given you every bit of hand-holding, all these tests to make sure every little bit is okay, and here's exactly how to structure things. But in the real world, you're only going to be able to build and use neural networks if you can figure out why they're not working and what you have to change to make them work. And the truth is, as I talked a bit about last week, that's often well more than half of the job. It seems easy enough to stick down, here's my neural net and the pieces that make sense to me, and then you can spend the remaining 80 percent of the time scratching your head wondering why it doesn't actually work well, and how you could change it to make it work well. So, I confess that debugging neural nets can often be hard, but the goal is that you should actually learn something about doing it; that's one of the learning goals of the course, when it comes down to it. A final little advertisement: if you feel like you'd like to read a book, just out this week there's a new book on natural language processing with PyTorch, by Delip Rao and Brian McMahan. Delip actually lives in San Francisco. So if you want to, you can buy a copy, of course. But if you don't want to buy it and you feel like having a bit of a look through it, the Stanford library actually has a license to the O'Reilly Safari Books collection, so you can start at library.stanford.edu and read it for free. There's one catch: the library only has 16 simultaneous licenses to Safari Books, so if you'd like your classmates to also be able to read it for free, it really helps if you remember to log out of Safari Books Online when you're done looking at it. Yes, in some sense I hope that if you look at this book you'll feel, "Boy, I already know most of this stuff already." It's not a super advanced book.
But it's a good, well-written tutorial of how to do things with PyTorch and NLP." If you don't feel like you know most of the stuff in this book, you can let me know, but I will be a little sad. Okay, so, starting into today. We spent a lot of time on recurrent neural networks, and they are great for many things. But there are some things that they're not so good at. So, you know, we might like to know about a phrase like "my birth", or a bigger phrase like "of my birth", and there's sort of no independent representation of those spans in a recurrent neural network — we kind of just get prefixes of a whole sentence. And while we did do bidirectional recurrent neural networks, and you could say, "Well, wait a minute, you could use it in both directions", to some extent that's true: we can get stuff from this direction and stuff from that direction, but we still have whole sequences that go to one end of the sentence or the other. We don't just have pieces of sentences. And often, we'd like to work out meanings of pieces of sentences. So we sort of have two problems here. We only have initial and final sub-sequences. And also, if you look at these representations — say you take this last state as the representation of the meaning of the text — what you find out is that it's very dominated by the meaning of the most recent words and what they are trying to predict as to what comes after them. And that's part of the reason why I mentioned last time, in the question answering lecture, the idea that you can do better by having a sentinel and training something that has attention over the whole LSTM structure. Okay. But today we're going to look at a different alternative, which is convolutional neural nets, often abbreviated as either CNNs or ConvNets. And the idea of these is: well, look, maybe we could just take every sub-sequence of a certain length and calculate a representation for it. So if we have some piece of text like "tentative deal reached to keep government open", we could just say, let's take all three-word sequences — "tentative deal reached", "deal reached to", "reached to keep", et cetera — and we're going to calculate some kind of representation for each of those sequences. So this isn't a strongly linguistic idea, right? We're not worrying about whether it's a coherent phrase that's grammatical, linguistically valid, cognitively plausible; we're just taking every sub-sequence of a certain length. And then, once we've calculated representations of those, we're going to look at how to group them. Okay. So let's get into more detail as to what CNNs are and how they work. There's this general idea of a convolution, which you may or may not have seen in some math or electrical engineering class. And then there's the particular version, the discrete convolution, which means that you can use the friendly summation symbol rather than an integral. I find that notation completely unhelpful, so I won't even try and explain it. But I've got lots of examples, and what convolutions do in neural nets is really easy to see from examples. All right, so the classic case of where convolutional neural networks are used is in vision applications.
So if you do CS231N next quarter, essentially, you know, the first four weeks is just all doing convolutional neural networks in all their variants and glory. And the essential idea of convolutions for vision is that you want to recognize things no matter where they appear in an image. So you have a property of translation invariance, and a convolution is a way of finding something in different places in the image, regardless of where it appears. So this is the vision example, which I stole from Andrew Ng's UFLDL website. And what a convolution is, is — here it's a patch, but you can think of it just as a vector — the patch has weights, which are these little numbers in red, and what you're gonna do is slide that patch over the image, as this animation does. And at each position, you're going to multiply each of the red numbers by the black number in that position, and then you're just going to sum them up. So that's what a discrete convolution does, which is what that notation at the top is saying, right? You're multiplying things together and then you're summing them up. And so you're doing this, and then you're filling in the pink with the products — the sum-products. So it's sort of like you're taking these patch dot products and putting them into the pink matrix, and that's then your convolved feature. So that's a 2D convolution, which for the rest of today we're not going to look at anymore — so this is all you're learning about vision. We're now going to go back and look at 1D convolutions, which is what people use when they're using convolutional neural networks for text. So the starting point of a convolutional neural network for text is that we have an input. Here's my sentence, and for each word in the sentence I've got a dense word vector. I made it 4D to keep it small in my example, but usually, as you know, it's more. So our starting point is that we have some input — the input could just be a one-hot encoding, that's not forbidden here, but normally we'll have these kinds of dense word vectors. And then it's sort of the same as the 2D one, apart from we've only got one dimension. So we have a filter. Here is our filter, and our filter is gonna be three steps in time — three words. And it's going to work across the dimensions. These different dimensions in the convolutional neural network often get referred to as channels. So we're working across the input channels, and we have a patch like this. And we're going to take this patch and put it on top of the first three words — I don't have as good an animation as the previous slide, sorry — and we're going to work out the dot product between those, and I did that at home by putting this into Excel. And the answer [LAUGHTER] is that the product is minus 1.0. And then at that point, we slide this matrix — which gets referred to as a kernel or a filter, the patch that we're using for our convolutional neural network — down one, and do the dot product of those terms again. And that comes out as minus a half, and we keep on sliding it down, and we get what's shown on the right as our output. So at this point, we've reduced the sentence to a single vector. And it seems like we might want to do more than that.
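To make the sliding dot product concrete, here's a minimal sketch in plain NumPy. The numbers are random rather than the ones on the slide, but the mechanics are the same: a kernel-size-3 filter over a 7-word sentence with 4-dimensional word vectors gives 5 output positions.

```python
# A sketch of the 1D convolution just described: slide a
# (kernel_size x n_channels) filter down a (seq_len x n_channels) input,
# taking an elementwise product and summing at each position.
import numpy as np

X = np.random.randn(7, 4)   # 7 words, 4-dimensional word vectors
W = np.random.randn(3, 4)   # one filter: kernel size 3, 4 input channels

out = np.array([np.sum(X[i:i + 3] * W)   # "patch dot product"
                for i in range(X.shape[0] - 3 + 1)])
print(out.shape)            # (5,) -- the 7-word sentence shrinks to 5 positions
```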
But the other thing that you will have noticed is that our sentence has sort of shrunk, because before we had a seven-word sentence, but because I've just slid this three-word kernel down, I ended up with only five positions to put it in. So it's become a five-word thing. So, to first of all address that problem, commonly when people do convolutional neural networks, they add padding. What I can do is add zero padding at both ends, then do the same trick and run a convolution on that. And now I'll be able to put my size-three filter in seven different places as I slide it down, so I'm getting out a vector that's the same length as my input. There are different ways, but this is the most common way of doing things, and it kind of seems logical because it maintains size. I mean, you know, there's always more than one way to do it. If you really wanted to — oops, I made a slight mistake on my slide, which I was about to get to in a minute, but I'll just explain this bit here anyway [LAUGHTER] — if you wanted to, you could have two steps of padding on both ends here, so that your first convolution would be looking at zero, zero, tentative, and then the convolution would actually grow the size of your input. Yeah. But, yes. So, what we've done so far: we started with these word vectors which, in convolutional neural network terms, were of length four. So our input had four channels. But when we were back here, we were producing just one column of output from this kernel. So our output has only a single channel. We've sort of shrunk things in the columns direction from four to one. And that might seem bad, and for many purposes it is bad. And so, a lot of the time, what you want to do is say: rather than have only one filter, why don't I have several filters? So here I've got three different filters, and each of these filters is the same size — three, the kernel size, times four, the number of input channels, for the matrix. So I have three different filters, and I'm going to run each one down the text and get a column. So now I'm ending up with three columns of output, and I have a three-channel output. And the way to intuitively think of this is: for these filters — well, as with everything in neural networks, we're going to learn them by backpropagation like everything else — our hope is that these filters could somehow specialize in different things. So maybe this filter could specialize on "is this language polite?" and it will produce a high value whenever it sees polite words. And maybe this filter could specialize on, I don't know, eating, and it will have a high value whenever it sees words about food, and this filter will do a third thing. And so that's the sense in which people sometimes talk about what you're getting as the output of different features, because your hope is that you'll gain different latent features coming out of the text. Okay. So that gives us a representation, and that's useful — we've found learned features in our text. Quite often, though, what we'll want to do is just summarize the text with respect to those features.
So you might just have the question of: well, in this piece of text, is it polite, and does it talk about food? So another operation that we'll quite often want to do is summarize the output of a convolutional network. And the simplest way to do that for 1D convolutions is called max pooling over time. If we max pool over time, then for each of the channels — otherwise known as features — we're simply going to look down and see what its maximum value is: 0.3, 1.6, 1.4. And so, if I use my story about the first two filters, it's saying: well, it's not very polite text, but it's really about food, right? We're summarizing what we've detected there. So the concept of max pooling in some sense captures: is this thing activated anywhere? If we have things like politeness and about-food, the output of max pooling will have a high value if somewhere in the sentence there was a clear marker of politeness or something clearly about food. And that's often a useful notion, because often what you want to know is: is there some discussion of food in this sentence, or is there not? There are other things that you could do. Instead of max pooling, you can do average pooling. Here you just take these numbers and find the average of them. That then has a different semantics, which is sort of: what's the average amount of politeness of this text, or, on average, what percent of the sentence is about food, or something like that. For some purposes this is better, because it takes in all of the input and builds it into an average. But a lot of the time, people have found that max pooling is actually better, because a lot of signals in natural language are sparse. You know, no matter how polite you are trying to be, you're not going to be polite in every word. You're going to say nouns, and articles like "the" and "a", and prepositions and conjunctions, none of which are inherently polite, right? So if there's some politeness showing up prominently, then the sentence becomes polite, and max pooling is actually better for capturing that. Of course, the one other kind of thing that you can do is min pooling, and find the least [LAUGHTER] active thing. It doesn't get used much, but you could do that as well. Okay. So if you're in PyTorch, this is all pretty easy stuff to do. There's a handy-dandy Conv1d — there's also a Conv2d, as you might guess, for vision — but there's a Conv1d where you specify how many input channels there are (that was our word embedding size), how many output channels there are (we had three), and what the size of the convolutional kernel is (the ones we were showing were also three), and then there are various other parameters you can have, like you can say that you want a padding of one, and things like that. And then once you've got one of those, you can just run your convolutional filter on the input to get a new hidden state. And then if you wanna max pool, you can just take the max over the output of that, and then you've got a max-pooled output.
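Here's a sketch of that in PyTorch. The dimensions follow the running example — 4 input channels, 3 filters, kernel size 3, padding 1 — and the one thing to remember is that Conv1d wants its input shaped as (batch, channels, sequence length).

```python
# A sketch of the Conv1d usage just described, plus max pooling over time.
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=4, out_channels=3, kernel_size=3, padding=1)

x = torch.randn(1, 4, 7)           # one sentence: 4 channels, 7 words
h = conv(x)                        # -> (1, 3, 7): 3 features per position
pooled = torch.max(h, dim=2)[0]    # max over time -> (1, 3)
print(pooled.shape)
```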
Okay. So that gives us the basics of building a convolutional neural network for NLP. Does that sort of make sense up until there? Yeah. Okay. So the next bit is to show you three or four other things that you can do. I started off typing these slides as "other, less useful notions", because I kinda thought, ah, these don't really come up much in NLP. But it actually turned out, when I got on to that second paper — the complex convolutional neural network — that in that paper they try out just about all of these things I say no one uses. So it's good to know what they are for looking at various papers. So far, when we were calculating these convolutions, we were trying them out at every position. We had one for "zero, tentative, deal", then for "tentative deal reached", then "deal reached to". We were just walking down one step at a time, which is referred to as a stride of one. And that's by far the most common thing to do. But you could observe: look, wait a minute, since the first convolution concerns "zero, tentative, deal", I've got all those three words in there. Even if I skip down and next do "deal reached to", and then "to keep government", I'd still have every word of the sentence in one or other of the convolutions. So I can do half as much computation, and I've still got everything in there, in some sense. And that's referred to as using a stride of two. So then I get something out with half as many rows. It's one way to compactify your representation and produce something shorter from a longer sentence, and we'll see a use of it coming up later. There are other ways to compactify the representation that comes out of your sentence. So there's a different notion of pooling, which is local pooling. Now, if you've seen any of the vision world: when people talk about max pooling in vision, they normally mean local pooling, as opposed to the max pooling through time that I showed you first. So here we're back to where we started, and we've done our size-three, stride-one convolution, which is producing output as before. But now, what I'm gonna do is local pool with a stride of two, which means I'm gonna take each two rows and pool them together into one row, and I could do that again by either maxing or averaging or whatever appeals to me. So I take the first two rows, I max pool them, I get this. I take the next two rows, I max pool them, I get this. Next two, next two, and I sort of pad it on the bottom so I have two rows at the bottom. And that then gives me a local max pooling with a stride of two. And that has sort of the same effect — though with a different result — as using a stride of two in my convolution, because I have again reduced something that used to be eight rows to four rows. Okay, so that's that one. What else can you do? There are more things you can do to make it complex. Another thing that people have sometimes done is k-max pooling. This is a more complex thing, and it's sort of saying: well, rather than just keeping the max over time, if a feature is being activated two or three times in the sentence, maybe it'd be good to record all the times that it's activated in the sentence while throwing away the rest. So in k-max pooling — and I'm doing 2-max here — you look down this column and you find the two highest values for that column. But then you put the two highest values not in the order of highest to lowest, but in the order in which they occur in the column. So it's minus 0.2, 0.3 for this one, and it's 1.6, 0.6 for this one, because it reflects the order in the columns up above. Okay.
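k-max pooling isn't a built-in PyTorch operation, but it's only a few lines. Here's a sketch, assuming the (batch, channels, time) layout from above: take the k largest values per channel, then re-sort them into their original time order, as just described.

```python
import torch

def kmax_pooling(x: torch.Tensor, k: int) -> torch.Tensor:
    """k-max pooling over the time dimension of a (batch, channels, time) tensor."""
    idx = x.topk(k, dim=2).indices   # positions of the k largest values
    idx = idx.sort(dim=2).values     # keep them in sentence order
    return x.gather(2, idx)

x = torch.randn(1, 3, 7)
print(kmax_pooling(x, 2).shape)      # (1, 3, 2)
```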
Almost done — one more concept. This is another way of compressing data, which is a dilated convolution. Doing a dilated convolution over here doesn't really make sense, but where you can use one is if I take this and put it through another convolutional layer: we can have deep convolutional networks that have multiple convolutional layers. So the idea of a dilated convolution is that you're gonna skip some of the rows. If you use a dilation of two, starting at the top, you're going to take the first, third, and fifth rows and multiply them by my fil- — sorry, I have different filters — multiply them by my filters, and then get the values that appear here. And then, if the stride is one, you'd go on and do the next spread-out rows. And so this allows you to have convolutions that see a bigger spread of the sentence without having many parameters. You don't have to do things this way. You could have said: look, I could instead just have convolutions with a kernel size of five, and then they'd see five words in a row — but then I'd be having bigger matrices to specify my filters. Whereas this way, I can keep the matrices small but still see a bigger range of the sentence in one operation. Yeah, and that concept of how much of a sentence you see is an important notion in convolutional neural networks. Because if you start at the beginning of a sentence and you're just running size-three convolutions, you're seeing these three-word patches of the sentence. And it turns out that in natural language that's actually already quite a useful representation, because having those kinds of n-grams as features is just good for many purposes, including text classification. But if you want to understand more of the semantics of a sentence, somehow you wanna see more of it at once. And you've got several tools you can use to see more of it at once. You can use bigger filters — a kernel size five, seven, nine, or something, convolution. You can do something like dilated convolutions, so you see spread-out pieces of the sentence. And the third thing that you can do is have depth in your convolutional neural network, because as you have greater depth, you see more. So at this first layer, the rows have info about three words in them. And if you stuck a second layer of convolutional neural network with the same general nature on top of it, and you take the first three rows and convolve them again, then the next ones know about five words of your original input sentence. So as you have a deeper ConvNet stack, you start to know about bigger and bigger patches of the sentence.
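In PyTorch, dilation is just another argument to Conv1d. A small sketch of the trade-off just described: a kernel of size 3 with dilation 2 looks at positions t, t+2, t+4, so it sees a five-word span of the sentence while carrying only the parameters of a size-three filter.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 4, 12)                              # 12 positions, 4 channels

small = nn.Conv1d(4, 8, kernel_size=3)                 # sees 3 words at a time
dilated = nn.Conv1d(4, 8, kernel_size=3, dilation=2)   # sees a 5-word span
wide = nn.Conv1d(4, 8, kernel_size=5)                  # also 5 words, more weights

print(small(x).shape, dilated(x).shape, wide(x).shape)
# torch.Size([1, 8, 10]) torch.Size([1, 8, 8]) torch.Size([1, 8, 8])
print(sum(p.numel() for p in dilated.parameters()))    # 3*4*8 + 8 = 104 parameters
print(sum(p.numel() for p in wide.parameters()))       # 5*4*8 + 8 = 168 parameters
```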
Okay. All good? Any questions? No, that's good, okay. So the next piece essentially shows you this stuff again in the context of a particular paper. This was a paper by Yoon Kim — who was a Harvard student, maybe still is a Harvard student — in 2014. So this was a fairly early paper, and he wanted to show that you could use convolutional neural networks to do a good job of text classification when what you want to classify is a single sentence. So the kind of thing you might want to do is look at the kind of snippets of movie reviews that you see on the Rotten Tomatoes site and say, "Is this a positive or is this a negative sentence description?" And the model he built is actually kind of similar to the convolutional neural networks that Collobert and Weston introduced in their 2011 paper, which we mentioned before when we were talking about window-based classifiers — in their paper, they actually used both window-based classifiers and a convolutional classifier. Okay. So, yeah, I sort of already said this: the tasks are sentence classification. It could be sentiment; it could be other things like, is this sentence subjective or objective? Objective is what the main news articles are meant to be, and subjective is what the opinion pieces are meant to be. And then other things like question classification: is this a question asking about a person, location, number, or whatever? Okay, so here is what he did. These slides use the notation of his paper, which is a little bit different in the way the math gets written down from what I just showed you, but it's really doing exactly the same thing. We start with word vectors of length k. The sentence is made by just concatenating all of those word vectors together, and then a range of words is a subpart of that sentence vector. And so the convolutional filter is just represented as a vector, because here he's flattened everything out into one long vector for the entire sentence, whereas I'd sort of stacked things into a matrix. So a size-h convolution is just a real vector of length hk — the size of the convolutional filter times the dimensionality of the words. And what he's gonna do to build his text classifier is use convolutions of different sizes: you can have size-two convolutions, size-three convolutions as shown here, and bigger convolutions. And so, to compute a feature — one channel — for our CNN, we do a dot product between the weight vector of the feature and this sub-sequence of the sentence (he also put in a bias, which I omitted), and then put it through a non-linearity (which I wasn't doing either, but which we've seen a ton of). And so that's our feature, and for a feature of kernel size three, we're gonna go all the way through the sentence. The other thing he did, though, that was slightly funny is that his windows were lopsided in the notation, right: there's a word and the h minus 1 words to the right of it. So he has padding just on the right end, whereas most people do their convolutions symmetrically in both directions around things. Okay. And so we're going to do that for a bunch of features, or channels, c_i, and thereby compute our convolved representations, just as we've talked about.
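Reconstructed from that description, one feature looks like this (with f the non-linearity, w the filter, h the window size, and k the word vector dimension):

```latex
c_i = f\left(\mathbf{w}^\top \mathbf{x}_{i:i+h-1} + b\right),
  \qquad \mathbf{w} \in \mathbb{R}^{hk}
\mathbf{c} = [c_1, c_2, \ldots, c_{n-h+1}],
  \qquad \hat{c} = \max\{\mathbf{c}\} \quad \text{(max-over-time pooling)}
```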
Okay. Then he does just what we talked about: there's max-over-time pooling in the pooling layer to capture the most relevant things, and it gives us a single number for each channel. And we have features that have different kernel sizes. Here's one other idea he used, which is possibly a neat idea — it's one of the things that you could even think about using in various ways, for, say, a question answering system, among other things. So he used pre-trained word vectors, but what he did was he actually kind of doubled the word vectors. For each word he had two copies of the word vector, so you have two channel sets: one set he froze, and the other one he fine-tuned as he trained. So he sort of tried to get the best of both worlds of fine-tuning and not fine-tuning, and all of that went into the max pooling operation. Okay. So, after the max pooling we get out one number for each channel, and he has three sizes of convolutions — three, four, five — with 100 features for each size. So we're getting out a vector of size 300 at that point, and then you're taking that final vector and just sticking it through a softmax, and that's giving your classification over the classes. All of that can be summarized in this picture, if it's big enough to read. So here's our sentence, "I like this movie very much", where our word embedding dimension is five. In this example, we have two channels for each kernel size, and we consider kernels of size two, three, and four — so this is showing six of our filters. When we apply those filters without any padding, we get out these outputs of the filters, which are of sizes four, five, and six respectively. And then, once we've got these, for each of these sets of numbers we do one max pooling. So we're just taking the max of each of these output features, which gives us these six numbers. We can concatenate them all together into one vector, which we feed into a softmax over two classes, as to whether sentiment is positive or negative. So that's basically the model. This is really actually a very simple, very computationally efficient model for how to build a text classifier.
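Putting the pieces together, here's a sketch of a Kim-style classifier in PyTorch. The dimensions follow the lecture (kernel sizes three, four, five; 100 feature maps each; dropout of a half), but the two-channel frozen/fine-tuned embedding trick is omitted to keep it short, so this is a simplified version rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class KimCNN(nn.Module):
    def __init__(self, embed_dim=300, num_classes=2,
                 kernel_sizes=(3, 4, 5), num_filters=100, dropout=0.5):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.dropout = nn.Dropout(dropout)
        # 3 kernel sizes x 100 filters = a 300-dim vector into the softmax
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, x):                 # x: (batch, embed_dim, seq_len)
        feats = [torch.relu(conv(x)).max(dim=2)[0]   # max over time
                 for conv in self.convs]
        h = self.dropout(torch.cat(feats, dim=1))    # (batch, 300)
        return self.fc(h)                 # logits; the softmax lives in the loss

model = KimCNN()
logits = model(torch.randn(8, 300, 40))   # 8 sentences of 40 words
print(logits.shape)                       # (8, 2)
```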
Yeah, just a couple more things to get through. So, in one of the assignments, we talked about Dropout, and you used it — so hopefully you're all masters of Dropout at this point. He was using Dropout, and, this being 2014, with the Dropout paper only coming out in 2014 — I guess there'd been an earlier version that came out a couple of years earlier — this was still fairly early to be taking advantage of Dropout. So, while training, you've got this Dropout vector, where you sample Bernoulli random variables that are designed to drop out some of the features each time you are doing things. At testing time, you don't do the dropout, but because before you were dropping out a lot of stuff, you scale your weight matrix by the same probability that you used for dropping out, so that you get vectors of the same scale as before. So, as we discussed in the assignment, Dropout is a really effective form of regularization, widely used in neural networks. He didn't only do that; he actually did another sort of funky form of regularization: for the softmax weight vectors, he constrained the L2 norms of the weight vectors in the softmax matrix to a fixed number s, which was one of the hyper-parameters, actually set to the value three. And if the weights were getting too large, they were being rescaled, so they didn't blow up. This isn't a very common thing to do — I'm not sure it's very necessary — but I guess, by showing you a few of the details of this one, my hope is that it gives you some ideas about how there are lots of things you can play around with and muck with if you wanna try different things for your final projects. Okay. So here are some of his final hyperparameters. He's using ReLU nonlinearities, window sizes of three, four, and five for the convolutions, a hundred features or channels for each size, dropout of a half as usual — you get several percent improvement from dropout, which is quite common actually — the L2 constraint s equals three, a mini-batch size of 50, 300-dimensional word vectors, trained to maximize dev set performance. Okay. And here is the big table — which I was too lazy to redo — of performance on these different text classification datasets. There are lots of different ones. These two are both Stanford Sentiment Treebank. This is the subjective/objective language one. This is the question classification: is it asking for a person, a location, a company, or whatever. This one is about perspective, which is another classification task. Consumer Reports is another sentiment one. So, lots of datasets, and then here are lots of models. Some of the models down here are traditional feature-based classifiers. In particular, Wang and me, back in 2012, had pointed out that by taking certain steps with n-gram features and other forms of normalization, you could actually get quite good results with just traditional feature-based classifiers. So many people use that as a baseline for showing that you can do better things. The ones up here were tree-structured neural networks that my group was very fond of in the early 2010s, and then, up at the very top, his CNN models. And as you can see, it's sort of a mix. Sometimes the CNN model wins, like in this column and this column; sometimes it doesn't win, like in these columns. But in general, what you do see from this is that this is an extremely simple convolutional neural network model, and it actually does kind of well across these tasks. You can quibble with this results table — and again, in terms of writing your project proposal, one thing that you should do is think about what you're reading, because a lot of papers aren't perfect, and there are reasons to quibble with what they claim. And sometimes, if you think about what they're claiming and whether it's reasonable, there are reasons why it's not, or there are ideas of how you could do things differently or show something different. The main reason why you could quibble with Yoon Kim's results table is — well, he already said, as I had a couple of slides back, that Dropout gives you a two-to-four percent accuracy improvement in these neural nets. But most of these systems, because they are older and were done before Dropout was invented, didn't make use of Dropout. But any of the neural net systems up here could have used Dropout, and presumably it would have given them a couple of percent gain as well. So, arguably, this is a biased, unfair comparison, and the right thing would have been to compare all the systems using Dropout.
But despite that, a lot of people noticed this paper, because it showed that this very simple, very fast convolutional architecture could give you strong results for text classification. That's that. Yes. So, in summary — and something that you should be thinking about for projects and otherwise — we're effectively building up a bigger toolkit of different tools you could be using, for projects or future work or whatever it is. Starting off, we had word vectors, and then we could build bag-of-vectors models by just taking the word vectors and averaging them. And, you know, that's actually a surprisingly good baseline to start with. We suggest that in many cases, for things like projects, you should use that: see how well it does, and make sure your model works better. In particular, you can do even better with that if you add some extra ReLU layers on top, which is an idea that's been explored in deep averaging networks. Then we looked at window models, which were very simple: you're just taking these five-word windows and computing a feed-forward network on them, and they work very well for word classification problems that only need local context — things like part-of-speech tagging or NER. But then we've gone ahead and looked at some other models. And so, CNNs are very good for text classification, and they're very good because they parallelize really well on GPUs, which is something I'll come back to again later. For the general task of representing sentence meaning, they're actually an efficient, versatile, good method, which has been used quite a bit. And then they contrast with recurrent neural networks. Recurrent neural networks have some advantages. They're sort of more cognitively plausible, because you're reading through the text and getting its meaning. Recurrent neural networks are good for things like sequence tagging and classification, and building language models to predict what's coming next. They can do really well when combined with attention. But they also have some disadvantages. They're way slower than convolutional neural networks, and if what you wanna do is get out some kind of overall meaning representation of a sentence — "What does this mean? Are these two phrases paraphrases of each other?" — there are now many results that show that people don't get better results with recurrent neural networks; they can get better results using techniques like convolutional neural networks. Okay. So the next step then is to head towards our complex convolutional architecture example. Before getting to that, I just wanna introduce a few concepts that we haven't seen, all of which start to turn up when we do this. So we spent a lot of time in the sequence models part talking about gated models — the gated recurrent units and the LSTM units. But the idea of a gate is general: we can calculate something, put it through a sigmoid nonlinearity to get a value between zero and one — or a vector of values between zero and one — and then do a Hadamard product with a vector, to gate it between its value and zero. So that suggests the idea that you could also apply gates vertically when you're building multilayer networks.
And after the successes of LSTMs had been proven, an idea that really took off was people exploring: how can we use these ideas of skip connections and gating in a vertical direction? And here are two versions of it. This one is a very simple one, but a very successful one, that's basically just about a skip connection. This is referred to as a residual block, which is used in residual networks, otherwise known as ResNets. In a residual block, for each block, you allow a value just to skip ahead to the next layer, or you can stick it through a conv block — and the typical conv block is: you go through a convolutional layer, then through a ReLU nonlinearity, then another convolutional layer — and then when you come out, you just sum these two values. So this is the same idea that summing values is magical, in the same way as in an LSTM. And then you put the output of that through another ReLU, and this thing here is called a residual block, and commonly you'll stack residual blocks on top of each other. There's one little trick here, which is that you need to use padding, right? Because at the end of the day, since you want to sum these two pathways, you want them to be the same size. And if you have them shrinking in the conv blocks, you wouldn't be able to sum them. So you want to have padding at each stage so they stay the same size, and so that you can add them together. Here's a different version of a block, which is sort of more LSTM-ish — and indeed this block was developed by Jürgen Schmidhuber and students, the same guy who's behind LSTMs, and you can see the same thinking. It's called a highway block. In a way it's similar: you've got an identity x that skips a nonlinear block, or you can have it go through exactly the same stuff — conv, ReLU, conv. The difference is that, unlike the residual block, this time there are explicit gates: a T-gate and a C-gate. So you're multiplying both the path through here and the path through here by a gate — kind of like the input gates that we saw before — and then summing them together. That sort of feels more powerful, but it's not actually clear that it is more powerful. I mean, the residual block actually has a very simple semantics, because the default is just: you walk this way, and you carry forward your value and do nothing. So what this block's job is, is to learn a delta — what kind of deviation you have from doing nothing. And that's a nice simple semantics, which seems to work well for learning things in neural networks. The highway block has more complicated apparent semantics, because you're taking some part of the identity, multiplying it by this gate in a Hadamard product, and some part of the conv block, multiplying it by this other gate T in a Hadamard product. So that feels more powerful, as it gives you a lot more control, because you can take pieces of the different ones and so on. But if you think about it for a bit longer, mathematically it's actually not any more powerful: you can represent anything you can do with this one with that one.
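Here are rough sketches of the two blocks, for 1D text convolutions. The padding=1 keeps the sequence length fixed so the skip pathway and the conv pathway can be summed; the highway version couples the carry gate as C = 1 − T, which is a common simplification rather than the fully general two-gate form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)

    def forward(self, x):
        h = self.conv2(F.relu(self.conv1(x)))
        return F.relu(x + h)              # identity skip plus learned "delta"

class HighwayBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)
        self.gate = nn.Conv1d(channels, channels, 3, padding=1)

    def forward(self, x):
        h = self.conv2(F.relu(self.conv1(x)))
        t = torch.sigmoid(self.gate(x))   # T-gate; carry gate is 1 - T here
        return t * h + (1.0 - t) * x      # gated mix of transform and identity

x = torch.randn(2, 64, 30)
print(ResidualBlock(64)(x).shape, HighwayBlock(64)(x).shape)
```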
The way to think about that equivalence is: well, here you're kind of keeping only part of the identity, but what you could do is keep the whole of the identity and see it as the conv block's job to subtract off the bits that this one isn't keeping — which you can do, theoretically. And so anything you can compute as a function with the highway block, you can actually compute with a ResNet block. And then, as quite often in neural network land, the question isn't some kind of proof of what can or can't be computed; it comes down to learning and regularization questions, as to which of these actually proves better as something to use in a learning architecture. Okay. Second concept: batch normalization. When people are building deep convolutional neural networks, from 2015 onwards, they almost always use batch normalization layers, because this makes your life a lot better. And if they're not using batch normalization layers, they're normally using one of the other variant ideas that people have suggested, such as layer normalization, which is meant to do about the same thing. So what does batch normalization do? I think many of you will have seen, somewhere in stats or otherwise, the idea of doing a z-transform, which means you take your data, you work out its mean and you work out its standard deviation, and then you rescale by subtraction and multiplication so that you have a set of data which has a mean of zero and a standard deviation of one. Most people have seen that, right? Yeah? So batch normalization is effectively doing exactly that, but in a weird way. What you're doing is taking each mini-batch — whatever random 32 examples you've stuck in a mini-batch — running them through a layer of your neural network, like a conv block that we saw before, and you take the output of that mini-batch and then you do a z-transform on it. And then it goes forward into the next conv block or whatever, and the next time you have a different mini-batch, you just z-transform that. So it seems a little bit weird — you're just doing it on the output of these mini-batches — but that's proven to be a very effective thing to do. It means that what comes out of a conv block always has the same kind of scale, so it doesn't fluctuate a lot and mess things up, and it tends to make the models much more reliably trainable, because you just have to be much less fussy about a lot of things. A lot of the things we've talked about — initializing your parameters and setting your learning rates — are about keeping the scale of things about right, so they don't get too big or too small and such. Whereas if you're doing batch normalization, you're forcing the scale to be the same size each time, and so therefore you kind of don't have to do the other stuff as carefully, and it still tends to work pretty well. So that's a good technique to know about.
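A tiny sketch of that z-transform view: BatchNorm1d standardizes each channel to mean zero and standard deviation one across the mini-batch (it also has learnable scale and shift parameters, initialized to one and zero, which I'm glossing over here).

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=64)

h = torch.randn(32, 64, 100) * 5.0 + 3.0   # a badly scaled mini-batch of 32
out = bn(h)                                 # z-transform per channel

print(out.mean(dim=(0, 2))[:3])             # roughly 0 for each channel
print(out.std(dim=(0, 2))[:3])              # roughly 1 for each channel
```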
Okay. One last thing to learn about: there's a concept of size-one convolutions. And actually, I guess I named this wrong, because I wrote down "one-by-one convolutions", since that's the term you normally see — but that comes from the vision world, where you have 2D convolutions. So I should have just called these size-one convolutions. So you can have convolutions with a kernel size of one, and when you first see that, it seems like it makes no sense whatsoever, because the whole idea of a convolution was that I was taking a patch and calculating something from it — if I'm not looking at any other words, surely I'm calculating nothing. But what actually happens in the size-one convolution is: if in a previous layer you'd calculated some number of channels — whatever it was, 32 channels or something like that — the size-one convolution acts as a tiny little embedded fully-connected network over those channels. So you're doing a position-specific fully-connected network for each row of your data. And you can do that for various reasons. You can do it because you want to map down from having a lot of channels to having fewer channels, or you can do it just because you think another non-linearity will help, and this is a really cheap way to get one. Because the crucial thing to notice is that if you put fully-connected layers over everything, they involve a lot of parameters, whereas putting in these size-one convolutions involves very few parameters, because you're just doing it at the level of a single word.
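A sketch of that: a kernel-size-1 convolution mapping 64 channels down to 32 acts as the same little fully-connected layer at every position, and its parameter count is independent of the sequence length.

```python
import torch
import torch.nn as nn

reduce = nn.Conv1d(in_channels=64, out_channels=32, kernel_size=1)

x = torch.randn(1, 64, 100)
print(reduce(x).shape)     # (1, 32, 100): same positions, fewer channels

# 64*32 weights + 32 biases = 2080 parameters, however long the input is --
# versus a fully-connected layer over the flattened (64*100)-dim input,
# which would need hundreds of thousands.
print(sum(p.numel() for p in reduce.parameters()))   # 2080
```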
Okay. Two random things, and then I'll go on to my complex model. The first is almost an aside, but it shows something different that you could do, and something you could play with. When we talked about machine translation, we talked about the seq2seq architecture that was introduced in 2014 and has been very successful for machine translation. But actually, the year before that came out, there was a paper doing neural machine translation by Nal Kalchbrenner and Phil Blunsom in the UK. And this was actually essentially the first neural machine translation paper of the modern era. If you dig back far enough, there are actually a couple of people that tried to use neural networks for machine translation in the '80s and '90s, but this was the first one that restarted it. And they didn't actually use a seq2seq architecture: for the encoder, they used convolutional neural networks. They had a stack of convolutional neural networks that progressively shrunk down the input and then finally pooled it to get a sentence representation, and then they used a sequence model as the decoder. So that's something you could try in some other applications — for encoders, it's really easy to use convolutional neural networks. There has been work on using convolutional neural networks as decoders as well, though that's a little bit harder to get your brain around and isn't used nearly as much. Then the second thing I want to mention — because we'll turn to it in just a minute — is that so far we've done convolutional models over words, so our kernels are effectively picking up word n-gram units: two-word or three-word sub-sequences. And the idea that then developed fairly soon was: maybe it would also be useful to use convolutions over characters. So you could run a convolutional neural network over the characters of a word to try and generate a word embedding, and this idea has been explored quite a lot — it's part of what you guys are gonna do for assignment five: build a character-level ConvNet for your improved machine translation system. I'm not going to say a huge amount about the foundations of that today, because Thursday's lecture is talking about subword models, and we'll go through all the details of different subword models there. But I wanted to show you a complex convolutional neural network which is also used for text classification — essentially the same task as Yoon Kim's model — and this model is actually built on characters, not on words. So at the foundation of it, we're not having a word-level model. This is a paper from 2017 by the four authors shown here, people working at Facebook AI Research in France. And they had an interesting hypothesis for this paper, which was essentially to say: by 2017, people using deep learning for vision were building really, really deep networks and finding that they worked much, much better for vision tasks. Essentially, to some extent, the breakthrough was that once these ideas emerged, it proved that it wasn't just that you could build a six-layer or an eight-layer convolutional neural network for vision tasks — you could start building really, really deep networks for vision tasks, with tens or even hundreds of layers, and those models, when trained on a lot of data, proved to work even better. So if that's what's in your head, and you then look at what was — and indeed is — happening in natural language processing, the observation is: you know, these NLP people are kind of pathetic; they claim they're doing deep learning, but they're still working with three-layer LSTMs. Surely we can make some progress by building really deep networks that kinda look like vision networks, and using them for natural language processing goals. And that is precisely what they set about doing. They designed and built a really deep network which sort of looks like a vision stack — a convolutional neural network that is built over characters. I've got the picture of it here, but it's sufficiently deep that fitting it on the slide and making it readable [LAUGHTER] is a little bit of a challenge — but we can try and look at this. So at the bottom, we have the text, which is a sequence of characters. When people do vision object recognition on pictures, normally all the pictures are made the same size, right? You make every picture 300 pixels by 300 pixels or something like that. So they do exactly the same for NLP: they have a size for their document, which is 1024 characters. If it's longer than that, they truncate it and keep the first part. If it's shorter than that, they pad it until it's of size 1024, and then they're gonna stick it into their stack. So the first part is that for each character, they're going to learn a character embedding, and their character embeddings are of dimensionality 16. So the piece of text is now 16 by 1024. They're going to stick that through a convolutional layer with a kernel size of three and 64 output channels, so you now have something that's 64 by 1024 in size. You now stick this through a convolutional block. I'll explain the details of that convolutional block on the next slide, but you should be thinking of that ResNet picture I showed earlier, where you can either be going through some convolutions or taking this optional shortcut.
Then another residual block, where you can be going through convolutions or an optional shortcut. They're then doing local pooling, in the same way people typically do in vision. So commonly, what people do in vision systems is shrink the size of the images by doing pooling that halves the dimensions in each direction. But at the same time as you do that in your neural network, you expand the number of channels — you make it deeper in terms of the number of channels at the same time as you make it smaller in the x, y size of the image. So they do exactly the same, apart from with these one-dimensional convolutions. Before, we had 64 channels in our 1024-character embedded document. Now we pool it, so we're going to have 512 positions — which are sort of like pairs of characters — but we now have 128 channels. And then they kind of repeat that over and over again, right? So there are two more convolutional blocks — which I'll explain more, but they're sort of residual blocks — then they pool again and do exactly the same thing. So now there are 256 positions, which are like four-character blocks, and they have 256 channels. I can't point high enough, but they repeat that again and pool again. So now they've got 128 positions, which are about eight characters each, and they have 512 channels representing that. They pool again, they have convolutional blocks again, and then — lo and behold, because I said that even the weird ideas are going to turn up — right up there, they're doing k-max pooling, and they're keeping the eight strongest values in each channel. So at that point, they've got something of size 512 by eight: sort of like eight of the eight-character sequences have been deemed important to the classification and are kept, but they're kept per channel, and there are 512 channels. You're then putting that through three fully connected layers. Typically, vision systems have a couple of fully connected layers at the top, and the very last one of those is effectively feeding into your softmax. So it's of size 2,048 times the number of classes, which might just be positive-negative — two classes — or might be topic classes. So, yeah, it's essentially like a vision stack, but they're going to use it for language. Okay. The bit that I hadn't quite explained was these convolutional blocks, but it sort of looks like the picture that we had before, or slightly more complicated. You're doing size-three convolutions with some number of channels, depending on where you are in the stack. You're then putting it through a batch norm, as we just talked about, then through a ReLU non-linearity, and repeating all those three things again — and remember, there was this skip connection that went right around the outside of the block. So this is a residual-style block. So that's the kind of complex architecture you can put together, and try in your final projects if you dare, in PyTorch.
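Here's a sketch of that block and the pooling step in PyTorch. The details are simplified relative to the actual paper (for instance, the channel doubling here is done with a size-one convolution after the pooling), but it shows the conv–batch norm–ReLU-twice pattern with the skip connection around the outside, plus the halve-the-length, double-the-channels step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)

    def forward(self, x):
        h = F.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return F.relu(x + h)                     # residual connection

pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)  # halves the length
grow = nn.Conv1d(64, 128, kernel_size=1)                 # doubles the channels

x = torch.randn(1, 64, 1024)    # 64 channels over 1024 characters
h = ConvBlock(64)(x)            # (1, 64, 1024)
h = grow(pool(h))               # (1, 128, 512)
print(h.shape)
```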
Yeah, so, for the experiments: one of the things that they were interested in and wanted to make a point of is that some of these traditional sentence and text classification datasets that have been used in other papers, like Yoon Kim's paper, are effectively quite small. Something like that Rotten Tomatoes dataset is actually only 10,000 examples — 5,000 positive, 5,000 negative. And they sort of have the idea that, just like ImageNet was needed for deep learning models to really show their worth in vision, to show the value of a huge model like this you need to have really big datasets. So they get some much bigger text classification datasets. Here's an Amazon review positive-negative dataset, for which they have about 3.6 million documents; Yelp reviews, 650,000 documents. Much bigger datasets. And here are their experiments. Okay. So the numbers at the top are, for the different datasets, the best previous results published in the literature. And if you read the footnotes, there are a few things that they want to star: the ones that have a star next to them use an external thesaurus, which they don't use, and the Yang method uses some special techniques as well, which I cut off. And the other thing to mention is that these numbers are error rates, so low is good — the lower you get them, the better. And then these are all of their results. So what can you get out of these results? Well, the first thing that you can notice is that, basically, with these results, the deeper networks are working better, right? So — well, no, I think the one that I have the picture of isn't the full thing — but they have ones with depth 9, 17, and 29 in terms of the number of convolutional layers, and the deepest one is always the one that's working best. So that's evidence for deep networks. That didn't keep on working, though. An interesting footnote here is: I guess they thought, oh, this is cool, why don't we try an even deeper one that has 47 layers and see how well that works? And the results were sort of interesting. The 47-layer one worked a fraction worse than this one. In one sense, they showed that residual layers work really well: they did an experiment of trying to train a 47-layer network without using residual connections, and, well, it was a lot worse — the numbers got about two percent worse. And they trained one with residual connections, and the fact of the matter is the numbers were still a teeny weeny bit worse — sort of 0.1 of a percent worse. So, you know, the deeper ones work just about as well, but no better. Nevertheless, that's kind of different to the situation in vision, because for the residual networks that people are using in vision, this is sort of like the very minimum depth that people use. If you're using residual networks in vision, typically you might use ResNet-34 if you're really short on memory and want to have a small model, but you just know you'd get better results if you used ResNet-50, and in fact, if you used ResNet-101, it'd work even better again. And so somehow — whether it's got to do with the different nature of language, or the amounts of data, or something — you can't yet go to the same depth that you can in vision. But other results: the other thing they're comparing here is three different ways of sort of shrinking things down. You could be using the stride in the convolution, you could be using local max pooling, and you could be using k-max pooling. And in general, they give slightly different numbers, as you can see.
Each one wins on at least one of these datasets — actually, at least two of these datasets. But not only does max pooling win for four of the datasets; if you look at the numbers, max pooling always does pretty well. Max pooling does pretty well here, whereas the convolutional stride works badly, and over here max pooling works pretty well and the k-max pooling works kind of badly. So their recommendation at the end of the day is that you should always just use max pooling of a simple kind — that seems to be fine, and nothing else is actually worth the trouble of thinking about. Okay. Were there any other conclusions I wanted to mention? Okay, I think that was most of that. I guess their overall message is that you can build super good text classification systems using ConvNets, and you should take away that message. Okay. So there are just a couple of minutes left. There was one other thing that I wanted to mention, but I think I'll just mention it very quickly, and you can look at it in more detail if you want to. So we have this situation that recurrent neural networks are a very standard building block for NLP, but they have this big problem that they just don't parallelize well. And the way we get fast computation in deep learning is that we find things that parallelize well, so that we can stick them on GPUs. GPUs are only fast if they can be simultaneously doing the same computation many times, which is trivial for a convolutional neural network, because you're doing precisely the same computation at every position. But that's not what's happening in a recurrent neural network, because you have to work out the value at position one before you can start to calculate the value at position two, which is used for the value at position three. So this was a piece of work done by sometime CS224N co-instructor Richard Socher and some of his people at Salesforce Research, on saying: how can we get the best of both worlds? How can we get something that's kind of like a recurrent neural network, but doesn't have the bad computational properties? And the idea that they had was: well, rather than doing the standard LSTM-style thing, where you're calculating an updated candidate value and your gates in terms of the preceding time slice, maybe what we could instead do is stick a relation between time t minus 1 and time t into the max pooling layer of a convolutional neural network. So we're still calculating a candidate and a forget gate and an output gate, but the candidate and the gate values are computed via a convolutional operation, inside the pooling layer. There's no free lunch — you can't get true recurrence and not pay the penalty — so this is giving you a sort of pseudo-recurrence, because you are modeling an association between adjacent elements at each time slice, but it's worked out locally rather than being carried forward in one layer. But what they found is that, if you made your networks deeper using this idea, then you again start to expand your window of influence, so you get a certain amount of information being carried forward.
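Here's a rough sketch of that idea. The candidate and gate values are computed in parallel by causal width-2 convolutions, and only a cheap elementwise mixing recurrence remains — this is a simplification of the actual quasi-recurrent architecture (the output gate is dropped, for one thing).

```python
import torch
import torch.nn as nn

class QuasiRecurrentLayer(nn.Module):
    def __init__(self, in_channels, hidden):
        super().__init__()
        # width-2 convolutions relate positions t-1 and t; the left-padding
        # below keeps them causal (they never look at the future)
        self.z_conv = nn.Conv1d(in_channels, hidden, kernel_size=2)
        self.f_conv = nn.Conv1d(in_channels, hidden, kernel_size=2)

    def forward(self, x):                     # x: (batch, channels, time)
        x = nn.functional.pad(x, (1, 0))      # causal left-padding
        z = torch.tanh(self.z_conv(x))        # candidates, all in parallel
        f = torch.sigmoid(self.f_conv(x))     # forget gates, all in parallel
        h, hs = torch.zeros_like(z[:, :, 0]), []
        for t in range(z.size(2)):            # only this cheap mix is sequential
            h = f[:, :, t] * h + (1 - f[:, :, t]) * z[:, :, t]
            hs.append(h)
        return torch.stack(hs, dim=2)

out = QuasiRecurrentLayer(4, 8)(torch.randn(2, 4, 10))
print(out.shape)                              # (2, 8, 10)
```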
You can get them to work about as well as an LSTM does, but you can get them to work much faster, because you're avoiding the standard recurrent operation and keeping everything you can parallelize, with the recurrence confined to the pooling operations. So that was an interesting alternative way of trying to get some of the benefits. I think long-term this isn't the idea that's going to end up winning out. Next week we're going to talk about transformer networks, which actually seem to be the idea that's gained the most steam at the moment. Okay. I'll stop there for today. Thanks a lot.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_12_Subword_Models.txt
Okay. Hi, everyone. Let's get started again. First of all, let me say a bit about Assignment 5. Assignment 5 is coming out today. It's a brand new assignment, so you guys are the guinea pigs for that. It essentially builds on Assignment 4, so it's okay if you didn't do perfectly on Assignment 4 — but I think actually most people did. What we're going to be doing is adding convolutional neural networks and subword modeling to the neural machine translation system, to try to make it better. This assignment is coding heavy and written-questions light. The coding you have to do isn't actually more difficult than Assignment 4; it's kind of like Assignment 4. But what we're hoping is that this time you'll be able to do it on your own. What I mean by that is, for Assignment 4, there was tons of scaffolding telling you what everything should be, and there were all of these autograder checks, and you could keep working on your code until it passed all the autograder checks, and everybody did. So it was very coddled, shall we say. But what we're really wanting to achieve — sorry, question. [inaudible] Yes. So what we're hoping is that this can be useful: short-term pain, but a more effective ramp to doing the final project, and indeed for the rest of your life. The reality is that in the rest of your life, if you're going to be doing things with deep learning, you have to work out what kind of model to build and which pieces to stitch together, and how to write some tests to see if it's doing something sensible. And if it's not doing something sensible, to figure out how you could change things and try different things and get it to work sensibly. So that's what we're hoping people can do in Assignment 5: you've got to figure things out. You should write your own testing code. We don't have a public autograder, so that's part of working out your own sanity checks — doing things like what I talked about last week, getting simple bits working, confirming that they work on minute amounts of test data, and so on. In particular — the one particular part of that we were planning for this assignment; I was looking for it, but it's on the next slide — for this assignment and beyond, we're going to enforce rules more like they are in CS107, for those of you who are undergrads, meaning that the TAs don't look at and debug your code for you. Of course we still want TAs to be helpful: come to them with your problems, and talk about how you're meant to use different things in the PyTorch library. But you shouldn't regard it as the TA's job to take a big Python file, tell you what's wrong with it, and fix it up for you. The precise policy for that is written up on Piazza. Okay. Any questions about that, or do I go straight on? Okay. So, today's lecture. In some sense, today's lecture is an easy lecture.
Last time's lecture had a ton of new stuff on neural networks that you hadn't seen before: we did ConvNets, and we did pooling layers, and we did highway and residual connections, and batch norm, and I don't know, whatever else we did — size-one convolutions, I guess. Whereas in this lecture, in terms of neural network machinery, there isn't any new stuff at all. So this is really easy. And this is also really a new lecture, but it was put in for a reason. The reason relates to a remark I made last time about how lots of stuff keeps changing in neural network land. At the time we first designed this class — and a lot of the structure of it is still that way — around 2014-2015, it was basically axiomatic that all deep learning models for natural language processing worked off words. And therefore it completely made sense that we start with word vectors and then look at things like recurrent models over words. Whereas the fact of the matter is, in the last approximately three years there's been a ton of new work, including some of the most influential new work, building language models that aren't built over words — they're built over pieces of words or characters. And so this lecture is meant to give you some sense of these other ways of doing things and some orientation to what's going on. But the actual models we'll look at use all of the building blocks we've already seen, things like RNNs and ConvNets. So let's get into this. I'm going to start off with a teeny bit of linguistics, learning about the structure of language at its lower-level units, and then we'll see how that pans out for things like character-level models. So in linguistics, if you start at the bottom of the totem pole, the first level is phonetics, which is understanding the sounds and the physiology of human speech. That's sort of like physics, or physiology: there are mouth parts that move, there are ear parts that act as filters, and there are audio waves in between the two of them. That's kind of uncontroversial in some sense. But above that level, the standard thing people do for the analysis of human languages is to say that human languages seem to make use of a relatively small set of distinctive units, which are commonly called phonemes, and which are actually categorical. And the idea here is that our mouths are continuous spaces: they've got these various parts like tongues and pharynges, but it's a continuous space, so we can actually make an infinite variety of sounds. If I open my mouth and apply voicing and just wiggle my tongue around, I can go [NOISE] and make an infinite variety of different sounds. But the reality is that human languages aren't like that: out of that infinite variety of sounds, we distinguish a small space of sounds. And something that happens when languages change is that the space of sounds that are seen as important and distinguished in a language changes. And that happens even inside one language, such as English.
And I'm about to give an example of that. People in cog psych talk about the phenomenon of categorical perception. What that means is that there's really something continuous, but humans perceive it as belonging to fairly sharp categories. You can see that with, you know, styles of clothing, or whether someone counts as fat or not, but the most famous examples of categorical perception are in language, where we can make an infinite variety of sounds but people perceive them as categories. Effectively, when you have categorical perception, the differences within a category are perceived to have shrunk — you barely notice them at all — while differences across categories are expanded and very clear. One of the cases that's been studied a lot is what's referred to as voice onset time. Lots of languages, including English, have pairs of sounds like p and b — 'pah' and 'bah' — and they differ based on when voicing starts. So 'bah' has a voiced sound, like a vowel, right in it. And that's a continuous parameter: you can make any point along a spectrum between a p and a b, but human beings who speak English perceive just two points on that spectrum, and you don't really notice the fine differences between them. Some languages distinguish more points on the spectrum: Thai distinguishes three different consonant sounds depending on the voice onset time. Something that might be more accessible to you is this example of language change. For a speaker like me, there's 'caught' and there's 'cot', and those are different vowels, and I hear them as different vowels. But if you're someone who grew up in the southwest of the United States, these are exactly the same vowel, and you don't distinguish them — you'd think I said the same thing twice, even though I'm saying two different vowels. So even at a dialect level, people develop categorical perception as to which distinctions in sounds they're sensitive to or not sensitive to. The reason I'm mentioning this is that, in some sense, these categorical sound distinctions are what a lot of our writing systems — which we'll come to in a minute — record. Okay. So in traditional linguistics, you have sounds, but sounds don't have any meanings in language: 'pah' and 'bah' don't have meanings, and a and e don't have meanings. The next level up that people normally distinguish is morphology, the parts of words, and this is seen as the minimal level that has meaning. The idea is that lots of words are complex and can be made up of pieces, and these pieces do have meanings. So 'fortune' has a meaning; 'fortunate' ends in this '-ate' ending, which sort of gives fortune to somebody, so it means having fortune; 'un-' has a meaning, which is to reverse that, so 'unfortunate' means that you don't have fortune; and '-ly' then has the meaning of turning this all into an adverb, so you can say 'unfortunately' — not having gotten fortune, something happened. These pieces of words are the minimal things that have meaning. Almost no work in deep learning has tried to make use of this morpheme level of structure.
Actually, a couple of students and I did, six years ago, try to build a system with tree-structured neural networks that put together the meanings of words out of their pieces. But that really isn't an idea that's taken on widely. There's a reason why it hasn't: doing this, and working out the semantically meaningful pieces of words, is kind of hard, and a lot of the time in NLP what people have found is that you can get just about the same kind of results if you just work with character n-grams — the kind of units you put into a convolutional neural net. Because if you have a model that uses character trigrams, and you have start-of-word plus 'un', and 'nfo', and so on, through to 'ly' plus end-of-word, those different character trigrams, in a distributed way, will pick up all the important meaning components of the word pretty well, and that's just good enough. And that's actually a very classic idea that's been revived. Back in the second coming of neural networks, from the mid-80s into the early 90s, there was quite a bit of controversial work on the structure of language, and in particular Dave Rumelhart and Jay McClelland — Jay McClelland is still in the psych department here, if you want to look him up in your spare time — proposed a model of generating past tense forms in English. This was a cog-psych experiment: can we build a system that can learn the past tenses of English verbs? The difficult part is that many verbs are regular — you add the '-ed' ending — but some words are irregular, and you have to learn the irregular patterning. The way they did it, partly because this was early days with respect to sequence models, was to use a representation where words were represented precisely with these character trigrams. That was the representation of words they fed forward in their model. That idea was met with a lot of controversy by linguists, philosophers, and other people with their own ideas of language, and there was a lot of debate in those days about it. But as a purely engineering solution, it proved to be a pretty good way to do things. And this decade there's been other work along these lines, including a deep semantics model developed at Microsoft that uses these kinds of character n-grams to put meaning over words. Okay, so now we might be interested in building models that aren't over words: we're going to have a word written as characters, and we're going to do something with it, such as build character n-grams. Something that's just useful to know is that there's actually a fair amount of variation between languages when you do this; it's not all the same. The first problem is that there are some languages that don't put spaces between words. The most famous example is Chinese. But an interesting fact for those of European ancestry is that when the ancient Greeks wrote ancient Greek, they didn't put spaces between words either. It was a later invention of medieval scholars who were recopying their manuscripts: they decided [NOISE] maybe it would be easier to read if they put spaces in, and then they started doing it.
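As an aside, extracting those boundary-marked character trigrams takes only a couple of lines; here's a small illustrative sketch (the '<' and '>' boundary symbols are just one common convention, not anything from a particular paper):

```python
def char_ngrams(word, n=3):
    """Boundary-marked character n-grams, e.g. 'unfortunately' ->
    ['<un', 'nfo', 'for', 'ort', ..., 'ly>']."""
    padded = "<" + word + ">"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams("unfortunately"))
```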
[NOISE] Most languages these days do put spaces between words, but even then there are a lot of finer cases. In particular, a lot of languages have little bits of stuff — which might be pronouns, or prepositions, or various kinds of joining words like 'and' and 'so' — which are sometimes written together and sometimes separately. In French, you get these pronominal markers for 'I', 'you', 'have brought', and in pronunciation these little words just run together, as in 'je vous ai'; arguably it's almost one word, but it's written as separate words. Then there are other languages which stick things together where arguably they're separate words. In Arabic, you get pronominal clitics and some of these joining words like 'so' and 'and', and they are written together as one word, where arguably they should really be four words. Another famous case is compound nouns. In English, we often write compound nouns with spaces between them, so you can see each noun, even though in many respects a compound noun — something like 'whiteboard' or 'high school' — behaves like one word. Whereas other languages — German is the most famous case, but also other Germanic languages — just write them all as one word, and you get very long words like that. So we can get different words if we just use spaces and don't do much else. Okay. So for dealing with words, there are these practical problems, and we've already started to touch on them. If you're trying to build word-based models, there's this huge space of words — strictly, there's an infinite space of words, because once you allow in things like numbers, let alone FedEx routing numbers, or even just morphology, where you can make words like 'unfortunately', you can just keep expanding the space of words, and you get this large open vocabulary. English is a bit problematic, and it gets way more problematic in a lot of other languages. Here's a lovely Czech word — 'to the worst farmable one' — where you can make much more complex words than in English. Many Native American languages, and other European languages like Finnish, have these very complex words; Turkish has very complex words. So that's bad news. There are other reasons we'd like to be able to look below the word level. When you're translating, there's a wide space of things, especially names, where translation is essentially transliteration: you're going to rewrite the sound of somebody's name roughly — perhaps not perfectly, but roughly correctly according to the sound systems of the different languages. And if we want to do that, we essentially want to operate at the letter level, not the word level. But another huge modern reason why we'd like to start modeling below the word level is that we live in the age of social media, and in social media land there's a lot of stuff written without using the canonical words you find in the dictionary, and somehow we want to start to model that. In some sense, this is the easy case: 'good vibes'. But nevertheless, this is spelled with one, two, three, four, five, six, seven O's, and one, two, three, four, five — oh, and also seven S's; they match.
I don't know if that's deliberate or not. [LAUGHTER] Okay. So this style of writing is very common, and we're kind of sunk if we're treating things at the word level and trying to model this. That's clearly not what human beings are doing: we're looking at the characters and recognizing what's going on. In some sense, that's the easy case that you could imagine preprocessing out. There's a lot of harder stuff that then turns up. There's the abbreviation-speak, like 'I don't care'. But then you get a lot of creative spellings that come off of reduced pronunciations, like 'I'mma go', 'sumn'. And it seems like somehow we need something other than canonical words if we're going to deal better with a lot of this text. Okay. So that suggests we want to start doing that with our models, and that's led to a lot of interest in using character-level models. There are two levels at which you can do this, and we'll look at them both a bit. One level is to say: look, we're still going to have words in our system; basically, we're going to build a system that works over words, but we want to be able to create word representations for any character sequence, and we'd like to do it in a way that takes advantage of recognizing parts of the character sequence that look familiar, so that we can probably guess what 'vibes' means. That solves the problems with unknown words, and we get similar embeddings for words with similar spellings, et cetera. The other alternative is to say: no, just forget about these words altogether — why don't we do all of our language processing on sequences of characters? It'll work out fine. Both of these methods have been proven to work very successfully. And I just wanted to dwell on that for one moment, which goes back to my morphology slide here. When people first started proposing that they were going to build deep learning models over characters, my first feeling was: oh, that is never going to work. Because it seemed like, okay, words have a meaning; it makes sense that you can do something like build a word2vec model, and it will really be able to see words and their distributions and learn the meanings of the words, because words have a meaning. The idea that you're going to come up with a vector representation of h, and a different vector representation of a, and a different vector representation of t, and somehow that'll be useful for representing what a hat means once I put it through enough neural network layers — frankly, it sounded pretty unconvincing to me. But it totally works, so I'm convinced now: empirical proof. And I think what we essentially need to realize is that, yes, at some level we just have these characters that don't mean much, but we then have these very powerful combinatory models with a lot of parameters in them — things like recurrent neural networks and convolutional neural networks — and they are able to store and build representations of meaning from multi-letter groups, in such a way that they can model the meanings of morphemes and larger units and therefore put together word meanings. Yeah.
So, one more detail on using characters: writing systems. If you're a linguist, you tend to think of sounds as primary — those were the phonemes I mentioned before. Essentially, deep learning hasn't tried to use phonemes at all. Traditional speech recognizers often did use phonemes, but in deep learning land you want a lot of data, and the way you get a lot of data is to just use written stuff, because that's the easily found data where you can get millions and billions of words. That makes sense from a data point of view. But the thing that ends up a little weird about it is that when you're building a character-level model, what your character-level model actually is varies depending on the writing system of the language. You have quite different writing systems. Some writing systems are just completely phonemic: there are letters that have a particular sound, and you say that sound. Something like Spanish is pretty much phonemic. Sometimes it's a teeny bit complicated — you might have a digraph, like this one, which is kind of like the N-G of English that is used for the 'ng' sound at the end of 'seeing' — but basically each letter is a sound, you can read it, and it's just phonemic. That contrasts with something like English, where all the non-native speakers know the spelling is terrible. It's got this highly fossilized system that was once upon a time phonemic — in the tenth century or something — but now words have fairly arbitrary spellings that don't represent the sounds very clearly. Still, it's sort of a phonemic system. But then there are languages that use larger units. This is Canadian Inuktitut, which I just put in there because it's such a pretty writing system. There are a lot of languages that represent syllables with their characters. You have something like this in Korean, for example, with Korean Hangul, where each character is a syllable — a consonant-vowel combination like 'ba'. You can then go up a level from that, and if we get back to Chinese again, well, this is also a syllabic system, you could say, but really Chinese characters are much more than just the sound: they also have a meaning. This is really an ideographic system, where there are characters with particular meanings attached to them — whole morphemes written as one letter. Another example of such a system was Egyptian hieroglyphs, if you've seen those: ideographic systems where you have letters with meanings. And then you have writing systems that mix several of those: Japanese is a mixture of partly moraic, partly ideographic systems mixed together. So if you just start off and say, 'Okay, I'm going to build a character-based system,' that's fine, but effectively your character units, like letter trigrams, are just very different in a language like Chinese, where commonly a letter trigram will be a word and a half — three morphemes with meaning — whereas in something like English, your character trigram will be something like T-H-O, which is still much too small a unit to have any meaning. So, moving right ahead.
So, these two kinds of approaches: one was to do a completely character-level model, and the other was to make use of characters to build bigger things that you then put into a more word-level model. I'll do the first one first. For purely character-level models, I actually showed an example of that last time — do you remember? There was that very deep convolutional network from the Conneau et al. work for text classification at the end. That just started with a big line of characters, built these convolutional layers on top of it in a vision-like network, and classified the documents. So that was a completely character-level model. But here's a bit more work on this. For machine translation, people have built systems that just read characters and write characters. And when people first tried to do that, it sort of didn't work. People thought it might help to build character-level models, especially for languages like Chinese or Czech, but they just weren't able to build models that worked as well as word-based models, in either the non-neural or the neural world. But gradually that started to change. People started to have successful character-level decoders, and then, around 2015 and '16, people started to show that you can actually do machine translation very well at just a character level — with a few asterisks. So here's a bit of work that we did: the Luong and Manning one, from 2015, on the last slide. This is looking at English-to-Czech translation, and Czech is a good language to use if you want to motivate doing things at the character level, because it has those big horrible words with lots of morphology, like the example I showed you before — and I'll show you some more later. People had built word-level models for Czech, and they didn't work great, partly because of some of these vocabulary problems. The word-level state of the art at this time was 15.7 BLEU, which, as you know, is much less than we will accept for full grades in your homework. [LAUGHTER] But what counts as a good BLEU score depends on how difficult the language pair is — and you're not doing Czech. So this was the kind of neural MT model we've talked about: a Seq2Seq model with attention, and then it had extra stuff for substituting UNKs with either single-word translation or copying from the source. So it was basically state-of-the-art neural MT of 2015, and it got 15.7 BLEU. The difference isn't big, but we were able to show that you can build this completely character-level model, and it actually did fractionally better. So this showed that, in terms of translation quality, purely character-based models were completely viable at capturing the meaning of text as well as word-based models. Was this a great result? In some ways yes, in another way no. This model was truly terrible to train: it took about three weeks to train, and at run time it also worked very slowly. The problem with character-level models, if you're putting them into something like an LSTM, is that your sequences get way longer — about seven times as long as they used to be.
And since there's not much information in the individual characters, you have to do backpropagation through time much further back. We were running backpropagation through time for 600 steps before truncating it. Maybe that was excessive, but it made the models very slow. But we were able to show that it got some of these good effects. Here's an example translating into Czech: 'her 11-year-old daughter, Shani Bart, said it felt a little bit weird.' Does anyone speak Czech — any Czech speakers? No Czech speakers? Okay, I don't speak Czech either, but we can see that [LAUGHTER] this does interesting things. The second line is the human translation into Czech, which we can use for some guidance. In particular, in Czech there's a single word for 'eleven-year-old', which you can see as the blue word on the second line. And you can see that the character-level model is able to produce, letter by letter, exactly that Czech word for 'eleven-year-old', and that works beautifully. In contrast, for the word-level model, '11-year-old' was an unknown word, because it wasn't in the vocabulary. It had two mechanisms to try to deal with unknown words: it could either do unigram translation of them, or it could just copy them. And for whatever reason, it decided here that the best strategy was to copy, and so that was a complete fail. If we go along further, another thing the character-level model gets right that's really cool is the name Shani Bart. It's able to do the transliteration task I mentioned just perfectly: it turns that into Shani Bartova, which is exactly what the human translator did as well. So it's actually doing some really nice, human-translator-like things. In fact, as best I can tell from spending a bit of time on Google Translate, it does a pretty good job on this sentence, period. This part here starts to be different from the human translation, but it's not actually bad — it's a more literal translation. This word 'citi' actually translates 'feel' in the English text, whereas the human translator didn't actually use the word 'feel' in the Czech version; they just went with 'was a little bit weird or strange'. So that's cool. Okay. Here are a couple more results on this. Here's another system that was built the next year by Jason Lee, Kyunghyun Cho, and Thomas Hofmann. They wanted to do something much more complex and neural for understanding the meaning of the text on the source side, so they used more of the kinds of technologies we saw last time. On the encoder side, you start off with a sequence of character embeddings. Then you use convolutions of widths three, four, and five over characters to get representations. Then you do max-pooling with a stride of five, so you get a max-pooled representation of pieces of the text for each of the three-, four-, and five-width convolutions. You then feed that through multiple layers of highway network, and feed that through a bidirectional gated recurrent unit, and that gives you your source representation. On the decoder side, it was the same as our decoder: it just ran a character-level sequence model.
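Here's a rough sketch of what an encoder stack in that spirit might look like — not the authors' code, just the shape of the idea, with made-up sizes and the highway layers omitted for brevity:

```python
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Sketch of a fully character-level encoder in the spirit of
    Lee, Cho & Hofmann (2016): char embeddings -> parallel convolutions
    -> strided max-pooling -> bidirectional GRU. Sizes are invented."""
    def __init__(self, n_chars=100, d_char=64, d_conv=100, d_hidden=256):
        super().__init__()
        self.emb = nn.Embedding(n_chars, d_char)
        # parallel convolutions of widths 3, 4, 5 over the character sequence
        self.convs = nn.ModuleList(
            [nn.Conv1d(d_char, d_conv, k, padding=k // 2) for k in (3, 4, 5)])
        # max-pool with stride 5 to shorten the sequence fivefold
        self.pool = nn.MaxPool1d(kernel_size=5, stride=5)
        self.gru = nn.GRU(3 * d_conv, d_hidden, batch_first=True,
                          bidirectional=True)

    def forward(self, char_ids):                 # (batch, seq_len)
        x = self.emb(char_ids).transpose(1, 2)   # (batch, d_char, seq_len)
        feats = [self.pool(torch.relu(c(x))) for c in self.convs]
        L = min(f.size(2) for f in feats)        # align lengths after padding
        x = torch.cat([f[:, :, :L] for f in feats], dim=1).transpose(1, 2)
        out, _ = self.gru(x)                     # (batch, ~seq_len/5, 2*d_hidden)
        return out
```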
So overall — they were doing the opposite task; this is Czech to English — they start to get better scores. But if you look at these different numbers (I'll explain the systems more in a minute), it seems like the place where they get a lot of value is the character-level decoder, while the very complex model on the source side is giving them almost no value at all. One even more recent paper: this is Colin Cherry and fellow researchers at Google. Last year they did one more exploration of LSTM sequence-to-sequence-style models, comparing word- and character-based models. This is English to French, and this is Czech to English, which is just what we were doing. In both cases, when you have a big model, the character model wins for them — its curve comes out on top. But the interesting thing is that you see different effects depending on the morphological complexity of the language. For a language like Czech, it's a really good idea, if you want to build a good model, to work at the character level — they're getting about a BLEU point of difference there — whereas for a language pair like English-French there's actually only a tiny gain from using a character-level model. Okay, so let me just explain these models. These are models of different sizes, using bidirectional LSTM encoders and one-directional LSTM decoders. The simplest model has a shallow bidirectional LSTM encoder and a two-layer LSTM decoder. The middle model has a three-deep stack of bidirectional LSTM encoders and a four-deep stack of LSTM decoders. And the most complex model has a six-deep stack of bidirectional LSTM encoders and an eight-deep stack of LSTM decoders. This is where it helps to work at Google. Probably for your projects, you don't want to go beyond three or four — stay over here. Okay, so these are the results. Basically, what you're finding is that if you're making smaller models, you're better off with words, but as you go to big models, especially for a morphologically rich language, you clearly start to win from the characters. But there is still a loss, which is essentially exactly the same loss we were suffering from in 2015. This is the time graph — these are the same three models as over here; the axis has just changed to the total number of LSTM layers. Essentially, at the word level you can run any of these three models and they are fast — you can translate in not much time — but for the character-level models your slope is much higher, so it starts to get quite expensive to run the deep character-level models. Okay, so that's that section. Then, chugging along, I wanted to look at other ways of doing things. These are models that in some sense still have words, but where we're going to build word representations out of pieces. There are essentially two families of ways that people have explored doing this. One way is to say: we just want to use exactly the same architecture as we use for a word model, except our words aren't really going to be words — at least sometimes they're going to be pieces of words. Those are often called word piece models, and in particular there's one most common way of doing it, called BPE, which I'll go through in some detail.
The other alternative is to make a mixture or a hybrid: our main model is going to work in terms of words, but we're going to have some facility where we can construct a representation for otherwise unknown words by doing things at the character level or lower. And I'll show you a bit of that as well. Okay, so this is BPE. BPE is actually a pretty simple idea which has nothing to do with deep learning, but its use has become pretty standard and successful for representing pieces of words: it gives you an infinite effective vocabulary while actually working with a finite vocabulary. The origins of byte pair encoding — and the name 'byte pair' — have nothing to do with natural language processing or neural nets; it was originally a compression algorithm, something like compressing your documents with gzip. Basic byte pair encoding says: you've got a collection of stuff made of bytes, you look for the most frequent sequence of two bytes, and you say, okay, I'm going to add that sequence of two bytes as a new element to my dictionary of possible values. That means I can have 257 different 'byte' values, so to speak, so I can shrink the length of my sequence, and I can repeat and do that again. And so essentially, this work suggested that we can apply this kind of compression algorithm as a way of coming up with useful pieces of words — doing it not strictly with bytes, despite the name, but instead with characters and character n-grams. The most common way is to do it with characters, and if you're up with modern times, you know that means Unicode, so you can represent all of these lovely letters like Canadian Inuktitut syllabics and so on. But there's actually a problem with Unicode, which is that there are a lot of Unicode characters — I forget the number; I think there are about 200,000 possible Unicode characters. At any rate, if you want to handle a bunch of languages including East Asian languages, maybe you need something like 20,000 characters, and that's a lot. So there are actually some people who've literally gone back to bytes and said: 200,000 is a really big vocabulary; I don't even want to deal with anything that large — why don't I just run these algorithms over bytes? In UTF-8 encoding, Chinese characters take three bytes each, so you only get whole characters if you merge together several bytes that are common sequences. Okay. So, more concretely, how does this work? We're doing a bottom-up clustering of short sequences. We start with a unigram vocabulary, which is all of the Unicode characters in some data. We then ask: what's the most frequent n-gram here? Initially it will be a bigram pair, and we add that to our vocabulary. So if we start off — I'll come back to this in a minute — let's assume we have a text that has been divided into words, so we do have word tokens. We can represent it as a dictionary and say: here are some words with their frequencies. And so now we look for a common letter sequence, and we say, 'Oh: es.'
That occurs nine times in this data, because we have the counts for the words on the left side. So we start with our vocabulary being all the individual letters. We find a most common letter sequence like 'es', and we say: let's clump that together and make it a new thing in our vocabulary. So now we've got an extra thing in our vocabulary. And now, what's the most common n-gram sequence that clumps something? Well, actually all of these 'es's are followed by 't', so we also have 'es' plus 't' with frequency nine, and we can add 'est' to our vocabulary. Then we ask again: what's another common letter sequence? Let's see — there are seven cases of 'lo', so we can lump those, and then we can lump again and make a 'low'. So if we run this, we start to build these clumps of common letter sequences: common bits like 'est', but also just common words — a word like 'that' in English will very quickly be clumped together and become a unit of our vocabulary. And so we do that for a while. Normally, we decide on a vocabulary size that we want to work with — we say, 'Okay, I want to work with a vocabulary size of 8,000; that'll mean my model will be fast' — and we just keep doing this until we have 8,000 things in our vocabulary. That means our vocabulary will have in it all single letters, because we started with them; it will have common subsequences of words, like the 'es' and the 'est'; but it will also have whole words whenever there are common words — 'the', and 'to', and 'with', and so on will become parts of our vocabulary. And so then, when we have a piece of text, we can do a deterministic longest-piece segmentation of words, and we say that is our set of word pieces. For an input piece of text, we turn it into word pieces, run it through our MT system as if we were using words — but really it's pieces of words — and then on the output side we just concatenate them back together as needed. So we get this sort of automatic word-like system. And that's proved to be very successful. This idea of using byte pair encoding really emerged in 2015, and then in the 2016 Workshop on Machine Translation — which has been the main annual competition for MT systems — several of the top systems were built using byte pair encoding. If you look at last year's competition, there's a bit more variety, but a number of the top systems are still using byte pair encoding; it's just been a good way to do things. For Google's Neural Machine Translation, they effectively use a variant of byte pair encoding. They don't use exactly the same algorithm: rather than just using pure counts, they use a language model and ask, 'What clumping together would maximally reduce the perplexity of my language model?', clump those things, and repeat. And they've done two versions of this model. The first version, the wordpiece model, kind of like byte pair encoding, assumed that you have an initial tokenization into words and you're just having pieces of words, using this algorithm.
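Here's a minimal sketch of that BPE learning loop over a word-frequency dictionary — just the textbook algorithm as described, not any particular library's implementation, and the toy corpus in the comment is invented:

```python
from collections import Counter

def learn_bpe(word_freqs, target_vocab_size):
    """Minimal BPE vocabulary learning over a {word: freq} dictionary,
    where each word is a tuple of symbols (initially single characters)."""
    vocab = {sym for word in word_freqs for sym in word}
    corpus = dict(word_freqs)
    while len(vocab) < target_vocab_size:
        pairs = Counter()
        for word, freq in corpus.items():        # count adjacent symbol pairs
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]      # most frequent pair
        vocab.add(a + b)
        merged = {}
        for word, freq in corpus.items():        # apply the merge everywhere
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == (a, b):
                    out.append(a + b); i += 2
                else:
                    out.append(word[i]); i += 1
            merged[tuple(out)] = freq
        corpus = merged
    return vocab

# toy usage, echoing the lecture's walkthrough:
print(learn_bpe({('n','e','w','e','s','t'): 6, ('w','i','d','e','s','t'): 3,
                 ('l','o','w'): 5, ('l','o','w','e','s','t'): 2}, 15))
```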
Then they did a second version, the sentencepiece model, which you can find at this GitHub site. They said: well, it's problematic if we need to tokenize into words first, because then we need a tokenizer for every language, and that's a lot of work. So maybe instead we can just go from a character sequence, retain whitespace, and regard it as part of the clumping process. Then you just build your word pieces, which will commonly have spaces on one side or the other of them, because things inside a word tend to be the more common clumps, and you build those up. That's proven to be quite successful. In particular, one place where some of you might see this — we've yet to really describe it in class, but there's been this recent work, which we'll actually talk about next week, on building these transformer models — is that Google has released this BERT model, which gives you very good word representations. And if you download BERT and try to use it, what you'll find is that it doesn't operate over words; it operates over word pieces. It has a large vocabulary — not a vocabulary of like 8,000 words; I forget the number, but a large one — yet still not a huge vocabulary, and it's using word pieces. Lots of words are in the vocabulary: if you look at the English model, it not only has words like 'f' in it, it even has words like 'Fairfax' and '1910s', which aren't that common. But nevertheless, to cover all words, it's again using this wordpiece idea. So if I want a representation for the word 'hypatia', that's not in the vocabulary, so I make it up out of pieces. There's an 'h' representation, and then — in the BERT version, which is different from the Google NMT version — the non-initial word pieces are represented with two hashes at the start, so I can put together 'h', '##yp', et cetera, and this would be my representation of 'hypatia'. So effectively I have word vectors for four word pieces, and then I have to work out what to do with them. The simplest and quite common way is to just average the four of them. There are obviously other things you could do: you could ConvNet and max-pool, or you could run a little LSTM to put together a representation. Okay. So those were the models that work with pieces of words to give you an infinite vocabulary and run them through a normal system. The other possibility is to say: we want to work with characters so we can deal with an infinite vocabulary, but we're going to incorporate those into a bigger system. A whole bunch of work has done this, and in some sense it's a fairly obvious thing to do. This work from 2014 was one of the early ones. They said: we can start with characters, do a convolution over the characters to generate word embeddings, and then use those word embeddings in a higher-level model — this was actually a fixed-window model for doing part-of-speech tagging. That makes sense. Instead of a convolution, you could use an LSTM, which is what some work from a year later did, saying: we're also going to build up word representations from characters.
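Before moving on — to make that BERT-style wordpiece handling concrete, here's a sketch of greedy longest-match-first segmentation with the '##' convention. This is a simplification with a toy vocabulary, not BERT's actual tokenizer code:

```python
def wordpiece(word, vocab):
    """Greedy longest-match-first segmentation into word pieces,
    marking non-initial pieces with '##' (a sketch of the idea)."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:          # longest piece that is in the vocab
                pieces.append(piece)
                break
            end -= 1
        else:
            return ["[UNK]"]            # no piece matched at all
        start = end
    return pieces

vocab = {"h", "##yp", "##ati", "##a"}   # toy vocabulary
print(wordpiece("hypatia", vocab))      # ['h', '##yp', '##ati', '##a']
# a common simple way to get one vector: average the piece embeddings
```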
The way they did it was to run character-level Bi-LSTMs over the word, concatenate the two final states, and call that our word representation, then put that word representation into a language model — a higher-level LSTM that works along a sequence of words. [Student question: are they training, like, character embeddings?] Yeah — oh yeah, that's very important to realize. So — this picture is the hidden layer; I guess I'm not actually showing the input layer, but at the input layer you're learning a vector for each character. Effectively, you're doing the same kind of thing we saw before: you start with random representations for each character, you've got this embedded inside a word-sequence LSTM, your goal is to minimize the perplexity of the higher-level LSTM as a language model, and so it filters back its gradients. It wants to come up with character vectors that produce good word vectors, which produce low perplexities. Good question. Here's a slightly more complex and more recent version of trying to do this, where again the idea is: can we build a good language model by starting out from characters, exploiting related subwords and rare words? They built this more stacked, complex model, which we'll go through the stages of. We start with a word represented as characters, we have character embeddings that we build into a convolutional network, and then we head upwards. Taking that one piece at a time: you have a character embedding for each character. You then have a convolutional layer with various filters that work over character two-, three-, and four-grams, so you're getting representations of parts of words. From those convolutions, you then do max-pooling over time, which is effectively like choosing which of these n-grams best represents the meaning of the word. At that point they've got an output representation built from character n-grams, and they feed that into a highway network, like we talked about a bit last time. The output of that, at the word level, then goes into an LSTM network — now a word-level LSTM — and you try to minimize perplexity, like for the neural language models we saw earlier. So what could they show with this? The first thing is that it again just works well as a language model. Despite the skepticism I mentioned earlier, the fact of the matter is that you can build these kinds of character-level models, train them, and they work, to a first approximation, as well as word-level language models. But one of the observations they make is that you can get results that are just as good with much smaller models. Up at the top here are the character-level LSTM models and word-level ones that they built, and here are a whole bunch of other models over this dataset.
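Here's a rough sketch of that per-word pipeline — character embeddings, convolutions over character n-grams, max-over-time pooling, and one highway layer. The sizes are invented; in the real model, these word vectors would then feed the word-level LSTM language model:

```python
import torch
import torch.nn as nn

class CharCNNWordEmbedder(nn.Module):
    """Sketch of a character-aware word representation in the spirit of
    the model just described. Sizes are made up for illustration."""
    def __init__(self, n_chars=100, d_char=16, n_filters=50, widths=(2, 3, 4)):
        super().__init__()
        self.emb = nn.Embedding(n_chars, d_char)
        self.convs = nn.ModuleList(
            [nn.Conv1d(d_char, n_filters, w) for w in widths])
        d_word = n_filters * len(widths)
        # one highway layer: y = g * relu(W x) + (1 - g) * x
        self.transform = nn.Linear(d_word, d_word)
        self.gate = nn.Linear(d_word, d_word)

    def forward(self, char_ids):                 # (batch, word_len)
        x = self.emb(char_ids).transpose(1, 2)   # (batch, d_char, word_len)
        # max-over-time pooling: one feature per filter for the whole word
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        v = torch.cat(feats, dim=1)              # (batch, d_word)
        g = torch.sigmoid(self.gate(v))
        return g * torch.relu(self.transform(v)) + (1 - g) * v
```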
And so as time went by, perplexities had been going down, getting to 78.4, and their point was: we can build pretty much as good a character model, with 78.9 perplexity, but our model is actually much smaller — this model here has 52 million parameters, whereas our model that works at the character level has only 19 million parameters, about 40% of the size. And that seems kind of interesting. But perhaps what's more interesting is to peek inside and see what happens with the representations of words when they're built out of characters, and this part is actually a bit cool. What this is showing is, for the words at the top — while, his, you, Richard, trading — what other words are most similar to them according to the word representations the model computes. The top part is the output of a word-level LSTM model, and that's sort of okay: Richard comes out as similar to Jonathan, Robert, Neil, Nancy, et cetera; 'while' gives although, letting, though, minute — mainly okay. But it's interesting what happens with the character-level models. First of all, remember they had the character embeddings that went through the convolutional layer and the max-pooling. If at that point you ask what things are most similar, it's basically still remembering things about characters: the most similar words to 'while' are chile, whole, meanwhile, and white — at least the first ones all end in 'le'. And you see that pattern elsewhere: close to 'Richard' are hard, rich, richer, richter — 'hard' ends in 'ard', and there's 'rich'. So you're just getting character-sequence similarity; it's not really doing meaning at all. But interestingly, when they then put it through the highway layers, the highway layers successfully learn how to transform those character-sequence representations into something that does capture meaning. If you ask, at the output of the highway layers, what words are most similar, then it seems to be working pretty well: 'while' is similar to meanwhile, and 'Richard' is similar to Edward, Gerard, Carl. They're now working much more like a word-level model in capturing semantic similarity. That seems kind of cool. So then they say: what about if we ask about words that aren't in the vocabulary of the model? If they're not in the vocabulary, the word-level model can't do anything, and that's why you get those dashes there. What they want to show is that the character-level model still works pretty well. If you give it 'look' with seven O's in the middle of it, it correctly decides that 'look' and 'looking' variants are actually the most similar words to it, which works very nicely. Some of the other examples are similar: 'computer-aided' is seen as most similar to computer-guided, computer-driven, computerized, computer — pretty sensible results. And the little picture on the right is showing one of these 2D visualizations of the units that have been learned: the red things are character prefixes, the blue things are character suffixes, the orange things are hyphenated pieces, like the middle of 'computer-guided', and gray is everything else. So there's some sense in which it's picking out different important parts of words. Okay.
And that's also, I guess, just another good example of how you can compose together different kinds of building blocks to make more powerful models, which you might also want to think about for your final projects. Okay. So here's one other example, from a neural machine translation system, of a hybrid architecture that has both word-level and character-level parts. I showed you earlier a purely character-level model. We built that out of interest, to see how well it did, but we were really wanting to build a hybrid model, because that seemed much more practical — something that translated relatively quickly and well. So the idea was that we'd mainly build a word-level neural machine translation system, but we'd be able to work with character-level stuff when we had rare or unseen words. And that turned out to work pretty successfully at improving performance. The idea of the model is this. We run a pretty standard sequence-to-sequence-with-attention LSTM neural machine translation system — it's actually a four-level-deep system, but in my picture I showed fewer than four levels stacked, to make it easier to see things — and we run it with a reasonable vocabulary of 16,000 words. For common words, we just have word representations that we feed into our neural machine translation model, but for words that aren't in the vocabulary, we work out a word representation for them using a character-level LSTM. Conversely, when we start to generate words on the other side, we have a softmax with a vocabulary of 16,000. It can just generate words, but one of those 'words' is the UNK symbol. If it generates the UNK symbol, we take the hidden representation, feed it in as the initial input to a character-level LSTM, have that character-level LSTM generate a character sequence until it produces a stop symbol, and use that to generate the word. Okay. So we end up with this hybrid, composed stack of eight LSTM layers. [Student question: you always get some probability for the UNK symbol, so if you wanted the proper gradient, wouldn't you always have to run it for every word? Do you only run the character-level LSTM when the UNK symbol receives the highest likelihood?] So at training time, there's a determinate piece of text, right? You know the source and you know the target, and at training time we've already decided our vocabulary: we've just decided what the 15,999 most common words are, and those plus UNK are our vocabulary. So for both the input and the output side, we know which words aren't in our vocabulary. If an input word is not in our vocabulary, we run this one; if an output word is not in our vocabulary, we run that one; and otherwise we just don't run them at all, yeah. And the bit that I didn't explain, but which is actually important and perhaps related: when we're calculating a loss that we can backpropagate, up here there are two losses. There's a loss at the word level — in this position you'd like to give probability 1 to generating UNK, but really the model's softmax will say UNK has probability 0.2 or whatever.
So there's a loss there, and then secondarily there's a particular sequence of characters you want to generate, and you've also got a loss based on the probabilities you put over the characters. So then — I think Abby briefly mentioned this — commonly the decoders do some kind of beam search to consider different possibilities before deciding on the highest-probability sequence of words. This was doing a slightly more complex version of that: there's a word-level beam search when running it, and then also a character-level beam search to consider different possibilities, and you want to integrate the two of those together. But essentially, this worked pretty well. So, the winning system at WMT 2015 — which used 30 times as much data as was provided for the task, and ensembled together three systems — this was the system I showed before; it got 18.3. And if you remember, our purely character-level system got 18.5. By building this hybrid system, we were able to build a much better system, about 2.5 BLEU points better than either the word-level or the character-level system. So that was kind of nice, and in particular it was the state of the art at the time. Now, of course, if you were paying very close attention, that's now nowhere near the state of the art, because when I showed you that slide way earlier of the Google system, you will have noticed that they have much higher numbers, in the 20s — but that's what happens as the years go by. Okay. But here's an example that shows these different systems working and some of the mistakes they make. Here's a cherry-picked example where our system, the hybrid system, works perfectly — because that's what you'd expect to see. And you can see some of the defects of things that can go wrong. In this case, the character-level system didn't work here: starting with the 'Steph', it seemed to free-associate a completely made-up name that doesn't really have anything to do with the source. So that one isn't very good. The word-level system went bang here. Remember, when it generates an UNK, it's using attention back to words in the source, and it has two strategies: it can either do unigram translation of the word it's maximally putting attention on, or it can copy that word. In this case, it chose to translate the word it was maximally putting attention on, but that word was 'after' rather than 'diagnosis'. And so you just get this 'po po' coming out of 'after, after', and we've completely lost the word. And in this example, the hybrid system just ends up working beautifully and gives you exactly the right translation. Of course, it's not always that good in the real world. Here's a different example — the example I showed before, with the 11-year-old daughter. In this example, the hybrid model has the same strength as the character model.
It correctly generates "11 years old" at the character level in its translation, but this time, for whatever reason, it's the hybrid model that goes bang in generating the names, and it translates "Shani Bart" as "Graham Bart", whereas the character level model gets it right. Actually, I think this is one of the weaknesses of this hybrid model compared to the character level model. Because the character level generator sits at this sort of second level, the purely character level model is able to use the character sequence as conditioning context very effectively, whereas in our hybrid model, although we feed the hidden representation of the word level model in as the starting hidden representation of the character level model, it doesn't have any representation further back than that of what's in the word level model. And so it tends not to do as good a job of capturing the context that lets it translate things like names. Okay. Almost finished, but there's just one more thing I wanted to mention before the end, which is almost a practical thing. We started off with word embeddings, but now we've been talking a lot about character level models. So surely, even just for word embeddings, you should be able to do useful things with characters or pieces of words. And that's something people started to play with. So in this Cao and Rei paper, they said: let's train a word2vec model using exactly the same loss as word2vec uses, but rather than having word representations, let's start with character sequences and run a bidirectional LSTM to work out word representations. We'll then effectively be training a more complex model, where we're learning character embeddings and LSTM parameters, and that gives us our word representations. That's an idea people have continued to play with, and in particular I just wanted to mention these FastText embeddings. A couple of years ago, people now at Facebook, including the same Tomas Mikolov who did the original word2vec, brought out a new set of embeddings, the FastText embeddings. Their goal was to have a next-generation word2vec: an efficient, fast word vector learning library that was better for rare words and for languages with lots of morphology. And the way they did it was to essentially take the word2vec skip-gram model but augment it to put in character n-grams. More precisely, this is what they did. When you have a word, and my example word is "where", for some n-gram size you represent it as a set of n-grams. This is just about like those units I mentioned right at the beginning, where you have a boundary symbol so you know the beginning and end of the word. So if the length is three, you have beginning-of-word WH, WHE, HER, ERE, RE end-of-word as pieces of representation. And then you have an additional one for the whole word; you do still have whole word representations in this model. So "where" is represented by six things, and you're going to use all six of those things in your computation. If you remember the guts of word2vec, what you were doing was taking vector dot products between your context representation and your center word representation. They're going to do exactly the same thing, but for the center word they're going to use all six of these vectors.
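Before going on, here's a quick sketch of mine of that n-gram decomposition, with a tiny random embedding table standing in for learned vectors (the dimension 4 is just a toy number):

    import numpy as np

    def ngrams(word, n=3):
        # Boundary symbols distinguish prefixes and suffixes; then take all
        # character n-grams, plus one entry for the whole word itself.
        w = "<" + word + ">"
        return [w[i:i + n] for i in range(len(w) - n + 1)] + [w]

    print(ngrams("where"))  # ['<wh', 'whe', 'her', 'ere', 're>', '<where>']

    dim = 4
    vecs = {g: np.random.randn(dim) for g in ngrams("where")}  # toy table

    def word_vector(word):
        # A word's vector is the sum of the vectors of its pieces.
        return sum(vecs[g] for g in ngrams(word))

    def score(word, context_vec):
        # Same dot-product score as skip-gram, using the summed vector.
        return word_vector(word) @ context_vec

    print(score("where", np.random.randn(dim)))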
So you take the vectors corresponding to all six of those representations and you sum them. You're just doing a simple summing operation, and that then gives you your representation for the similarity computation. Very precisely, they don't quite do that, because there's a hashing trick, but I'll leave that out. What they're able to show is that this model actually works pretty successfully. So these are word similarity scores for skip-gram and CBOW, and then for the new model that uses these n-grams. And, you know, at least for one of the English data sets, it doesn't get any better. But what they especially noticed is that for languages with more morphology you get some fairly clear gains: 70 and 69 onto 75, and 59 and 60 onto 66 in the right column. So these wordpiece models do give them a better model of words. And just practically, the FastText library now has word embeddings for about 60 or 70 different languages, so it's a good source of word embeddings for multilingual applications. Okay, I think I am done. So thanks a lot and see you again next week.
Stanford CS224N: Natural Language Processing with Deep Learning (Winter 2019), Lecture 4: Backpropagation
Okay. So great to see everyone back for lecture four of the class. For today's lecture, what I want to do for most of the time is actually get into the heart of these ideas of the backpropagation algorithm for neural nets, and how we can construct computation graphs that let us do backpropagation efficiently to train our neural nets. So overall, this is what I plan to do today. At the end of last lecture, I slightly ran out of time and started mumbling and waving my hands about doing the derivatives with respect to the weights. So I want to do that bit again, and hopefully it communicates slightly better this time. We'll do that and talk a bit more about tips for doing matrix gradients, and about a particular issue that comes up with word vectors. Then, for the main part of the class, we'll be talking about the backpropagation algorithm and how it runs over computation graphs. And for the last part of the class, I'm not going to hide it: it's sort of just a grab bag of miscellaneous stuff you should know about neural networks and training neural networks. You know, I think we dream of a future of artificial intelligence where our machines are really intelligent and you can just say to them, "this is the data and this is my problem, go and train me a model", and it might work. In some future world, that may come along; it's certainly being actively researched at the moment under the topic of AutoML. I guess the question is whether AutoML turns out to be a scalable solution, or whether the climate change consequences of AutoML techniques are sufficiently bad that someone decides these much lower power neural systems might actually still be better for doing some parts of the problem. But anyway, either way, we're not really there yet. And the fact of the matter is, when you're training neural networks, there's just a whole bunch of stuff you have to know about initialization and nonlinearities and learning rates and so on. You know, when I taught this class last time, I somehow thought that people would pick this up by osmosis: that if we gave starter code to people, and in our starter code we initialized our matrices and set our learning rates, then by osmosis people would understand that's what you have to do, and do it. I didn't really teach the practical tips and tricks enough in class, and it was perfectly obvious when we got to final project time that, at least for quite a few people, osmosis hadn't worked. So this time, [LAUGHTER] I'm at least wanting to spend a few minutes on that and point out some of the things that are important. And just in general, the reality of 2018, no, wait, it's 2019 now, 2019 deep learning is that deep learning is still kind of a craft. There's quite a bit you have to know of techniques for doing things that lead neural net training to work successfully, as opposed to your models failing to work. Okay. One final announcement before I go into it. We've been doing some further work on office hour placement, and I guess there are multiple issues, which include the opportunity for local SCPD students without Stanford IDs to get to office hours.
So for the Thursday night office hour, the one after this class, if you'd like to go and talk about the second homework, the Thursday night office hour is going to be in Thornton 110. Now, I didn't know where Thornton was. It made more sense to me when I translated that as "that's the old Terman annex", but that's probably just showing my age, since probably none of you remember when there used to be a building called Terman. So that probably doesn't help you either. But, right, I don't know which direction we're facing; if you're heading that way, I guess, and if you know where the Papua New Guinea Sculpture Garden is: the open grassy area before you get to the Papua New Guinea Sculpture Garden is where Terman used to be, and the building that still stands there is Thornton. Thornton 110 tonight. I think it starts at 6:30, right? 6:30 to nine. Okay. Right. So let me just finish off where we were last time. Remember, we had this window of five words, and we're putting it through a neural net layer, z = Wx + b, a non-linearity h = f(z), and then we're going to get a score as to whether this window has in its center a named entity like Paris, which we get by taking the dot product of a vector u with the hidden layer, s = uᵀh. So this was our model, and we want to work out partial derivatives of s with respect to all of our variables. We did various of the cases, but one we hadn't yet done is the weights, the weights W of this neural net layer here. Okay. So, chain rule: the partial ∂s/∂W is ∂s/∂h times ∂h/∂z times ∂z/∂W. And if you remember, last time we had done some computation of what those first two partial derivatives were, and we said we could just call their product delta, which is our error signal coming from above. That concept of an error signal coming from above is something I'll get back to in the main part of the lecture; it's a central notion. But the bit we hadn't dealt with is this ∂z/∂W. We started to look at that, and I made the argument, based on our shape convention, that the shape of the result should be the same shape as our W matrix: the same n by m shape as W. So we want to work out ∂z/∂W, which is ∂(Wx + b)/∂W. And if that's not obvious, one way to think about it is to go back to the elements of the matrix: first work it out element-wise to think out what it should be, and then, once you've thought out what it should be, rewrite it back in matrix form to give the compact answer. So what we have is the inputs here and a bias term, and we're going to multiply this matrix by the vector to produce the hidden layer. If you think about what's happening there, we've got this matrix of weights, and for a particular weight, its first index corresponds to a position in the hidden layer and its second index corresponds to a position in the input vector. And one weight in the matrix ends up being part of what's used to compute one element of the hidden layer.
So for one element of the hidden layer, you're taking a row of the matrix and multiplying it by the components of this vector, and they sum together and the bias is added on; but one element of the matrix is only being used in the computation between one element of the input and one element of the hidden vector. Okay. So that means, if we're thinking about the partial derivative with respect to W_ij: well, W_ij only contributes to z_i, and it only does anything with x_j. So when we're getting the partial with respect to W_ij, we can work it out just with respect to z_i. And when we look at this multiplication, what we're ending up with is this sum of terms, z_i = sum over k of W_ik x_k, where the weights in that row of the matrix go across the positions of the vector. So the only place where W_ij is used is multiplying by x_j. And at that point, in terms of our basic one-variable differentiation, this is just like: we have 3w, and what's the derivative of 3w with respect to w? It's three, right? So we have a term here, W_ij times x_j, and its derivative with respect to W_ij is just x_j. Does that make sense? Everyone believe it? Fingers crossed. Okay. So for one element of this matrix, we're just getting out x_j. And at that point we say, well, of course we want the answer for the full matrix W. If you start thinking about it, this argument applies to every cell: for every cell of the gradient for W, we get delta_i x_j. So the derivative for a single W_ij is delta_i x_j, and that's true for all cells. So we want a matrix for our gradient which has delta_i x_j in every cell, and the way we can create that is by using an outer product. If we have a column vector of the deltas, the error signals from above, and a row vector x transpose, then when we multiply those together we get the outer product, with delta_i x_j in each cell, and that is our answer: ∂s/∂W = delta xᵀ, the ∂s/∂W we started off with at the beginning. Okay. And we get this form where it's a multiplication of an error signal from above and our computed local gradient signal. That's the pattern we're going to see over and over again, and that we'll exploit in our computation graphs. Okay, all good?
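Since the result was ∂s/∂W = delta xᵀ, here's a numeric sanity check of it, a toy sketch of my own (tanh as the non-linearity, small random matrices):

    import numpy as np

    np.random.seed(0)
    n, m = 3, 5                        # hidden size and input size, toy numbers
    W = np.random.randn(n, m)
    b, u, x = np.random.randn(n), np.random.randn(n), np.random.randn(m)

    def s_of(W_):
        return u @ np.tanh(W_ @ x + b)   # s = u . f(Wx + b)

    # Analytic gradient: delta = u * f'(z) is the error signal from above,
    # and dS/dW is the outer product of delta and x.
    z = W @ x + b
    delta = u * (1 - np.tanh(z) ** 2)    # tanh'(z) = 1 - tanh(z)^2
    grad_W = np.outer(delta, x)

    # Two-sided numeric check on one cell, wiggling W[i, j] a little.
    i, j, h = 1, 2, 1e-5
    Wp, Wm = W.copy(), W.copy()
    Wp[i, j] += h
    Wm[i, j] -= h
    print(grad_W[i, j], (s_of(Wp) - s_of(Wm)) / (2 * h))  # should agree closely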
Okay. So, here's homework two; you're meant to do some of this stuff, and here are a couple of collected tips which I hope will help. Keeping track of your variables and their dimensionality is really useful: if you can work out what the dimensionality of things should be, you're often halfway there. Basically, what you're doing is applying the chain rule over and over again; it always looks like this, but done in the matrix calculus sense of the chain rule. In the homework you have to do a softmax, which we haven't done in class. Something that I think you'll find useful, if you want to break apart the softmax, is to consider two cases: one case is when you're working it out for the correct class, and the other case is for all the other, incorrect classes. In the little derivation I did before, I said, let's work out an element-wise partial derivative, because that should give me some sense of what the answer is. That can be a really good thing to do if you're getting confused by matrix calculus. And I slightly skipped past another slide last time that was talking about the shape convention. For the homeworks, you can work out your answer however you want; you can work it out in terms of numerator-ordered Jacobians if that seems best to you. But we'd like you to give the final answer to your assignment questions following the shape convention, so that the derivative is shaped, as a vector or matrix, in the same way as the variable with respect to which you're taking the derivative. Okay. The last little bit for finishing up this example from last time is to say a little about what happens with words. One answer is: nothing different. But another answer is that they're a little bit of a special case, because really we have a matrix of word vectors, a vector for each word, and you can think of that as a matrix where each row is a different word. But we're not actually connecting that matrix directly up to our classifier system. What we're connecting up to the classifier system is this window, and the window will have in it five words. Most commonly they're different words, but occasionally the same word might appear in two positions in that window. We can nevertheless do exactly the same thing and continue our gradients down and say, okay, let's work out the gradient of this word window vector. If the word vectors are of dimension d, we'll have this 5d vector. But then, what do we do about it? And the answer is: we can just split this window gradient into five pieces and say, aha, we have five updates to word vectors, and we're just going to go off and apply them to the word vector matrix. And if the same word occurs twice in that window, we literally apply both of the updates, so it gets updated twice; or maybe you actually want to sum them first and do the update once, but that's a technical issue. So what that actually means is that we're updating the word vector matrix extremely sparsely: most of the word vector matrix is unchanged, and just a few rows of it are being updated. And soon we're going to be doing stuff with PyTorch; if you poke around PyTorch, it even has some special stuff for this. Look for things like sparse SGD, meaning that you're doing a very sparse update like that.
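Here's that split-and-update idea as a sketch with made-up toy dimensions; note how only the rows for words actually in the window get touched:

    import numpy as np

    d, V, lr = 4, 10, 0.1               # toy vector size, vocab size, step size
    E = np.random.randn(V, d)           # word vector matrix, one row per word
    window_ids = [7, 2, 5, 2, 9]        # five window positions; word 2 repeats

    grad_window = np.random.randn(5 * d)   # stand-in for the backpropped 5d gradient

    for word_id, g in zip(window_ids, np.split(grad_window, 5)):
        # Sparse update: at most five rows of E change; the repeated word
        # simply accumulates both of its updates.
        E[word_id] -= lr * g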
But there's one other interesting thing that you should know about, for a lot of things that you do, which is what actually happens if we push these gradients down into our word vectors. Well, the idea is that if we do that, it would be just like all other neural net learning: we would, in principle, move the word vectors around in such a way that they're more useful for helping determine named entity classification, in this case, because that was our motivating example. So it might, for example, learn that the word "in" is a very good indicator of a named entity, or sorry, a place name, following: after "in" you often get London, Paris, et cetera. So "in" has a special behavior that other prepositions don't, as a good location indicator, and the model could move its position around in vector space, grouping words that are good location indicators together, and therefore help our classifier work even better. So in principle that's good, and it's a good thing to do, to update word vectors to help you perform better on a supervised task such as this named entity recognition classification. But there's a catch, which is that it doesn't always work, actually. And why doesn't it always work? Well, suppose we're training a classifier; it could be the one I just did, or a softmax, or logistic regression, and we want to classify movie review sentiment as positive or negative. If we have trained our word vectors, we've got some word vector space, and maybe in that space TV, telly, and television are all very close together, because they mean basically the same thing. So that's great, our word vectors are good. But suppose that in the training data for our classifier, that is, our training data for movie review sentiment, we had the words TV and telly, but we didn't have the word television. Then what's going to happen? While we train our sentiment classifier, if we push gradients back down into the word vectors, what's likely to happen is that it will move around the word vectors of the words we saw in the training data. But television necessarily isn't moving, right? Because we're only pushing gradients down to words that are in our training data. So that word goes nowhere; it just stays where it was all along. So the result of our training is that words get moved around: say, over here are good words for indicating negative sentiment. And at test time, when we're running our model, if we evaluate on a sentence with television in it, the model is actually going to give the wrong answer. Whereas if we hadn't changed the word vectors at all, and had just left them where our word embedding learning system put them, then it would have said: television, that's a word that means about the same as TV or telly, I should treat it the same in my sentiment classifier. And it would actually do a better job. So it's two-sided whether you gain by training word vectors. Here's a summary of that, and of what you should do in practice. The first question is: is it a good idea to use pre-trained word vectors, like the word2vec vectors that you used in assignment one, or ones built with the training methods you're doing right now for homework two? And the answer is almost always yes. The reason is that these word vector training methods are extremely easy to run on billions of words of text; we train these models on billions or tens of billions of words. And it's easy to do that for two reasons. Firstly, because the training algorithms are very simple, right?
The word2vec skip-gram training algorithm is a very simple algorithm. Secondly, because we don't need any expensive resources: all we need is a big pile of text documents, and we can run it on them. So it's really easy to run it on, you know, five or fifty billion words. Whereas we can't do that for most of the classifiers that we want to build, because if it's something like a sentiment classifier or a named entity recognizer, we need labeled training data to train our classifier. And then you ask someone, how many words of labeled training data do you have for named entity recognition, and they give you back a number like 300,000 words, or one million words. It's orders of magnitude smaller. Okay. So, therefore, we gain by using pre-trained word vectors, because they know about all the words that aren't in our supervised classifier's training data, and they also know much more about the words that are in the training data but appear only rarely. The exception is: if you have hundreds of millions of words of task data, then you can start off with random word vectors and go from there. A case where this is actually commonly done is machine translation, which we do later in the class. It's relatively easy, for large languages, to get hundreds of millions of words of translated text. If you wanted to build something like a German-English or Chinese-English machine translation system, it's not hard to get 150 million words of translated text. And that's sufficiently much data that people commonly just start with randomly initialized word vectors and start training their translation system. Okay. So then the second question is: okay, I'm using pre-trained word vectors; when I train my supervised classifier, should I push gradients down into the word vectors and update them, which is often referred to as fine tuning the word vectors, or should I just throw away those gradients and not push them down into the word vectors? And the answer is: it depends, and it depends on the size. If you only have a small training data set, typically it's best to treat the pre-trained word vectors as fixed and not do any updating of them at all. If you have a large data set, then you can normally gain by fine tuning the word vectors. And of course the question is what counts as large. Certainly, if you're down in the regime of a hundred thousand words, a couple of hundred thousand words, you're small. If you're starting to be over a million words, then maybe you're large. But in practice, people do it both ways, see which number is higher, and stick with that. Then there's a point here that's just worth underlining, which is: yes, in principle we can backpropagate this gradient to every variable in our model, but it's actually a theorem that we can arbitrarily decide to throw any subset of those gradients away and we are still improving the log-likelihood of our model, right? It can't become inconsistent. You can just pick some subset and say, only train those 37 and throw away all the rest, and the algorithm will still improve the log-likelihood of the model. Perhaps not by as much as if you trained the rest of the variables as well, but it can't actually do any harm not to train something.
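In PyTorch terms, this freeze-or-fine-tune choice is literally one flag; a minimal sketch, with a random matrix standing in for real pre-trained vectors:

    import torch
    import torch.nn as nn

    pretrained = torch.randn(5000, 100)   # stand-in for word2vec/GloVe vectors

    # Small training set: keep the vectors fixed (no gradient flows into them).
    emb_frozen = nn.Embedding.from_pretrained(pretrained, freeze=True)

    # Large training set: let gradients flow into the vectors (fine-tuning).
    emb_tuned = nn.Embedding.from_pretrained(pretrained, freeze=False)

    ids = torch.tensor([1, 42, 7])
    print(emb_tuned(ids).shape)           # torch.Size([3, 100])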
That theorem, by the way, is one of the reasons why people often don't notice bugs in their code: if your code is broken and only half of the variables are being updated, it will still seem to be training something and improving. It's just not doing as well as it could be if you'd coded it correctly. Okay. So at this point, I've almost shown you backpropagation, right? Backpropagation is really just taking derivatives with the generalized chain rule, with one further trick, which we represented with that delta: gee, you want to be clever in doing this, so you minimize computation by reusing shared stuff. But what I want to move on to now is how we can do that much more systematically, which is this idea: we have a computation graph, and we're going to run a backpropagation algorithm through the computation graph. This is kind of like an abstract syntax tree, an expression tree, that you might see in a compilers class or something like that. When we have an arithmetic expression of the kind we're going to compute, we can make this tipped-over-on-its-side tree representation. We've got the x and W variables, and we're going to multiply them; there's the b variable, and we're going to add it to the previous partial result; we're going to stick that through our non-linearity f; and then we're going to multiply by u. That was the computation we were doing in our neural network. So the source nodes are inputs, the interior nodes of this tree are operations, and then we've got these edges that pass along the results of our computation. And this is the computation graph for precisely the example I've been doing since last lecture. Okay, so there are two things we want to be able to do. The first is, we want to be able to start with these variables, do this computation, and calculate what s is. That's the part that's dead simple; it's referred to as forward propagation. Forward propagation is just expression evaluation, as you'd do in any programming language interpreter. That's not hard at all. But the difference here is: hey, we want to do a learning algorithm, so we're going to do the opposite of that as well. What we want to be able to do is also backward propagation, or back-propagation, or just backprop as it's commonly called, which is: we want to be able to go from the final part, the final part here, and at each step calculate these partial derivatives and pass them back through the graph. And this was the notion before of an error signal. So, starting from up here, we've calculated a partial of s by z, which is this with respect to that. That's our calculated error signal up to here, and then we want to pass it further back, to start computing gradients further back. Right? And we start off, right here, with the partial of s by s. What's the partial of s by s going to be? One. Okay, yes. The rate at which s changes is the rate at which s changes. So we just start off with one, and then we want to work out how this gradient changes as we go along. So what we're doing here is: when we're working things out for one node, the node is going to have passed into it, towards it, an upstream gradient, which is its error signal.
So that's the partial of our final result, which was our loss, with respect to the variable that was the output of this computation node: the partial of s by h, here. And then we did some operation here; here it's the non-linearity, but it might be something else. What we then want to work out is a downstream gradient, which is the partial of s by z, where z was the input to this function. And the question is, how do we do that? The answer is: we use the chain rule, of course. So we have a concept of a local gradient. Here h is the output and z is the input; this function here is our non-linearity, whatever we're using as our non-linearity, like a logistic or tanh. We calculate h in terms of z, and we can work out the partial of h by z. That's our local gradient. And if we have both the upstream gradient and the local gradient, we can then work out the downstream gradient, because we know the partial of s by z is going to be ∂s/∂h times ∂h/∂z. And then we can pass the downstream gradient on to the next node. Okay. So our basic rule, which is just the chain rule written in different terms, is: downstream gradient equals upstream gradient times local gradient. Easy as that. Okay. So this was the very simplest case, where we have a node with one input and one output, a function like our logistic function. But we also want things to work out for general computation graphs. How are we going to do that? Well, the next case is: what about if we have multiple inputs? Say we're calculating something like z = Wx, where z and x are themselves vectors and W is a matrix, and we're treating both x and W as inputs and z as our output; we kind of group vectors and matrices together here. If you have multiple inputs, you end up with multiple local gradients: you can work out the partial of z with respect to x, and the partial of z with respect to W. So essentially you take the upstream gradient, multiply it by each of the local gradients, and pass the results down the respective paths; we calculate these different downstream gradients to pass along. Is that making sense? Yeah. Okay. So let's look at an example of this, and then we'll see one other case. Here's a little baby example. It isn't really looking like a neural net, but we've got three inputs, x, y, and z. x and y get added together, and y and z get maxed. And then we take the results of those two operations and multiply them together. So overall, what we're calculating is (x + y) times the max of y and z. But we have here a general technique, and we can apply it in any case. Okay, so if we want to run this graph forward, we need to know the values of x, y, and z. For my example, x equals one, y equals two, z equals zero. So we take the values of those variables and push them onto the arcs for the forward computation. The first thing we do is add, and the result of that is three, so we can put that onto the arrow that's the output of add. The output of max is two, and the output of times is six. So in the forward pass we have evaluated the expression: its value is six. That wasn't hard. Okay. So then the next step is, we want to run back-propagation to work out the gradients.
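Here's that little graph as straight-line code, my own sketch; the backward lines implement the local-gradient rules we're about to walk through, so you can check every number against the slides:

    # f = (x + y) * max(y, z), with x = 1, y = 2, z = 0.
    x, y, z = 1.0, 2.0, 0.0

    # Forward pass: evaluate nodes in topological order.
    a = x + y        # add node -> 3
    b = max(y, z)    # max node -> 2
    f = a * b        # multiply node -> 6

    # Backward pass: downstream = upstream * local, starting from df/df = 1.
    df = 1.0
    da = df * b                                   # local gradient of a*b w.r.t. a is b
    db = df * a                                   # and w.r.t. b is a
    dx = da * 1.0                                 # d(x+y)/dx = 1
    dy = da * 1.0 + db * (1.0 if y > z else 0.0)  # two branches: sum them
    dz = db * (1.0 if z > y else 0.0)             # max routes the gradient
    print(f, dx, dy, dz)                          # 6.0 2.0 5.0 0.0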
And so we want to work out the local gradients at each node. So a is the result of the sum, a = x + y. If you take ∂a/∂x, that's just one, and ∂a/∂y is also one; that makes sense. The max is slightly trickier, because the slope of the max depends on which input is bigger: if y is bigger than z, the partial of b by y is one and the partial of b by z is zero, and conversely otherwise. So that one's a little bit value-dependent. And then we do the multiplication case at the end and work out its partials with respect to a and b. Since a and b have the values three and two, the partial of f by a equals b, which is two, and vice versa: the partial of f by b equals a, which is three. Okay. So we can work out the local gradients at each node, and then we want to use those to calculate our gradients backwards along the back-propagation paths. We start at the top: the partial of f with respect to f is one, because if you move f by a tenth, you've moved f by a tenth; it cancels out to one. Okay. So then we pass backwards. The first thing we hit is this multiply node, and we know its local gradients: the partial of f by a is two, and the partial of f by b is three. So formally, we're taking the local gradients, multiplying them by the upstream gradient, and getting our two and three. And notice what effectively happens: the values on the two arcs swap. Then we continue back. Okay, there's the max node. Our upstream gradient is now three, and we want to multiply by the local gradients. Since the max of the two inputs is y, which has a slope of one, we get three on that side; there's no gradient on the z side, so we get zero. Then we do the similar calculation on the other side, at the plus node, where we have local gradients of one, so both of its downstream gradients come out as two. And then there's one other thing to notice: wait a minute, there are two arcs that started from y, both of which we've back-propagated some gradient onto. What do we do about that? What we do is we sum. So the partial of f by x is two, the partial of f by z is zero, and the partial of f by y is the sum of the two paths, three plus two, which is five, right? And this isn't complete voodoo; it's something that should make sense in terms of what gradients are. What we're calculating is: if you wiggle x a little bit, how big an effect does that have on the outcome of the whole thing? And we should be able to work this out. So our x started off as one, but suppose we wiggle it up a bit and make it 1.1. According to this, the output should change by about 0.2; the wiggle should be magnified by two. We can check: it's then 1.1 plus two, so that's 3.1, and then we've got the two here that multiplies by it, and it's 6.2. And lo and behold, it went up by 0.2, right? So that seems correct. And if we try to do the same for, well, let's do z, it's easy. If we wiggle z, which had the value zero, by 0.1, making it 0.1, then when we take the max it's still two, and so the calculated value doesn't change: it's still six. So the gradient here is zero; wiggling this does nothing. And then the final one is y, whose starting value is two.
So if we wiggle it a little and make it 2.1, our claim is that the result should change by about 0.5; the wiggle should be multiplied by five. If we make this 2.1, we then have 2.1 plus one, which is 3.1, and the max here would also be 2.1. So we'd have 2.1 times 3.1, and that's too hard for me to do in my head, but if you take 2.1 times 3.1, it comes out to 6.51. So basically it's gone up by half. We don't expect the answers to be exact, of course, because that's not the way calculus works, but it's showing that we're getting the gradients right. Okay. So this actually works. So what are the techniques that we need to know? We've actually already seen them all. We discussed, when there are multiple incoming arcs, how you work out the different local derivatives. The main other case we need to know about is when, in the function computation, there's a branch outward: the result of something is used in multiple places. And that was like the case here. Here it was an initial variable, but it could equally have been computed by something further back. So if a value is used in multiple places, with the computation going out in different ways, there's just this simple rule: when you do backpropagation backwards, you sum the gradients that you get from the different output branches. Okay. So if a = x + y, well, that's the one we showed before, where we were doing this sum operation to work out the total partial of f by y. Okay. And if you think about it just a little bit more, there are these obvious patterns, which we saw in this very simple example. If you've got a plus, the upstream gradient is just going to head down every one of these branches. In this case, it's just copied unchanged, because our computation was x + y; it could be more complicated, but we're passing it down each of those branches. So plus distributes the upstream gradient. When you have a max, that's kind of like a routing operation, because max is going to send the gradient in the direction of whichever input was the max, and the other inputs get no gradient passed down to them. And when you have a multiplication, that has this kind of fun effect that what you do is switch the gradients. This reflects the fact that when you have u times v, regardless of whether u and v are vectors or just scalars, the derivative of the result with respect to u is v, and the derivative of the result with respect to v is u. So the gradient signal is the flip of the two values on the different sides. Okay. So that's most of how we have these computation graphs and can work out backpropagation backwards through them. There's one more part of this to do, which is to say: gee, we want to do this efficiently. There's a bad way to do this, which is to say, "Oh well, we wanted to calculate the partial of this by b, and so we can calculate that partial", which was essentially what I was doing in last time's slides. We say the partial of s by b equals the partial of s by h, times the partial of h by z, times the partial of z by b, and we have all of those partials; we work them all out and multiply them together. And then someone says, what's the partial of s by W?
And we say, huh, that's the chain rule again, I'll do it all again: it's the partial of s by h, times the partial of h by z, times the partial of, and, z by x, no, no, right, ah, lost it. But you write out the big long list of them and you calculate it all again. That's not what we want to do. Instead we want to say: oh look, there's this shared stuff, this error signal coming from above. We can work out the upstream gradient for this node; we can use it to calculate the upstream gradient for this node; we can use that to calculate the upstream gradient for this node; and then, using the local gradients, of which there are two calculated at this node, we can calculate this one and that one. And then from here, knowing this upstream gradient, we can use the local gradients at this node to compute this one and that one. So we're doing this efficient, computer-science-like computation, where we don't do any repeated work. Makes sense? Yeah. Okay. And that is the whole of backprop. So here's a slightly sketchy graph which is just recapitulating this. If you have any computation that you want to perform, the hope is that you can sort your nodes into what's called a topological sort, which means that variables that are arguments are sorted before variables that are results depending on those arguments. Provided you have an acyclic graph, you'll be able to do that. If you have a cyclic graph, you're in trouble; well, there are actually techniques people use to unroll those graphs, but I'm not going to go into that now. So we've sorted the nodes, which is loosely represented here from bottom to top in topologically sorted order. Okay. So then, for the forward prop, we go through the nodes in topological sort order, and if a node is a variable, we just set its value to that variable's value. If it's computed from other variables, their values must have been set already, because they're earlier in the topological sort, and we compute the value of the node from its predecessors and pass it up, working out the final output, the loss function of our neural network. And that is our forward pass. Okay. So then, after that, we do our backward pass. For the backward pass, we initialize the output gradient with one; the top thing is always one, the partial of z with respect to z. And then we go through the nodes in reverse topological sort order. So for each of them, anything above it, anything that we calculated based on it in the forward pass, will already have had its gradient calculated as a product of upstream gradient times local gradient, and we can use that to compute the next thing down. So basically the overall rule is: for any node, you work out its set of successors, the things above it that depend on it, and then the partial of z with respect to x is simply the sum, over the set of successors, of the local gradient you calculated at the node times the upstream gradient of that node. And in the examples I gave before, there were never multiple upstream gradients.
But if you imagine a general big graph, there could actually be several different upstream gradients coming in from the various successors. So we apply that backwards, and then we've worked out, in backpropagation, the gradient of the final result z with respect to every node in our graph. And the thing to notice about this is: if you're doing it right and efficiently, the big-O order of complexity of doing backpropagation is exactly the same as doing forward propagation, i.e., expression evaluation. So it's not some super expensive, complex procedure that you can't imagine scaling up; you're in exactly the same complexity order. Okay. So, as I've presented it here, this is a procedure you could just think of as running on an arbitrary graph, calculating the forward pass and the backward pass. But almost without exception, the kinds of neural nets that we actually use have a regular, layer-like structure, and that's precisely why it makes sense to work out these gradients in terms of vectors, matrices, and Jacobians, of the kind we did before. Okay. So since we have this really nice algorithm now, this means we can do it all computationally, and we don't have to think or know how to do math: we can just have our computers do all of this for us. Using this graph structure, we can automatically work out how to apply backprop. And there are two versions of this, right? If what's calculated at each node is given as a symbolic expression, we could actually have our computer work out for us what the derivative of that symbolic expression is. It could actually calculate the gradient of that node, and that's often referred to as automatic differentiation. This is kind of like Mathematica or Wolfram Alpha: you know how you can do your math homework on it, you just type in your expression, say "what's the derivative", and it gives it back to you? It's doing symbolic computation and working out the derivative for you. So that method could be used to work out the local gradients, and then we can use the graph structure and our rule, upstream gradient times local gradient gives downstream gradient, i.e., the chain rule, to propagate through the graph and do the whole backward pass completely automatically. And that sounds great. Slight disappointment: current deep learning frameworks don't quite give you that. There was actually a famous framework that attempted to give you that, the Theano framework developed at the University of Montreal, though it's now been abandoned in the modern era of large-technology-corporation deep learning frameworks. Theano did precisely that: it did the full thing of automatic differentiation. For reasons that we could think of as either good or bad, current deep learning frameworks like TensorFlow or PyTorch actually do a little bit less than that. What they do is say: well, for the computations at an individual node, you have to do the calculus yourself.
For an individual node, you have to write the forward propagation, say, return x plus y, and you have to write the backward propagation, saying the local gradients are one and one for the two inputs x and y. But provided you, or someone else, has written the forward and backward local steps at each node, then TensorFlow or PyTorch does all the rest for you and runs the backpropagation algorithm. And effectively, that saves you having to have a big symbolic computation engine, because the person coding the node computations is writing a bit of code as you might normally imagine doing it, whether in, you know, C or Pascal: return x plus y, and, for the local gradient, return one. You don't actually have to have a whole symbolic computation engine. Okay. So that means the overall picture looks like this. Schematically, we have a computation graph, and to calculate the forward computation, we put inputs into our computation graph, the x and y variables, and then we run through the nodes in topologically sorted order; for each node we calculate its forward value, and necessarily the things it depends on have already been computed, so we just do expression evaluation forward. And then we return the final gate in the graph, which is our loss function, or objective function. But then we also have the backward pass, and for the backward pass, we go through the nodes in reverse topologically sorted order, and for each of those nodes we return their backward value, where for the top node the backward value is one. And that will then give us our gradients. So that means that for any node, any piece of computation we perform, we need to write a little bit of code that says what it's doing on the forward pass and what it's doing on the backward pass. On the forward pass, for our multiplication, we're just saying return x times y. That's pretty easy; that's what you're used to doing. But we also need to do the backward pass: return the local gradients so we can get the partials of the loss L with respect to x and with respect to y. And to do that, we have to do a little bit more work, first of all in the forward pass. In the forward pass, we have to remember to stuff away in some variables the values that were given to us, or else we won't be able to calculate the backward pass. So we store away the values of x and y. Then, when we're doing the backward pass, we're passed the upstream gradient, the error signal, and we just calculate upstream gradient times local gradient for each input and return those downstream gradients backwards. And provided we do that for all the nodes of our graph, we then have something that the system can learn for us as a deep learning system. So what that means in practice is that any of these deep learning frameworks comes with a whole box of tools that says: here is a fully connected feed-forward layer, here is a sigmoid unit, here are other, more complicated things we'll do later, like convolutions and recurrent layers. And to the extent that you're using one of those, somebody else has done this work for you. Right?
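Here's a minimal sketch of my own of what such node classes can look like, plus a tiny hand-run driver; real frameworks are far more elaborate, but the forward/backward shape is the same:

    class AddNode:
        def forward(self, x, y):
            return x + y

        def backward(self, upstream):
            # Local gradients are 1 and 1: the upstream just distributes.
            return upstream, upstream

    class MultiplyNode:
        def forward(self, x, y):
            self.x, self.y = x, y   # stash the inputs: backward needs them
            return x * y

        def backward(self, upstream):
            # downstream = upstream * local; for a product, the values swap.
            return upstream * self.y, upstream * self.x

    # Tiny driver for f = (x + y) * x with x = 2, y = 3; note x is used twice.
    add, mul = AddNode(), MultiplyNode()
    a = add.forward(2.0, 3.0)
    f = mul.forward(a, 2.0)
    da, dx2 = mul.backward(1.0)   # start the backward pass with 1
    dx1, dy = add.backward(da)
    dx = dx1 + dx2                # the variable that branched sums its gradients
    print(f, dx, dy)              # 10.0 7.0 2.0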
They've defined nodes, or a layer of nodes, that have forward and backward already written for them. And to the extent that that's true, making neural nets is heaps of fun. It's just like Lego: you stick these layers together and say, go, here's some data, learn on it and train. It's so easy that my high school student is building these things, right? You don't have to understand much, really. But to the extent that you actually want to do some original research and think, "I've got this really cool idea of how to do things differently, I'm going to define my own kind of computation", well, then you have to do this and define your class, and as well as saying how to compute the forward value, you'll have to pull out your copy of Wolfram Alpha, work out what the derivatives are, and put that into the backward pass. Yeah. Okay. So here's just one more note on that. In the early days of deep learning, say prior to 2014, what we always used to state to everybody, very sternly, was: you should check all your gradients by doing numeric gradient checks; it's really, really important. And what that meant was: if you want to know whether you have coded your backward pass right, an easy way to check is to do this numeric gradient, where you estimate the slope by wiggling the input a bit and seeing what effect it has. So I work out the value of the function f(x + h), for h very small, like 10 to the minus 4, and then f(x − h), and divide the difference by 2h. And I'm saying: what is the slope at this point? I'm getting a numeric estimate of the gradient with respect to my variable x here. This is like what you will have seen in high school when you did your first estimates of gradients, where you worked out f(x + h) minus f(x), divided by h, doing rise over run to get a point estimate of the gradient. Exactly the same thing, except that rather than doing it one-sided like that, we're doing it two-sided. It turns out that if you actually want to do this, two-sided is asymptotically hugely better, so you're always better off doing two-sided gradient checks rather than one-sided gradient checks. And since it's hard to implement this wrong, it's a good way to check that your gradients are correct if you've defined them yourself. As a technique to use for computing gradients in general, though, it's completely, completely hopeless, because think about doing this over our deep learning model, for a fully connected layer. What it means is that, if you've got a W matrix of n by m and you want to check that your partial derivatives are correct, you have to do this for every element of the matrix. So you have to calculate the eventual loss first jiggling W11, then jiggling W12, then W13, W14, et cetera. In a complex network, you'll end up literally doing millions of function evaluations to check the gradients at one point in time. So it's not like what I advertised for backprop, when I said it's just as efficient as calculating the forward value.
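And here's the two-sided check in code, a sketch of mine against a gradient we know analytically; note the two function evaluations per parameter, which is exactly the cost being complained about:

    import numpy as np

    def numeric_grad(f, x, h=1e-4):
        # Two-sided estimate (f(x+h) - f(x-h)) / 2h, one coordinate at a time.
        grad = np.zeros_like(x)
        for idx in np.ndindex(x.shape):
            old = x[idx]
            x[idx] = old + h
            fp = f(x)
            x[idx] = old - h
            fm = f(x)
            x[idx] = old              # restore before moving on
            grad[idx] = (fp - fm) / (2 * h)
        return grad

    # f(x) = sum(x^2) has analytic gradient 2x; the difference should be tiny.
    x = np.random.randn(3, 4)
    print(np.max(np.abs(numeric_grad(lambda v: np.sum(v ** 2), x) - 2 * x)))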
Doing this costs forward-value computation time multiplied by the number of parameters in your model, which is often huge for deep learning networks. So this is something you only want to have inside if statements that you can turn off: you run it just to check that your code isn't buggy. In honesty, this is just much less needed now, because by and large you can plug together your components and layers in PyTorch, other people wrote the code right, and it will work. So you probably don't need to do this all the time, but it's still a useful thing to know about if things are going wrong. Yeah. Okay, so we've now mastered the core technology of neural nets; we've now seen basically everything we need to know about neural nets, and I've just summarized it there. Just to emphasize once more: I think some people wonder, why do we even need to learn all this stuff about gradients? And there's a sense in which you don't, really, because these modern deep learning frameworks will compute all of the gradients for you. We make you suffer on homework two, but in homework three, you can have your gradients computed for you. But, you know, it's sort of like asking, why should you take a class on compilers? There's actually something useful in understanding what goes on under the hood, even though most of the time we're perfectly happy to let the C compiler do its thing without being experts on x86 assembler every day of the week. And there's more to it than that, because even though backpropagation is great, once you're building complex models, backpropagation doesn't always work as you'd expect it to. "Perfectly" is maybe the wrong word, because mathematically it's perfect, but it might not be achieving what you want it to. And if you want to debug and improve models, it's crucial to understand what's going on. There's a nice Medium piece by Andrej Karpathy, "Yes you should understand backprop", that's on the syllabus page and talks about this. And indeed, the week after next, Abby is going to lecture about recurrent neural networks, and those are a good example of one of the places where backpropagation can easily fail on you. Okay. So anyone have any questions about backpropagation and computation graphs? Okay. If not, the remainder of the time is the grab bag of things that you really should know about if you're going to be doing deep learning. This is itsy-bitsy stuff, but let me say it. So up until now, when we've had loss functions, we've been maximizing the likelihood of our data and stuff like that; we've just had this part here, which is the likelihood of our data, and we've worked to maximize it. However, in practice that usually works badly, and we need to do something else, which is regularize our models. If you've done the machine learning class, or something like that, you will have seen regularization. There are various techniques for regularization, but compared to anything else, regularization is even more important for deep learning models.
The general idea is: if you have a lot of parameters in your model, those parameters can essentially just memorize what's in the data that you trained on, and so they become very good at predicting the answers. The model becomes very good at predicting the answers for the data you trained it on, but may become poor at working in the real world, on different examples. And somehow we want to stop that. This problem is especially bad for deep learning models, because deep learning models typically have vast, vast numbers of parameters. In the good old days, when statisticians ruled the show, they told people it was completely ridiculous to have a number of parameters that approached your number of training examples; you should never have more parameters in your model than one-tenth the number of your training examples, so that you had lots of examples with which to estimate every parameter. Those were the kinds of rules of thumb you were told. That's just not true with deep learning models: it's really common that we train deep learning models that have ten times as many parameters as we have training examples. But miraculously it works; in fact it works brilliantly, these highly over-parameterized models, and this is one of the big secret sauces of why deep learning has been so brilliant. But it only works if we regularize the model. If you train a model without sufficient regularization, what you find is that as you train it and work out your loss on the training data, the model keeps on getting better, and better, and better. Necessarily, the algorithm has to improve loss on the training data; the worst that could happen is that the graph becomes absolutely flat. What you'll find with most models we train is that they have so many parameters that the training loss will just keep going down, approaching the numerical precision of zero if you leave it training long enough: it just learns the correct answer for every example, because it can effectively memorize the examples. Okay, but if you then say, let me test this model on some different data, what you find is this red curve: up until a certain point, you're also building a model that's better at predicting on different data, but after some point the curve starts to turn up again. And ignore the bit where it seems to curve down again; that was a mistake in the drawing. So this is what's referred to as over-fitting: from here on, the model is just learning to memorize whatever was in the training data, but not in a way that later generalizes to other examples. And that's not what we want. We want to avoid over-fitting as much as possible, and there are various regularization techniques we use for that. A simple starting one is this one here, where we penalize the log-likelihood by saying: you're going to be penalized to the extent that you move parameters away from zero. So the default state of nature is that all parameters are zero, so they're ignored in computations; you can have parameters with big values, but you'll be penalized a bit for it. This is referred to as L2 regularization. And that's a starting point of something sensible to do for regularization, but there's more to say later.
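In code, L2 regularization is just one extra term in the loss; here's a minimal PyTorch-flavored sketch of mine (the lambda value is purely illustrative, and in practice you often get this behavior via the weight_decay argument of the PyTorch optimizers instead):

    import torch
    import torch.nn.functional as F

    lam = 1e-4   # regularization strength; this particular value is made up

    def regularized_loss(logits, targets, params):
        data_loss = F.cross_entropy(logits, targets)   # the likelihood part
        l2 = sum((p ** 2).sum() for p in params)       # pay for moving weights away from zero
        return data_loss + lam * l2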
We'll talk in the lecture before we discuss final projects about other clever regularization techniques for neural networks. Okay. Grab bag number two: vectorization is the term I have here, but it's not only vectors — it's also matrices, and higher-dimensional matrices, what are called tensors in this field. Getting deep learning systems to run fast and efficiently is only possible if we vectorize things. And what does that mean? The straightforward way to write a lot of code, the way you saw in your first CS class, is with a for loop: for i in range(...), compute something for each index, one at a time. But when we want to be fast, rather than working out W times one word vector at a time in a for loop, we can instead put all of our word vectors into one matrix and then do a single matrix-matrix multiply of W by our word vector matrix. Even if you run your code on your laptop on a CPU, you will find that the vectorized way is hugely faster — in this example, it was over an order of magnitude faster vectorized than with a for loop. And those gains are only compounded when we run code on a GPU: you'll get no speed gains at all on a GPU unless your code is vectorized, but if it is, then you can hope for results like "this runs 40 times faster than it did on the CPU." So always try to use vectors and matrices, not for loops. And of course, it's useful when developing stuff to time your code and find out what's slow. Okay, point three. We discussed this idea last time and the time before: after the affine layer, where we go from x to Wx plus b — multiplying a vector by a matrix and adding biases — we necessarily have to have some form of non-linearity to have power in a deep network. So I just wanted to go through a bit of background on non-linearities: what people use and what you should use. If you're starting from what we know from logistic regression, what's commonly referred to as the sigmoid curve — or, maybe more precisely, the logistic function — is this picture here: something that squashes any real number, positive or negative, into the range zero to one. It gives you a probability output. This logistic function was really, really common in early neural nets; if you go back to '80s and '90s neural nets, there were sigmoid functions absolutely everywhere. In more recent times, 90 percent of the time nobody uses this, and they've been found to actually work quite poorly. The only place these are used is when you actually want a value between zero and one as your output. We'll talk later about gating in networks — gating is a place where you want a probability to interpolate between two things, and there you'll use one of these, but essentially nowhere else. Here is the tanh curve. The formula for tanh looks like a scary thing with lots of exponentials in it, and it doesn't really look much like the logistic curve whatsoever.
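Written out, the "scary" formula — with a quick numerical check of the relationship explained next — is just:

```python
import numpy as np

def logistic(x):
    return 1 / (1 + np.exp(-x))

def tanh(x):
    # (e^x - e^-x) / (e^x + e^-x)
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

xs = np.linspace(-3, 3, 13)
# tanh is a rescaled logistic: tanh(x) = 2 * logistic(2x) - 1
print(np.allclose(tanh(xs), 2 * logistic(2 * xs) - 1))   # True
```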
But if you dig up your math textbook, you can convince yourself that the tanh curve is actually exactly the same as the logistic curve, apart from being rescaled so it has a range of two rather than one, and shifted down. So this is just a rescaled logistic. It's now symmetric between minus one and one, and the fact that it's symmetric in the output actually helps a lot when feeding values into neural networks. So tanh's are still reasonably widely used in quite a number of places in neural networks; tanh should be a friend of yours, and you should know about it. But one of the bad things about using transcendental functions like the sigmoid or tanh is that they involve expensive math operations that slow you down. It's a nuisance to be computing exponentials and tanh's in your computer; things get slow. So people started playing around with ways to make things faster, and someone came up with the idea of a hard tanh, where it's just flat out here, then has a linear slope, and then is flat at the top. It sort of looks like a tanh, but we just squared it off. And this is really cheap to compute: if x is less than minus one, return minus one; if it's greater than one, return plus one; otherwise just return the number. No expensive transcendentals. The funny thing is, it turns out this actually works pretty well. You might be scared — and you might justifiably be scared — because if you start thinking about gradients, once you're over here, there's no gradient, right? It's completely flat, gradient zero, so things go dead as soon as they're at one of the ends. So it's important to stay in the middle section, at least for a while, where it's just got a constant slope of one. But this is enough of a non-linearity that it actually works well, and you can train neural networks with it. So that sent the whole field in the opposite direction, and people thought: if that works, maybe we can make things even simpler. And that led to the now-famous ReLU. (There is a mistake in my editing there — delete "hard tanh"; that was in the slides by mistake. [LAUGHTER]) Everyone calls it ReLU, which stands for rectified linear unit. The ReLU is essentially the simplest non-linearity you can have: it's zero — slope zero — as soon as you're in the negative regime, and it's just a line of slope one when you're in the positive regime. When I first saw this, it sort of blew my mind that it could possibly work, because I guess I was brought up on these tanh's and sigmoids, and on those arguments about the slope — you get these gradients, and you can move around with the gradient. How is it meant to work if half of this function just says "output zero, no gradient," and the other half is just a straight line? In particular, when you're in the positive regime, this is just an identity function — and I argued before that if you just compose linear transforms, you don't get any power; and provided you're in the right-hand part of the regime, since this is an identity function, that's exactly what we're doing: just composing linear transforms. So you sort of believe it just can't possibly work, but it turns out that this works brilliantly.
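Both of the squared-off non-linearities just described are one-liners in NumPy:

```python
import numpy as np

def hard_tanh(x):
    # flat at -1, a linear slope-one section, flat at +1; no exponentials
    return np.clip(x, -1.0, 1.0)

def relu(x):
    # zero in the negative regime, the identity in the positive regime
    return np.maximum(0.0, x)

x = np.array([-2.5, -0.3, 0.0, 0.8, 3.0])
print(hard_tanh(x))   # [-1.  -0.3  0.   0.8  1. ]
print(relu(x))        # [0.  0.  0.  0.8 3. ]
```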
And this is now by far the default choice when people are building feed-forward deep networks: people use ReLU non-linearities, they're very fast, they train very quickly, and they perform very well. Effectively, depending on its input, each unit is either dead or passing things on as an identity function, but that's enough non-linearity that you can still do arbitrary function approximation with a deep learning network. And people now make precisely the opposite argument: because this unit has a slope of one over its non-zero range, the gradient is passed back very efficiently to the inputs, and therefore the models train very efficiently — whereas with these kinds of curves, when you're out here, there's very little slope, so your models might train very slowly. Okay. So for a feed-forward network, try this before you try anything else. But there's been a sub-literature that says maybe that's too simple and we could do a bit better. That led to the leaky ReLU, which says, "Maybe we should put a tiny bit of slope over here, so it's not completely dead" — you can make it something like one one-hundredth of the slope of the positive part. And then people thought: let's build off that — maybe we could actually put another parameter into our neural network and have a parametric ReLU. So there's some slope over here, but we're also going to backpropagate into our non-linearity, which has this extra alpha parameter for how much slope it has. Variously, people have used these: you can find ten papers on arXiv where people say you can get better results from using one or other of these, and you can also find papers where people say it made no difference for them versus just using a ReLU. So I think, basically, you can start off with a ReLU and work from there. Yes — so, parameter initialization. When we have these matrices of parameters in our model, it is vital, vital, vital that you initialize those parameter weights with small random values. This was precisely the lesson that some people hadn't discovered when it came to final project time, so I'll emphasize it: vital, vital. If you just start off with the weights being zero, you have these complete symmetries — everything will be calculated the same, everything will move the same — and you're not actually training a complex network with a lot of units that specialize to learn different things. So somehow you have to break the symmetry, and we do that by giving small random weights. There are some fine points: when you have biases, you may as well just start them at zero, as neutral, and let the system learn whatever bias it wants, et cetera. But in general, you want to initialize the weights to small random values. In PyTorch and other deep learning packages, a common initialization that's used and often recommended is Xavier initialization. The trick of this is that for a lot of models, in a lot of places — think of curves like these ones — you'd like the values in the network to stay small, in this middle range here. And if you have a matrix with big values in it and you multiply a vector by that matrix, things might get bigger.
And then if you put it through another layer, it'll get bigger again, and then everything will be too big and you will have problems. So Xavier initialization seeks to avoid that by asking: how many inputs are there to this node? How many outputs? We want to tamp down the initialization based on the number of inputs and outputs, because effectively we'll be using each weight that many times. It's a good thing to use. Optimizers: up till now, we've just talked about plain SGD. Normally, plain SGD actually works just fine, but often, if you want to use plain SGD, you have to spend time tuning the learning rate — that alpha we multiplied the gradient by. For complex nets and situations, or to avoid worry, there's now a big family of more sophisticated adaptive optimizers. Effectively, they scale the parameter adjustment by accumulated gradients, which has the effect that they learn per-parameter learning rates: they can see which parameters it would be useful to move more and which less, depending on the sensitivity of those parameters. Where things are flat, you can try to move quickly; where things are bouncing around a lot, you move just a little so as not to overshoot. There's a whole family of these — Adagrad, RMSprop, Adam, and actually others: there's AdaMax and a whole lot more. Adam is one fairly reliable one that many people use, and that's not a bad choice. And then one more slide and I'm done. Yes — learning rates. Normally you have to choose a learning rate. One choice is just a constant learning rate: you pick a number, maybe 10 to the minus 3, and say that's my learning rate. You want your learning rate to be the right order of magnitude. If your learning rate is too big, your model might diverge or not converge, because it just leaps around in huge jumps and completely misses the good parts of your function space. If your learning rate is too small, your model may not train by the assignment deadline, and then you'll be unhappy. So commonly people try powers of 10 — say 0.01, 0.001, 0.0001 — look at how the loss is declining, and see what seems to work. In general, you want to use the fastest learning rate that isn't making things become unstable. Commonly, you can get better results by decreasing the learning rate as you train. Sometimes people just do that by hand: we use the term "epoch" for a full pass through your training data, and people might say, "halve the learning rate every three epochs," and that can work pretty well. You can use formulas to get per-epoch learning rates. There are even fancier methods — you can look up cyclic learning rates online if you want, which actually make the learning rate sometimes bigger and then sometimes smaller, and people have found that can be useful for getting you out of bad regions in interesting ways. The one other thing to know is that if you're using one of the fancier optimizers, they still ask you for a learning rate, but that learning rate is the initial learning rate, which typically the optimizer will shrink as you train.
So commonly, if you're using something like Adam, you might start off by saying the learning rate is 0.1 — a bigger number — and it will shrink it as training goes along. Okay, all done. See you next week.
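(A compact sketch pulling together a few of this lecture's grab-bag items — vectorization, Xavier initialization, and an adaptive optimizer with a decaying learning rate. The sizes and hyperparameters here are made up, not recommendations.)

```python
import time
import numpy as np
import torch
import torch.nn as nn

# 1. Vectorization: one matrix-matrix multiply beats a Python for loop.
W = np.random.rand(5, 300)
vecs = np.random.rand(300, 10000)        # 10,000 word vectors as columns
t0 = time.time()
slow = [W.dot(vecs[:, i]) for i in range(vecs.shape[1])]   # loop version
t1 = time.time()
fast = W.dot(vecs)                                         # vectorized version
print(f"loop: {t1 - t0:.4f}s, matmul: {time.time() - t1:.4f}s")

# 2. Initialization: Xavier for the weights, zeros for the biases.
layer = nn.Linear(300, 100)
nn.init.xavier_uniform_(layer.weight)    # scaled using fan-in and fan-out
nn.init.zeros_(layer.bias)

# 3. An adaptive optimizer, plus "halve the learning rate every 3 epochs".
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)   # lr is the *initial* rate
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=3, gamma=0.5)
# ...then call opt.step() per batch and sched.step() once per epoch while training.
```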
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_15_Natural_Language_Generation.txt
So today we're gonna be learning about Natural Language Generation. And this is probably going to be a little different to my previous lectures, because it's going to be much more of a survey of lots of cutting-edge research topics that are happening in NLG right now. Before we get to that, we've got a few announcements. I guess the main announcement is just: thank you all so much for your hard work. I know the last week or two have been pretty tough — assignment five was really quite difficult, I think, and it was a challenge to do it in eight days. So we just really appreciate all the hard work you've put in. We also understand the project proposal was sometimes a bit difficult to understand the expectations for. These are both new components of the class this year that were not present last year, so we have to go through some learning curves as well, as the teaching staff. We really want to say thank you so much for putting everything into this class, and please do continue to give us your feedback, both right now and in the end-of-quarter feedback survey. Okay, so here's the overview for what we're going to be doing today. Today we're going to learn about what's happening in the world of neural approaches to Natural Language Generation. That is a super, super broad title: NLG encompasses a huge variety of research areas, and pretty much each of those could have had its own lecture — we could have taught a whole quarter's worth of classes on NLG. But we're going to try to cover a selection of things today, mostly guided by the things I've seen that I think are cool or interesting or exciting. So it's by no means going to be comprehensive, but I hope you're going to enjoy some of the stuff we're going to learn about. In particular, we're going to start off with a recap of what we already know about Natural Language Generation, to make sure we're on the same page, and we're also going to learn a little extra about decoding algorithms. We learned a bit before about greedy decoding and beam search decoding, but today we're going to learn some extra information about those and about some other types of decoding algorithms. After that, we're going to go on a pretty quick tour of lots of different NLG tasks and a selection of neural approaches to them. Then we're going to talk about probably the biggest problem in NLG research, which is NLG evaluation and why it is such a tricky situation. And lastly, we're going to have some concluding thoughts on NLG research: what are the current trends, and where are we going in the future? Okay. So, section one, let's do a recap. Natural Language Generation, to define it, just refers to any setting in which we are generating some kind of text. So NLG is an important sub-component of lots of different tasks, such as machine translation, which we've already met; abstractive summarization, which we'll learn more about later; dialogue, both chit-chat and task-based; also creative writing tasks, such as writing stories and even poems. NLG is also a sub-component of free-form question answering.
I know a lot of you are doing the SQuAD project right now — that is not an NLG task, because you're just extracting the answer from the source document — but there are other question answering tasks that do have a Natural Language Generation component. Image captioning is another example of a task with an NLG sub-component. So NLG is a pretty cool component of a lot of different NLP tasks. All right, let's go into our recap. The first thing I want to recap is: what is language modeling? I've noticed that some people are a little bit confused about this — I think it might be because the name "language modeling" sounds like it might mean simply encoding language, like representing language using embeddings or something. As a reminder, language modeling has a more precise meaning: language modeling is the task of predicting the next word given the words so far. Any system which produces this conditional probability distribution — that does this task — is called a language model. And if that language model system is an RNN, then we often abbreviate it as RNN language model. Okay, I hope you'll remember that. The next thing to recap: do you remember what a conditional language model is? The task of conditional language modeling is predicting what word is going to come next, but also conditioning on some other input x as well as all of your words so far. To recap, some examples of conditional language modeling include machine translation, where you're conditioning on the source sentence x; summarization, where you're conditioning on the input text you're trying to summarize; dialogue, where you're conditioning on your dialogue history; and so on. Okay, next we're going to quickly recap how you train an RNN language model — I guess it could also be a Transformer-based or CNN-based language model, now that you know about those, and it could be conditional or not. The main thing I want to remind you of is that when you are training the system, you feed in the target sequence that you're trying to generate. Where it says "target sentence from corpus," that's saying you have some sequence you're trying to generate, and you feed that into the decoder, the RNN language model, and then it predicts what words are going to come next. The super important thing is that during training, we feed the gold — that is, the reference target sentence — into the decoder, regardless of what the decoder is predicting. So even if this is a very bad decoder that isn't predicting the correct words at all, that doesn't matter: we still input the gold target sequence into the decoder. I'm emphasizing this because it's going to come up later: this training method is called teacher forcing, which might be a phrase you've come across elsewhere. It refers to the fact that the teacher — that is, the gold input — is forcing the language model to use it on every step, instead of using its own predictions on each step. So that's how you train an RNN language model, which might be conditional. Okay. So now a recap on decoding algorithms. You've got your trained language model, which might be conditional; the question is, how do you use it to generate text? The answer is: you need a decoding algorithm.
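Before defining decoding algorithms in general, here's a minimal sketch contrasting the two modes just described — teacher-forced training versus step-by-step generation — using a toy LSTM language model (all names and sizes here are hypothetical):

```python
import torch
import torch.nn as nn

V = 1000                                   # toy vocabulary size
emb = nn.Embedding(V, 64)
rnn = nn.LSTM(64, 128, batch_first=True)
out = nn.Linear(128, V)

def train_step(gold):
    """gold: (batch, T) word ids of the target sentence from the corpus."""
    h, _ = rnn(emb(gold[:, :-1]))          # teacher forcing: feed the GOLD prefix,
    logits = out(h)                        # regardless of the model's own predictions
    return nn.functional.cross_entropy(
        logits.reshape(-1, V), gold[:, 1:].reshape(-1))

def greedy_decode(start_id, max_len=20):
    """At test time, the model feeds its OWN prediction back in on each step."""
    ids, state = [start_id], None
    with torch.no_grad():
        for _ in range(max_len):           # in practice you'd also stop at an END token
            h, state = rnn(emb(torch.tensor([[ids[-1]]])), state)
            ids.append(out(h[:, -1]).argmax(-1).item())   # most probable word
    return ids
```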
A decoding algorithm is an algorithm you use to generate text from your trained language model. In the NMT lecture a few weeks ago, we learned about two different decoding algorithms: greedy decoding and beam search. Let's quickly recap those. Greedy decoding is a pretty simple algorithm: on each step, you just take the most probable word according to the language model — you take the argmax — use that as the next word, and feed it in as the input on the next step. You keep going until you produce some kind of END token, or maybe until you reach some maximum length. I think you're all quite familiar with this, because you did it in assignment five; this diagram shows how greedy decoding would work to generate a sentence. As we learned before, due to the lack of backtracking — the inability to go back if you made a wrong choice — the output from greedy decoding is generally pretty poor: it can be ungrammatical, unnatural, kind of nonsensical. Okay, let's recap beam search decoding. Beam search is a search algorithm which aims to find a high-probability sequence — if we're doing translation, that sequence is the sequence of translated words — by tracking multiple possible sequences at once. The core idea is that on each step of the decoder, you keep track of the k most probable partial sequences, which we call hypotheses; here k is a hyperparameter called the beam size. The idea is that by considering lots of different hypotheses, we search effectively for a high-probability sequence, but there is no guarantee that this is going to be the optimal, highest-probability sequence. At the end of beam search, you reach some kind of stopping criterion — which we talked about before, but I won't cover in detail again — and once you've reached your stopping criterion, you choose the sequence with the highest probability, factoring in some adjustment for length, and that's your output. So just to see this one more time: here's the diagram that we saw in the NMT lecture of beam search decoding once it's completed, and in this scenario we have a beam size of two. This is what it looks like after the exploration: it shows the full tree that we explored, and then we've come to some kind of stopping criterion and we identify the top hypothesis, highlighted in green. On the subject of beam search decoding — I was watching TV the other day, and I noticed something in Westworld. I think the AI hosts in Westworld maybe use beam search, [LAUGHTER] which is something I wasn't expecting to see on TV. Westworld, by the way, is a sci-fi series that has these very convincing humanoid AI systems, and there's a scene where one of the AI systems is confronted with the reality that she is, I suppose, not human, because she sees the generation of her words as she says them. I was looking at the TV and I thought: is that beam search? Because that diagram looks a lot like this diagram here, maybe with a bigger beam size. So I thought that was pretty cool, because, you know, AI has hit the mainstream when you see beam search on TV.
And if you zoom in really hard, you can see some other exciting words in this screenshot, like knowledge base, forward chaining and backward chaining — not the same thing as forward prop and backprop — and also fuzzy logic algorithms and neural net. So yeah, beam search has hit the mainstream now; it's good enough for Westworld, maybe it's good enough for us. So with beam search, we've talked about how you have this hyperparameter k, the beam size. One thing we didn't talk about in the last lecture — so now we're leaving the recap portion — is: what's the effect of changing that beam size k? If you have a really small k, then you're going to have similar problems to greedy decoding — and in fact, if k equals one, then you are just doing greedy decoding. So those same problems: ungrammatical, maybe unnatural, nonsensical, just plain incorrect output. If you have a larger beam size, then you're running your search algorithm but considering more hypotheses — you have a larger search space, and you're considering more different possibilities. If you do that, then we often find that this reduces some of the problems above: you're much less likely to get this ungrammatical, disjointed output. But there are some downsides to raising k. Of course, larger k is more computationally expensive, and that can get pretty bad if you're trying to generate outputs for a large test set of NMT examples. But more seriously than that, increasing k can introduce some other problems. For example, it's been shown that in NMT, increasing the beam size too much actually decreases the BLEU score. And this is kind of counter-intuitive, right? Because we were thinking of beam search as this algorithm that tries to find the optimal solution — surely, if you increase k, then you're only going to find a better solution? I think the key here is the difference between optimality in terms of the search problem — that is, finding a high-probability sequence — and BLEU score, which are two separate things, and there's no guarantee that they actually correspond. And there's a difference, again, between BLEU score and actual translation quality, as we know. If you look at the two papers I've linked to here, which are the ones showing that increasing beam size too much decreases the BLEU score, they explain that the main reason this happens is that when you increase the beam size too much, you end up producing translations that are too short. That kind of explains it, to a degree: the translations are too short, therefore they have low BLEU, because they're probably missing words they should contain. But the question of why a large beam size gives you short translations is harder to answer — in these two papers, I didn't see an explicit explanation of why. I think it's possibly part of a larger pattern we sometimes see with beam search: when you really increase your search space and make the search much more powerful, so that it can consider lots of different alternatives, it can end up finding these high-probability sequences which aren't actually the thing that you want. Sure, they're high probability, but they're not actually the thing that you wanted.
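One concrete way to see the length effect: a hypothesis's score is its summed log-probability, so every extra word adds another negative term. A tiny sketch with made-up per-word probabilities:

```python
import math

short = [0.6, 0.5]                    # a 2-word hypothesis
long = [0.6, 0.7, 0.7, 0.7, 0.6]      # a 5-word hypothesis, each step fairly confident

def score(probs):
    return sum(math.log(p) for p in probs)   # summed log-probability

print(score(short))                # ~ -1.20: raw log-prob favors the short output
print(score(long))                 # ~ -2.09
print(score(short) / len(short))   # ~ -0.60
print(score(long) / len(long))     # ~ -0.42: with length normalization, long wins
```

This is part of why the final ranking step usually factors in some adjustment for length, as mentioned above.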
Another example of that is that in open-ended tasks — for example, chit-chat dialogue, where you're trying to just say something interesting back to your conversational partner — if we use beam search with a large beam size, we find that it can give you output that is really generic. I'll give you an example here to show you what I mean. These are examples from a chit-chat dialogue project that I was doing. Here, your human chit-chat partner said something like, "I mostly eat a fresh and raw diet, so I save on groceries," and here's what the chatbot said back, depending on the beam size — I will let you read that. I would say that this is fairly characteristic of what you see happening when you raise and lower the beam size. When you have a low beam size, it might be more on-topic — here we can see "eat healthy, eat healthy, I am a nurse so I do not eat raw food" and so on, which kind of relates to what the user said — but it's kind of bad English, right? There's some repetition, and it doesn't always make that much sense. But then, when you raise the beam size, it kind of converges to a safe, so-called correct response, but it's generic and less relevant — "what do you do for a living" is kind of applicable in all scenarios. The particular dataset I was using here is one called Persona-Chat, which I'll tell you more about later. It's a chit-chat dialogue dataset where each conversational partner has a persona, which is a set of traits — the reason it keeps talking about being a nurse, I think, is because that was in the persona. But the main point here is that we kind of have an unfortunate trade-off with no very obvious Goldilocks zone: a trade-off between having bad output — bad English — and just having something very boring. So this is one of the problems that we get with beam search. Okay. So we've talked about greedy decoding and beam search. Yes? "So beam size depending on the [inaudible]?" The question is, can we have an adaptive beam size dependent on the position that you're in — you mean, like, in the sequence? "Yeah, that is in [inaudible]." Yeah. I think I might have heard of a research paper that does that — that adaptively raises the capacity of the hypothesis space. It sounds awkward to implement, because of things fitting into a fixed space in your GPU, but I think that might be possible; I suppose you would have to learn the criterion on which you increase the beam size. Yeah, seems possible. Okay. So we've talked about beam search and greedy decoding. Here's a new family of decoding algorithms, which are pretty simple: sampling-based decoding. First, something which I'm calling pure sampling, because I didn't know what else to call it. This is just the simple sampling method that says that on each timestep t of your decoder, you just randomly sample from the probability distribution to obtain your next word. So this is very simple — it's just like greedy decoding, but instead of taking the top word, you just sample from that distribution. The reason I call this pure sampling was to differentiate it from top-n sampling.
And again, this is actually usually called top-k sampling, but I already called k the beam size and I didn't want to be confusing, so I'm going to call it top-n sampling for now. The idea here is also pretty simple: on each step t, you want to randomly sample from your probability distribution, but you're going to restrict to just the top n most probable words. So this is like the simple, pure sampling method, but you truncate your probability distribution to just the top most probable words. And kind of like how beam search gave you a hyperparameter to go between greedy decoding and a very exhaustive search, in the same way here you've got a hyperparameter n which can take you between greedy search and pure sampling. If you think about this for a moment: if n is one, then you truncate to the top one, so you're just taking the argmax, which is greedy; and if n is the vocabulary size, then you don't truncate at all — you're sampling from everything, which is just the pure sampling method. So it should be clear, I hope, if you think about it, that if you increase n, then you're going to get more diverse and risky output, because you're giving it more to choose from and you're going lower into the probability distribution, into less likely things. And if you decrease n, then you're going to get more generic, safe output, because you're restricting to the most high-probability options. Both of these are more efficient than beam search, which I think is important to note, because there are no multiple hypotheses to track. In beam search, on every step t of the decoder, you've got k — beam-size-many — hypotheses to track, whereas here, at least if you're only generating one sample, there's only one thing to track. So it's a very simple algorithm. And that is one advantage of these sampling-based algorithms over beam search. Okay. So the last thing I want to tell you that's related to decoding is softmax temperature. If you recall, on timestep t of your decoder, your language model computes some probability distribution P_t by applying the softmax function to a vector of scores that you got from somewhere — from your Transformer or from your RNN or something. So there's the softmax function again: it's saying that the probability of a word w is this softmax function of the scores. The idea of a temperature on the softmax is that you have some temperature hyperparameter tau, and you're going to apply that to this softmax. All we're doing is dividing all of the scores — or logits, you might call them — by the temperature hyperparameter. Again, if you just think about this a little bit, you'll see that raising the temperature — that is, increasing the hyperparameter — is going to make your probability distribution more uniform. And this kind of comes down to the question: when you multiply all of your scores by a constant, how does that affect the softmax? Do things get further apart or less far apart once you take the exponential?
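Here's a little sketch you can run to answer that empirically, covering both top-n sampling and temperature (the logits here are made up):

```python
import numpy as np

def softmax(scores, temperature=1.0):
    s = scores / temperature          # raise tau -> flatter; lower tau -> spikier
    e = np.exp(s - s.max())           # subtract the max for numerical stability
    return e / e.sum()

def top_n_sample(scores, n, temperature=1.0):
    probs = softmax(scores, temperature)
    top = np.argsort(probs)[-n:]              # keep only the n most probable words
    p = probs[top] / probs[top].sum()         # renormalize over the truncation
    return np.random.choice(top, p=p)

scores = np.array([3.0, 2.5, 1.0, -1.0])      # scores over a tiny toy vocabulary
print(softmax(scores, temperature=5.0))       # more uniform
print(softmax(scores, temperature=0.5))       # more spiky
print(top_n_sample(scores, n=2))              # n=1 is greedy; n=|V| is pure sampling
```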
So this is something you can work out by yourself on paper, but as a kind of memory shortcut, a good way to think about it is: if you raise the temperature, then the distribution kind of melts and goes soft and mushy and uniform; and if you lower the temperature — make it cold — then the probability distribution becomes more spiky, so the things which are rated as high probability become even more disproportionately high probability compared to the other things. I think that's an easy way to remember it. Today I had to work it out on paper, and then I realized that the temperature visualization usually gets me there quicker. One thing I want to note is that softmax temperature is not a decoding algorithm. I know that I put it in the decoding algorithms section; that was just because it's a simple thing that you can do at test time to change how the decoding happens. You don't need to train with the softmax temperature, so it's not a decoding algorithm itself — it's a technique that you can apply at test time in conjunction with a decoding algorithm. For example, if you're doing beam search or you're doing some kind of sampling, then you can also apply a softmax temperature to change this kind of risky-versus-safe trade-off. Any questions on this? Okay. So here's a summary of what we just learned about decoding algorithms. Greedy decoding is a simple method; it gives kind of low-quality output in comparison to the others, at least beam search. Beam search, especially when you've got a high beam size, searches through lots of different hypotheses for high-probability outputs, and this generally delivers better quality than greedy search — but if the beam size is too high, then you can have these counter-intuitive problems we talked about before, where you've retrieved some high-probability but unsuitable output, say something too generic or too short. And we're going to talk about that more later. Sampling methods are a way to get more diversity via randomness — or getting randomness might be your goal in itself. This is good if you have some kind of open-ended or creative generation setting, like generating poetry or stories; there, sampling is probably a better idea than beam search, because you want a source of randomness to write different things creatively. And top-n sampling allows you to control the diversity by changing n. Lastly, softmax temperature is another way to control diversity — so there are quite a few different knobs you can turn here. It's not a decoding algorithm; it's a technique that you can apply alongside any decoding algorithm, although it wouldn't make sense to apply it with greedy decoding, because even if you make the distribution more spiky or more flat, the argmax is still the argmax. Okay, cool. I'm going to move on to section two: NLG tasks and neural approaches to them. As mentioned before, this is not going to be an overview of all of NLG — that would be quite impossible — it's going to be some selected highlights. In particular, I'm going to start off with a fairly deep dive into a particular NLG task that I'm a bit more familiar with, and that is summarization. So let's start off with a task definition for summarization.
One sensible definition would be: given some kind of input text x, you want to write a summary y which is shorter than x and contains the main information of x. Summarization can be single-document or multi-document. Single-document means that you just have a summary y of a single document x. In multi-document summarization, you want to write a single summary y of multiple documents x_1 up to x_n, and here typically x_1 up to x_n will have some kind of overlapping content. For example, they might all be different news articles from different newspapers about the same event, because it kind of makes sense to write a single summary that draws from all of those — it makes less sense to summarize things that are about different topics. There is further subdivision of task definitions in summarization, which I'm going to describe via some datasets. Here are some really common datasets, especially in neural summarization, and they kind of correspond to different lengths and different styles of text. A common one is the Gigaword dataset: the task here is to map from the first one or two sentences of a news article to the headline. You could think of this as sentence compression, especially if it's one sentence to headline, because you're going from a longish sentence to a shortish, headline-style sentence. The next one I wanted to tell you about is a Chinese summarization dataset, but I see people using it a lot; it's from a micro-blogging website where people write summaries of their posts. So the actual summarization task is: you've got some paragraph of text, and you want to summarize it into, I think, a single-sentence summary. Another two are the New York Times and CNN/Daily Mail datasets. These are both of the form: you've got a whole news article, which is actually pretty long — hundreds of words — and then you want to summarize it into maybe a single-sentence or multi-sentence summary. The New York Times ones are written by, I think, librarians, or people who write summaries for library purposes. And then one I just spotted today when I was writing this list: there's a fairly new dataset — from the last six months — from wikiHow. From what I can tell, you've got a full how-to article from wikiHow, and you want to boil it down to the summary sentences, which are kind of cleverly extracted from throughout the wikiHow article; they're kind of like headings. I looked at this paper, and it seems kind of interesting, because it's a different type of text — as you might have noticed, most of the other ones are news-based and this is not, so that poses different challenges. Another kind of division is sentence simplification, which is a related but actually different task. In summarization, you want to write something which is shorter and contains the main information, but it may still be written in just as complex language, whereas in sentence simplification, you want to rewrite the source text using simpler language — simpler word choices and simpler sentence structure. That might mean it's shorter, but not necessarily. For example, Simple Wikipedia is a standard dataset for this.
The idea is you've got standard Wikipedia and you've got the Simple Wikipedia version, and they mostly align, so you want to map from some sentence in one to the equivalent sentence in the other. Another source of data for this is Newsela, which is a website that rewrites news for children — actually, at different reading levels, I think, so you have multiple options for how much it's simplified. Okay. So that's the definition — or the many definitions — of summarization as different tasks. Now I'm going to give an overview of the main techniques for doing summarization. There are two main strategies for summarization; you can call them extractive summarization and abstractive summarization. The main idea, as I hinted at earlier, is that in extractive summarization, you're just selecting parts of the original text to form a summary — often this will be whole sentences, but maybe it'll be more granular than that: phrases or words — whereas in abstractive summarization, you're going to be generating new text using NLG techniques; the idea is that it's generation from scratch. My visual metaphor for this is the difference between highlighting the parts with a highlighter and writing the summary yourself with a pen. I think the high-level things to know about these two techniques are: extractive summarization is basically easier, at least for making a decent system to start with, because selecting things is probably easier than writing text from scratch. But extractive summarization is pretty restrictive: you can't really paraphrase anything, and you can't really do any powerful sentence compression if you can only select whole sentences. And of course, abstractive summarization as a paradigm is more flexible, and it's more how humans might summarize, but as noted, it's pretty difficult. So I'm going to give you a very quick view of what pre-neural summarization looks like; this is a diagram from the Speech and Language Processing book. Pre-neural summarization systems were mostly extractive, and like pre-neural NMT, which we learned about in the NMT lecture, they typically had a pipeline, which is what this picture is showing. A typical pipeline might have three parts. First, you have content selection, which is essentially choosing some of the sentences from the source document to include. Second, you're going to do some kind of information ordering, which means choosing what order to put these sentences in — a particularly non-trivial question if you're doing multi-document summarization, because your sentences might come from different documents. And lastly, you're going to do sentence realization — that is, actually turning your selected sentences into your actual summary. Although we're not doing free-form text generation, there might be some kind of editing — for example, simplifying, or removing parts that are redundant, or fixing continuity issues. For example, you can't refer to a person as "she" if you never introduced her in the first place, so maybe you need to change that "she" to the name of the person. In particular, these pre-neural summarization systems have some pretty sophisticated algorithms for content selection. So, for example, you would have some sentence scoring functions.
The simplest way you might do it is to score all of the sentences individually, based on features such as: are there topic keywords in the sentence? If so, maybe it's an important sentence that we should include. You could compute those keywords using statistics such as tf-idf, for example. You can also use pretty basic but powerful features, such as where the sentence appears in the document — if it's near the top of the document, then it's more likely to be important. There are also some more complex content selection algorithms. For example, there are these graph-based algorithms, which kind of view the document as a set of sentences, where those sentences are the nodes of a graph; you imagine that all sentence pairs have an edge between them, and the weight of the edge is how similar the sentences are. So then, if you think about the graph in that sense, you can try to identify which sentences are important by finding which sentences are central in the graph — you can apply some general-purpose graph algorithms to figure out which nodes are central, and this is a way to find central sentences. Okay. So back to summarization as a task. I can't remember if we've talked about ROUGE already — we've certainly talked about BLEU — but I'm going to tell you about ROUGE now, which is the main automatic metric for summarization. ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation; I'm not sure if that was the first thing they came up with, or if they made it like that to match BLEU. And here's the equation for — well, I suppose — one of the ROUGE metrics. I'll tell you more about what that means later, and you can read more in the original paper, which is linked at the bottom. The overall idea is that ROUGE is actually pretty similar to BLEU: it's based on n-gram overlap. So some main differences with BLEU: ROUGE doesn't have a brevity penalty — I'll talk more about that in a minute — and the other big one is that ROUGE is based on recall, while BLEU is based on precision; you can see it right there in the name. If you think about this a little bit, I think you can say, arguably, that precision is more important for machine translation: you only want to generate text that appears in one of your reference translations, and then, to stop the system taking a really conservative strategy where it only generates really safe things in a really short translation, you add the brevity penalty to make sure it tries to write something long enough. By contrast, recall is more important for summarization, because you want to include all the important information in your summary — the information that's in the reference summary is assumed to be the important information, so recall means that you captured all of that. And I suppose, if you assume that you have a maximum length constraint for your summarization system, then those two kind of give a trade-off: you want to include all the information, but you can't be too long as a summary. So I think that's the kind of justification for why you have recall and precision for these two different tasks.
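As a sketch, the recall computation for ROUGE-n boils down to clipped n-gram counting (the official script adds details like stemming and handling multiple references):

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(reference, candidate, n):
    # recall: what fraction of the reference's n-grams the candidate recovers
    ref, cand = ngrams(reference, n), ngrams(candidate, n)
    overlap = sum(min(count, cand[g]) for g, count in ref.items())
    return overlap / max(sum(ref.values()), 1)

ref = "the cat sat on the mat".split()
cand = "the cat lay on the mat".split()
print(rouge_n_recall(ref, cand, 1))   # 5/6 of the reference unigrams recovered
print(rouge_n_recall(ref, cand, 2))   # 3/5 of the reference bigrams recovered
```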
However, confusingly, an F1 — that is, a combination-of-precision-and-recall — version of ROUGE is often reported anyway in the summarization literature. And to be honest, I'm not entirely sure why this is; maybe it's because of the lack of an explicit max-length constraint. Anyway, I tried to search for that, but I couldn't find an answer. So here's some more information on ROUGE. If you remember, BLEU is reported as a single number, which is a combination of the precisions for the different n-grams, usually 1 through 4, whereas ROUGE scores are usually reported separately for each n-gram. The most commonly reported ROUGE scores are ROUGE-1, ROUGE-2, and ROUGE-L. So, ROUGE-1 — not to be confused with Rogue One: A Star Wars Story; I feel like since that film came out, I see so many people mistyping this, and I think it's related — is based on unigram overlap, and ROUGE-2 is based on bigram overlap. It's kind of an analogy to BLEU, really, except recall-based, not precision-based. The more interesting one is ROUGE-L, which is longest-common-subsequence overlap: the idea here is that you are interested not only in particular n-grams matching up, but in how long a sequence of words you can find that appears in both. You can read more about these metrics in the paper that was linked on the previous page. Another really important thing to note is that there's now a convenient Python implementation of ROUGE. Maybe it's not apparent why that's exciting, but it's actually pretty exciting, because for a long time there was just this Perl script that was quite hard to run and quite hard to set up and understand. So someone out there has been a hero and has implemented a pure Python version of ROUGE, and checked that it really does match up to the Perl script that people were using before. So if any of you are using ROUGE or doing summarization for your projects, make sure that you go use that, because it will probably save you some time. Okay. We're going to return to ROUGE a little bit later. I know that in assignment 4 you thought about the shortcomings of BLEU as a metric, and for sure ROUGE has some shortcomings as well as a metric for summarization; we're going to come back to that later. Okay, so we're going to move on to neural approaches to summarization. Going back to 2015 — I don't have another dramatic reenactment, I'm afraid — Rush et al. published the first seq2seq summarization paper. They were viewing this as: NMT has recently been super successful, so why don't we view abstractive summarization as a translation task and therefore apply standard translation seq2seq methods to it? That's exactly what they did: they applied a standard attention model, and they did a pretty good job at Gigaword summarization. That's the one where you're converting from the first sentence of the news article to the headline, so it's kind of like sentence compression. Crucially, this is the same order of magnitude of length as NMT: NMT is sentence-to-sentence, and this is kind of sentence-to-sentence, maybe at most two sentences to one. So this works pretty well, and you can get pretty decent headline generation or sentence compression using this kind of method. Okay.
So after that, since 2015, there have been lots more developments in neural abstractive summarization, and you can kind of group these developments into a collection of themes. One theme is: make it easier to copy. This seems pretty obvious, because in summarization you're going to want to copy quite a few words and even phrases. But don't copy too much — that's the other thing: if you make it too easy to copy, then you copy too much, so there's other research showing how to prevent too much copying. The next theme is some kind of hierarchical or multi-level attention. As I just showed, attention has been pretty key to abstractive neural summarization so far, so there's been some work looking at whether we can make this attention work at a coarse, high level and a fine, low level, so that we can maybe do our selection at the high level and at the low level. Another theme, which is kind of related, is having more global content selection. If you remember, when we were talking about the pre-neural summarization pipelines, they had these separate content selection algorithms, and I think you can say that naive seq2seq with attention is not necessarily the best way to do content selection for summarization; maybe you want a more global strategy where you choose what's important. It's not so apparent here when you're doing this small-scale summarization, but if you imagine that you're summarizing a whole news article, then deciding on each decoder step which information to choose doesn't seem like the most global strategy. What else have we got? There's using reinforcement learning to directly maximize ROUGE, or other discrete goals you might care about, such as maybe the length of the summary — and I say discrete here because ROUGE is a non-differentiable function of your generated outputs; there's no easy way to differentiably learn that during training in the usual way. My last point on this list is the theme of resurrecting pre-neural ideas, such as those graph algorithms I mentioned earlier, and working them into these new seq2seq abstractive neural systems — and I'm sure there's more as well. So I'm going to show you a few of these, especially because even if you're not particularly interested in summarization, a lot of the ideas that we're going to explore here are actually applicable to other areas of NLG, or just other areas of NLP deep learning. The first thing on the list is making it easier to copy, which seems like probably the first thing you'd want to fix if you've just got basic seq2seq with attention. So, a copy mechanism — which can exist outside of summarization. The reason you want this is that basic seq2seq with attention is good at writing fluent output, as we know, but pretty bad at copying over details like rare words correctly. So a copy mechanism is just the sensible idea of having an explicit mechanism to just copy over words: for example, you could use the attention distribution to select what you're going to copy. And if you are allowing both copying over words and generating words in the usual way with your language model, then now you've got a kind of hybrid extractive/abstractive approach to summarization.
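The usual way to combine the two, made precise in the equation described next, is to mix the generation distribution and the attention distribution with a scalar. Here's a sketch with toy numbers (not any particular paper's exact implementation):

```python
import numpy as np

def copy_mixture(p_gen, gen_dist, attn_dist, src_ids):
    """p_gen: probability of generating (vs. copying).
    gen_dist: (|V|,) softmax output; attn_dist: (src_len,) attention weights;
    src_ids: vocab id of each source position."""
    final = p_gen * gen_dist
    for pos, word_id in enumerate(src_ids):
        final[word_id] += (1 - p_gen) * attn_dist[pos]   # route copy mass to source words
    return final                                         # still sums to one

dist = copy_mixture(0.7,
                    np.full(10, 0.1),          # uniform toy generation distribution
                    np.array([0.8, 0.2]),      # attention mostly on the first source word
                    np.array([3, 7]))          # that word has vocab id 3
print(dist.sum())   # 1.0
print(dist[3])      # 0.7*0.1 + 0.3*0.8 = 0.31
```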
So there are several papers which propose copy mechanism variants, and I think the reason there are multiple is that there are a few different choices you can make about how to implement this, which gives a few different versions of the copy mechanism. So there are several papers here which you can look at; I'm going to show you a diagram from a paper that I did a few years ago with Chris. This is just one example of how you can do a copying mechanism. The way we did it is: on each decoder step, you calculate this probability Pgen, which is the probability of generating the next word rather than copying it, and the idea is that this is computed based on your current context, your current decoder hidden state. Then, once you've done that, you've got your attention distribution as normal and your output generation distribution as normal, and you use this Pgen, which is just a scalar, to mix together these two probability distributions. What this equation is telling you is that the final output distribution for what word comes next is the probability of generating, times the distribution over what you would generate, plus the probability of copying, times what you're attending to at that time. So the main thing is: you're using attention as your copying mechanism. Attention is doing double duty here — it's useful for the generator, which might choose to rephrase things, but it's also useful as a copying mechanism. And I think that's one of the several things that these different papers do differently: I've seen a paper that maybe has two separate attention distributions, one for the copying and one for the attending. Other choices you can make differently are, for example: do you want Pgen to be a soft thing that's between zero and one, or do you want it to be a hard thing that has to be either zero or one? You can also make decisions about whether Pgen should have supervision during training — do you want to annotate your dataset, saying "these things are copied, these things are not," or do you want to just learn it end-to-end? So there are multiple ways you can do this, and this has now become pretty standard. Okay, so copy mechanisms seem like a sensible idea, but there's a big problem with them, which is what I mentioned earlier: they copy too much. When you run these kinds of systems on summarization, you find that they end up copying a lot of long phrases and sometimes even whole sentences, and unfortunately your dream of having an abstractive summarization system isn't going to work out, because your copy-augmented seq2seq system has just collapsed into a mostly extractive system, which is unfortunate. Another problem with these copy mechanism models is that they are bad at overall content selection, especially if the input document is long — and this is what I was hinting at earlier. Let's suppose that you are summarizing something that's quite long, like a news article that's hundreds of words long, and you want to write a several-sentence summary.
So copy mechanisms seem like a sensible idea, but there's a big problem with them, which is what I mentioned earlier: they copy too much. When you run these kinds of systems on summarization, you find that they end up copying a lot of long phrases and sometimes even whole sentences, and unfortunately your dream of an abstractive summarization system isn't going to work out, because your copy-augmented seq2seq system has collapsed into a mostly extractive system. Another problem with these copy mechanism models is that they're bad at overall content selection, especially if the input document is long, and this is what I was hinting at earlier. Suppose you're summarizing something quite long, like a news article that's hundreds of words, and you want to write a several-sentence summary. It doesn't seem like the smartest choice to re-decide, on every step of writing that summary, what to attend to, what to select, what to summarize. It seems better to make a global decision at the beginning and then summarize. So the problem is that there's no overall strategy for selecting the content. How might you do better content selection for neural summarization? If you remember, in the pre-neural summarization we looked at, you had completely separate stages in the pipeline: a content selection stage and a surface realization, that is, text generation, stage. But in our seq2seq-with-attention systems, these two stages are completely mixed together: you do step-by-step surface realization, and on each step you're also doing content selection. So I found a paper, published I think last year, which gives a quite nice, simple solution to this problem; it's called bottom-up summarization. If you look at the figure, the main idea is pretty simple. First you have a content selection stage, treated as a neural sequence tagging problem: you run through the source document and tag every word as include or don't-include, deciding what seems important enough to make it into the summary. Then the bottom-up attention stage says that you run a seq2seq-with-attention system to generate the summary, but you apply a mask, a hard constraint, saying the decoder can't attend to words that were tagged don't-include (see the sketch after this passage). This turns out to be simple but effective, because it's a better overall content selection strategy: by doing the content selection first, via sequence tagging, you're doing the selection without simultaneously doing the generation, which turns out to be a better way to decide what to include. And separately, as a nice side effect, you get less copying of long sequences in the generation model: if you're not allowed to attend to things you shouldn't include, it's hard to copy a long sequence. If you want to copy a whole sentence but the sentence contains don't-include words, you can't copy it wholesale; you have to break it up. So what the model ends up doing is skipping around the parts it's meant to include, and it's forced to be more abstractive to stitch the parts together. Student: How did they backpropagate the masking decision? Because during training [inaudible] masking decision. Professor: Yeah, I think it might be trained separately. You can go and check the paper; I've read a lot of papers in the last few days and can't quite remember. I think it might be trained separately, and they may have tried training it jointly but it didn't work as well. I'm not sure; you can check it out.
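Here's a minimal sketch of the hard attention mask just described. The thresholding of the tagger's include-probabilities and the exact place in the pipeline where the mask is applied are my assumptions; the paper's actual system differs in details (for example, how the selector is trained and how probabilities are renormalized).

```python
import torch

def mask_attention(attn_scores, p_include, threshold=0.5):
    """Apply a hard content-selection mask before the attention softmax.

    attn_scores: (batch, src_len) raw attention scores from the decoder
    p_include:   (batch, src_len) content selector's P(include) per source word
    """
    # Positions tagged "don't include" get -inf, so they receive zero
    # attention mass after the softmax.
    masked = attn_scores.masked_fill(p_include < threshold, float("-inf"))
    return torch.softmax(masked, dim=-1)
```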
Okay, so another paper I want to tell you about used reinforcement learning to directly maximize ROUGE for neural summarization. This was a paper from two years ago, and the main idea is to use RL to directly optimize the metric, in this case ROUGE-L. By contrast, standard maximum likelihood training, the training objective we've been talking about for the whole class so far for language models, can't directly optimize ROUGE-L, because it's a non-differentiable function. So they compute the ROUGE score during training and use a reinforcement learning technique to backpropagate a signal to the model (a minimal sketch of that style of objective follows below). The interesting finding from this paper is that if they use only the RL objective, they do indeed get higher ROUGE scores: they successfully optimize the ROUGE-L metric they were aiming at. But the problem is that when you do that, you get lower human judgment scores. On the right we see that the RL-only model has pretty bad readability and relevance human-judgment scores; it's worse than the plain maximum likelihood supervised system. There's a quote from their blog post that says, "We have observed that our models with the highest ROUGE scores also generated barely readable summaries." So this is a problem: if you directly optimize for the metric, you may find you're gaming the metric rather than optimizing for the true task. Just as BLEU is not a perfect proxy for actual translation quality, ROUGE is not a perfect proxy for summarization quality. But they did do something cool: they found that if you combine the two objectives, the word-level maximum likelihood objective and the objective of producing a summary with a high ROUGE score, then you get a better human judgment score, which in the end is the closest thing we have to a measure of actual summarization quality.
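Here's a minimal sketch of a self-critical REINFORCE loss mixed with maximum likelihood, which is roughly the shape of objective this line of work (Paulus et al.) uses. The exact mixing weight, the greedy-decode baseline, and the assumption that a `ROUGE-L` score is computed externally and passed in as a float are illustrative choices of mine, not the paper's exact code.

```python
import torch

def mixed_summarization_loss(sampled_logps, sampled_rouge, greedy_rouge,
                             ml_loss, gamma=0.9):
    """Self-critical REINFORCE mixed with maximum likelihood.

    sampled_logps: (seq_len,) log-probs of a *sampled* summary's tokens
    sampled_rouge: ROUGE-L of the sampled summary (plain float, computed
                   outside the graph since ROUGE is non-differentiable)
    greedy_rouge:  ROUGE-L of the greedy-decoded summary, used as a
                   variance-reducing baseline
    ml_loss:       the usual teacher-forced cross-entropy loss (a tensor)
    """
    advantage = sampled_rouge - greedy_rouge       # reward above baseline
    rl_loss = -advantage * sampled_logps.sum()     # REINFORCE estimator
    return gamma * rl_loss + (1.0 - gamma) * ml_loss
```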
Okay, so I'm going to move on to dialogue, which is a different family of NLG tasks. Dialogue encompasses a really large variety of settings, and we're not going to cover them all, but here's an overview of the different kinds of tasks people might mean when they say dialogue. There's task-oriented dialogue, which refers to any setting where you're trying to get something done in the conversation. For example, there are assistive tasks, where the dialogue agent is trying to help a human user do something: giving customer service or recommendations, answering questions, helping a user buy or book something. These are the kinds of tasks the virtual assistants on your phone can do, or can sort of do. Another family of task-oriented dialogue is cooperative tasks, where two agents try to solve a task together via dialogue. The opposite of that is adversarial: two agents competing in a task, with the competition conducted through dialogue. And the opposite of task-oriented dialogue is social dialogue, where there's no explicit task other than, I suppose, to socialize. So chit-chat dialogue is dialogue you do just for social fun or for company. I've also seen some work on therapy or mental well-being dialogue; I'm not sure whether that belongs under task-oriented or social, it's kind of a mix, but these are the settings where the goal is to offer emotional support to the human user. As a very brief overview of how the deep learning renaissance changed dialogue research: in the pre-deep-learning era, the difficulty of open-ended, free-form natural language generation meant that dialogue systems often weren't doing free-form NLG at all. They might use predefined templates, where you fill in slots with content, or retrieve an appropriate response from a corpus of responses. These were by no means simple systems; they had complex components for tracking dialogue state, deciding which template to use, and all the natural language understanding needed to interpret the context so far. But one effect of deep learning is that since around 2015, when NMT became standard, there have been, just as with summarization, lots of papers applying seq2seq methods to dialogue, and this has led to renewed interest in open-ended, free-form dialogue systems. If you want to see what the early seq2seq dialogue papers looked like, here are two early ones, maybe the first to apply seq2seq. So people quickly applied seq2seq NMT methods to dialogue, but it quickly became apparent that naive application of standard NMT methods has serious, pervasive deficiencies when applied to a task like chit-chat dialogue, even more so than for summarization. What are some of these deficiencies? One is genericness, or boring responses; I'll go into more detail in a moment. Another is irrelevant responses: the dialogue agent says something back that's simply unrelated to what the user said. Another is repetition; this is pretty basic, but it happens a lot, both within an utterance and across utterances. Another difficulty is lack of context, that is, not remembering the conversation history. Obviously, if you don't condition on the whole conversation history, there's no way your dialogue agent can use it, but it's a genuine challenge, especially with a very long history, to figure out how to condition on it effectively. Another problem is the lack of a consistent persona. If you naively train a standard seq2seq model, as in the two papers I referenced on the previous slide, to take the user's last utterance, or even the whole dialogue history, and say something back, your dialogue agent will often have a completely inconsistent persona: one moment it says it lives in Europe, and the next it says it lives in, I don't know, China or something. It just doesn't make sense.
So I'm going to go through some of these problems and give you a bit more detail on them. First, the irrelevant response problem. In more detail: seq2seq often generates a response that's unrelated to the user's utterance. It can be unrelated because it's simply generic, which overlaps with the generic response problem, or because the model chooses to change the subject to something unrelated. One solution among many (a lot of different papers attack the irrelevant response problem) is to change the training objective. Instead of optimizing the mapping from input S to response T to maximize the conditional probability of T given S, you maximize the mutual information between them: roughly, you choose the response T that maximizes log P(T|S) minus a weighted log P(T), and if you want more detail you can look at the paper referenced here. The idea is that the response needs to be probable given the input, but as a ratio to its standalone probability: if T has very high likelihood on its own, it gets penalized. This is meant to discourage generic responses that simply have a high P(T) by themselves. So that's the irrelevant response problem, and as I just hinted, there's a strong link between it and the generic or boring response problem. So let's look at genericness. I think there are some pretty easy fixes that can, to a degree, ameliorate the boring response problem; whether they really get to the heart of the issue is a different question. Some easy test-time fixes: you can directly up-weight rare words during beam search, giving rare words a boost to their log probabilities so you're more likely to produce them; you can use a sampling decoding algorithm rather than beam search, which we talked about earlier; or you can use a softmax temperature (a small sketch of these decoding fixes appears after this passage). Those are test-time fixes, which you can regard as a late intervention. An earlier intervention would be training your model differently. I'll call these conditioning fixes, because they involve conditioning your model on something that will help it be less boring. One example: condition the decoder on some additional context. There's work showing that in chit-chat dialogue, you can sample some words related to what the user said and attend to them when you generate, and then you're more likely to say something contentful and interesting compared to the boring things you were saying before.
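Here's a minimal sketch of the test-time decoding fixes just mentioned, combining a softmax temperature with top-k sampling. The particular default values and the combination of the two tricks are my illustrative choices, not a prescription from any one paper.

```python
import torch

def sample_next_token(logits, temperature=0.7, top_k=40):
    """Sample the next token id from a language model's output logits.

    temperature < 1 sharpens the distribution, > 1 flattens it;
    top_k restricts sampling to the k most probable tokens.
    logits: (vocab_size,) unnormalized scores for the next token.
    """
    scaled = logits / temperature
    topk_vals, topk_idx = torch.topk(scaled, top_k)
    probs = torch.softmax(topk_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return topk_idx[choice].item()  # token id to feed back into the decoder
```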
Another option is to train a retrieve-and-refine model rather than a generate-from-scratch model. By retrieve-and-refine, I mean: suppose you have a corpus of general utterances, things you could say; you sample one from the training set and then edit it to fit the current situation. This turns out to be a pretty strong method for producing much more diverse, human-like, and interesting utterances, because you get all of that fine-grained detail from the sampled utterance and then edit it as necessary. There are downsides to these methods; for example, it can be hard to edit the retrieved utterance to actually fit the situation appropriately. But it's certainly an effective way to get more diversity and interest. On the subject of the repetition problem, which was another major problem we noticed when applying seq2seq to chit-chat: again, there are simple solutions and more complex ones. A simple solution is to block repeating n-grams during beam search, and this is usually quite effective. What I mean is that during beam search, when you're considering your K hypotheses, the top K in the probability distribution, anything that would constitute a repeating n-gram just gets thrown out. That is, if taking a candidate word would create a repeating bigram or trigram, and we've decided to ban all repeating bigrams or trigrams, then for every candidate word you check whether it would create a repeating n-gram, and discard it if so (see the sketch at the end of this passage). This works pretty well. It's by no means a principled solution; it feels like we should have a better way to learn not to repeat. But as an effective hack, it works pretty well. A more complex solution is to train something called a coverage mechanism. In seq2seq, and this is mostly inspired by the machine translation setting, a coverage mechanism is an objective that prevents the attention mechanism from attending to the same words too many times. The intuition is that repetition is caused by repeated attention: if you attend to the same things many times, you'll produce the same output many times, so if you prevent repeated attention, you prevent repeated output. This does work pretty well, but it's definitely more complex to implement and less convenient, and in some settings the simple solution is easier and works just as well.
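Here's a minimal sketch of the n-gram blocking check described above; the function name and the list-of-token-ids representation are mine for illustration. During beam search, you would call this for each candidate continuation and discard the ones that return True.

```python
def violates_ngram_block(hypothesis, candidate, n=3):
    """Return True if appending `candidate` to `hypothesis` (a list of
    token ids) would create an n-gram that already occurs in `hypothesis`."""
    if len(hypothesis) < n - 1:
        return False  # too short to complete any n-gram yet
    new_ngram = tuple(hypothesis[-(n - 1):] + [candidate])
    existing = {tuple(hypothesis[i:i + n])
                for i in range(len(hypothesis) - n + 1)}
    return new_ngram in existing
```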
Other complex solutions might define a training objective that discourages repetition. You could try to define something differentiable, but one difficulty is that because you train with teacher forcing, always conditioning on the gold inputs so far, you never actually generate your own output and start repeating yourself, so it's hard to define the penalty in that setting. So maybe this needs to be a non-differentiable function: much as the Paulus et al. paper optimized for ROUGE, maybe we optimize for not repeating, which is a discrete function of the output. I'm going to skip ahead to storytelling. There's a lot of interesting neural storytelling work going on right now, and most of it uses some kind of prompt to write a story: for example, writing a story given an image, or given a writing prompt, or writing the next sentence of a story given the story so far. Here's an example of generating a story from an image. What's interesting here is that we have an image, a picture of what appears to be an explosion, and then a story about the image, but written in the style of Taylor Swift lyrics. It says: "You have to be the only light bulb in the night sky, I thought. Oh god, it's so dark out of me that I missed you, I promise." What's interesting is that there was no straightforward supervised image-captioning dataset pairing explosions with Taylor Swift lyrics; they learned the pieces separately. How they did this is that they used a common sentence-encoding space, a particular kind of sentence encoding called skip-thought vectors. They trained a COCO image-captioning system to go from an image to the encoding of its caption sentence, and separately they trained a conditional language model to go from a sentence encoding to Taylor Swift-style lyrics. Because the two share the encoding space, you can put them together and go from the picture, to the embedding, to the Taylor Swift-style output, which I think is pretty amazing. Wow, I've really lost track of the time, so I think I have to hurry up quite a lot. We've also got some really impressive story generation systems recently. This is an example of a system where the authors prepared a new dataset in which you write a story given a prompt, and they built a very impressive, beefed-up convolutional seq2seq language model that generates the story given the prompt. I'm not going to go through all the details, but if you want to see the state of the art in story generation, you should check this out; there are a lot of interesting things going on with fancy attention, convolutions, and so on, and they manage to generate some really interesting, impressive stories. If you look at this example, the generation is diverse, non-generic, stylistically dramatic, which is good, and related to the prompt. But I think you can also see the limits of what a state-of-the-art story generation system can do: although it's in style, it's mostly atmospheric and descriptive. It's not really moving the plot forward; there are no events here. The problem gets even worse when you generate longer text: the model mostly stays on the same idea without moving forward to new ones. Okay, I'm going to skip forward a lot and, sorry, I ought to have planned better; there's a lot of information here about poetry generation and other things that you may want to check out.
I'm going to skip ahead because I want to get to the NLG evaluation section, because that's pretty important. So, we've talked about automatic evaluation metrics for NLG, and we know that the word-overlap-based metrics, such as BLEU, ROUGE, and METEOR, are not ideal even for machine translation. They're even worse for summarization, mostly because summarization is more open-ended than machine translation, which makes a rigid n-gram-matching criterion even less useful. And for something still more open-ended, like dialogue, they're just a disaster; they don't give you a good signal at all, and the same goes for anything else open-ended, like story generation. It's been shown, and you can check out the paper at the bottom, that word-overlap metrics are simply not a good fit for dialogue. The orange box shows plots of the correlation between human scores on a dialogue task and BLEU-2, a variation of BLEU, and the problem is that you're not seeing much correlation at all. On this dialogue setting, the correlation between the BLEU metric and the human judgment of whether a dialogue response is good looks close to non-existent; it's at least very weak. That's pretty unfortunate, and other papers show much the same thing. So you might think: what other automatic metrics can we use? What about perplexity? Perplexity certainly captures how powerful your language model is, but it doesn't tell you anything about generation. For example, if your decoding algorithm is bad in some way, perplexity won't tell you, because decoding is something you apply to your trained language model. Perplexity can tell you whether you have a strong language model, but not necessarily how good your generation is. Another thought you might have is word-embedding-based metrics. The main idea there is to compute the similarity of the word embeddings, or the average of the word embeddings across a sentence, rather than just the overlap of the words themselves. So rather than strictly counting only exact word matches, you say: if the words are similar in word-embedding space, they count. This is certainly more flexible, but unfortunately the same paper I showed before demonstrates that this doesn't correlate well with human judgments of quality either, at least for the dialogue task they looked at. The middle column shows the correlation between human judgments and an average-of-word-embeddings metric, and that doesn't look great either; not a strong correlation.
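For concreteness, here's a minimal sketch of that embedding-average metric. The `emb` dictionary (token to vector, e.g. pre-loaded GloVe vectors) and the guard for out-of-vocabulary tokens are my assumptions; published variants differ in how they pool and normalize.

```python
import numpy as np

def embedding_average_similarity(sys_tokens, ref_tokens, emb):
    """Cosine similarity between the averaged word vectors of a generated
    response and a reference. `emb` maps token -> numpy vector."""
    def avg(tokens):
        vecs = [emb[t] for t in tokens if t in emb]
        if not vecs:
            return None
        return np.mean(vecs, axis=0)
    a, b = avg(sys_tokens), avg(ref_tokens)
    if a is None or b is None:
        return 0.0  # nothing comparable; an arbitrary fallback
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```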
So if we have no automatic metrics that adequately capture overall quality for natural language generation, what can we do instead? I think the strategy is often that you end up defining more focused automatic metrics to capture the particular aspects of the generated text you're interested in. For example, you might be interested in fluency, and you can compute that by running a well-trained language model over your text and taking the probability as a proxy for how well written, fluent, and grammatical it is. If you're particularly interested in generating text in a particular style, you can take a language model trained on a corpus representing that style, and then the probability tells you not only whether it's good text but whether it's in the right style. There are other aspects as well, like diversity, which you can capture pretty easily with simple statistics about how often you use rare words. For relevance to the input, you can compute a similarity score with the input. And there are simple things like length and repetition that you can certainly count. None of these tells you overall quality, but they're worth measuring (a small sketch of a couple of these focused metrics follows below). So my main point is: yes, we're in a really difficult situation with NLG evaluation, and there's no single metric that captures overall quality. But if you measure several of these focused things, they can certainly help you track important aspects that you should know about.
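Here's a minimal sketch of two such focused metrics: a distinct-n diversity score and a per-token log-likelihood fluency proxy. The function names and the exact definitions (e.g., pooling n-grams across all outputs) are my illustrative choices; variants of both appear in the literature.

```python
def distinct_n(token_lists, n=2):
    """Diversity: fraction of all generated n-grams that are unique.
    token_lists: a list of tokenized outputs, each a list of tokens."""
    ngrams = [tuple(toks[i:i + n])
              for toks in token_lists
              for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

def fluency_score(token_log_probs):
    """Fluency proxy: average per-token log-likelihood of the generated
    text under a separate, well-trained language model."""
    return sum(token_log_probs) / len(token_log_probs)
```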
So we've talked about how automatic evaluation metrics for NLG are really tough; let's talk about human evaluation. Human judgments are regarded as the gold standard, but we already know that human evaluation is slow and expensive. Are those the only problems with human eval? Suppose you do have the time or money you need to run human evaluations; does that solve all your problems? My answer is no, and this comes from personal experience. Conducting human evaluation is itself very difficult to get right, partially because humans do a lot of weird things. Unlike an automatic metric, humans are inconsistent and can be illogical; sometimes they get bored of your task and stop paying attention; they can misinterpret the question you asked; and sometimes they do things they can't really explain. As a case study, I'm going to tell you about a project I did building chatbots, where the human evaluation turned out to be the hardest part of the project. I was building chatbots for the Persona-Chat dataset, in particular investigating controllability: trying to control aspects of the generated text, such as whether it repeats itself and how generic it is, the same problems we noted before. We built models that control, for example, the specificity of what we say and how related it is to what the user said. Here you can see that our partner said something like, "Yes, I'm studying law at the moment," and we can turn a control knob to make the bot say something very generic like "Oh," or something completely bonkers that's just all rare words, and there's a sweet spot in between, where it says, "That sounds like a lot of fun. How long have you been studying?" Similarly, we have a knob we can turn to determine how semantically related our response is to what the partner said. So that's interesting; it's a way to control the output of an NLG system. But what I really want to tell you about is how difficult the human evaluation was. We had these systems that we wanted to evaluate using human judgments, so the question is: how do you ask for quality judgments? You can ask simple overall-quality questions, like "How well did the conversation go?" or "Was the user engaging?", or comparative ones like "Which of these users gave the best response?". We tried a lot of them, and there were major problems with all of them. These questions are necessarily very subjective, and different respondents have different expectations, which affects their judgments. For example, if you ask, "Do you think this user is a human or a bot?", the answer depends entirely on the respondent's knowledge of bots, or their opinion of what bots can do. Another failure mode is catastrophic misunderstanding of the question. When we asked, "Was this chatbot engaging?", one respondent said, "Yup, it was engaging because it always wrote back," which clearly isn't what we meant; we meant an engaging conversation partner, but they took a very literal reading of "engaging". The underlying problem is that overall quality depends on many factors, and it's pretty hard to find a single question that captures overall quality. So we ended up breaking quality down into many more factors. The way we saw it, you have overall measures of chatbot quality, such as how engaging it was, how enjoyable it was to talk to, and how convincingly human it was. Below those, we broke quality into lower-level components: Were you interesting? Were you showing that you were listening? Were you asking enough questions? And below those were the controllable attributes, the knobs we were turning, and the goal was to figure out how those attributes affected everything above. We had a bunch of findings, and maybe the two I'd highlight are these. The overall metric of engagingness, meaning enjoyment, was really easy to maximize: our bots got near-human performance on engagingness. But the overall metric of humanness, the Turing-test metric, was not at all easy to maximize: all of our bots were way, way below humans on humanness. So we were not remotely convincing as humans, and this is interesting: we were as enjoyable to talk to as humans, but clearly not human. Humanness is not the same thing as conversational quality. And one of the interesting things we found in this study, where we not only evaluated our chatbots but also had humans evaluate each other, was that humans are sub-optimal conversationalists: they scored pretty poorly on interestingness, fluency, and listening.
They didn't ask each other enough questions, and that's partly why we managed to approach human performance on enjoyableness: we just, for example, turned up the question-asking knob and asked more questions, and people responded really well to that, because people like talking about themselves. So I think this is interesting, because it shows there's no one obvious question to ask. If you had assumed the one question to ask was clearly engagingness, or clearly humanness, you would have gotten completely different reads on how well we were doing, whereas asking multiple questions gives you more of an overview. I'm going to skip this because there's not a lot of time. Okay, so here's the final section: my wrap-up thoughts on NLG research, the current trends, and where we're going in the future. Here are three exciting current trends I'd identify in NLG; of course, your mileage may vary, and you might find other things more interesting. The first is incorporating discrete latent variables into NLG. You should go check out the slides I skipped, because there were some examples of this, but the idea is that for some tasks, such as storytelling or task-oriented dialogue, where you're trying to actually get something done, you probably want a more concrete, hard notion of the things you're talking about: entities, people, events, negotiation, and so on. So there's work on modeling these discrete latent variables inside these continuous NLG methods. The second is alternatives to strict left-to-right generation, and I'm really sorry [LAUGHTER] I skipped over so many things. There's interesting recent work on generating text in orders other than left to right: parallel generation, writing something and then iteratively refining it, and top-down generation, especially for longer pieces of text, where you might decide the content of each sentence separately before writing the words. The third is alternatives to maximum likelihood training with teacher forcing. To remind you, maximum likelihood training with teacher forcing is the standard method of training a language model that we've taught in this class so far; there's interesting work on more holistic, sentence-level rather than word-level objectives. Unfortunately, I ran out of time with this slide and didn't get the references in, but I'll add them later and it will be on the course website, so you can check them out. Okay, so, as an overview: NLG research, where are we and where are we going? My metaphor is that about five years ago, NLP-plus-deep-learning research was a kind of Wild West. Everything was new, and NLP researchers weren't sure what the new research landscape was, because neural methods had changed machine translation a lot, and looked like they might change other areas, but it was uncertain how much. These days, five years later, it's a lot less wild.
Things have settled down a lot; standard practices have emerged, and sure, plenty is still changing, but there are more people in the community, there are more standard practices, and we have tools like TensorFlow and PyTorch, so you don't have to write your own gradients anymore. So things are a lot less wild now, but I'd say NLG remains one of the wildest parts, and part of the reason is the lack of evaluation metrics, which makes it so difficult to tell what we're doing. It's quite hard to identify which methods are working when we don't have metrics that clearly tell us what's going on. Another thing I'm really glad to see is that the neural NLG community is rapidly expanding. In the early years, people were mostly transferring successful NMT methods to various NLG tasks, but now I'm seeing increasingly inventive NLG techniques emerging that are specific to non-NMT generation settings. Again, I urge you to go back to the slides I skipped. I'm also seeing more neural NLG workshops and competitions, especially ones focusing on open-ended NLG, the tasks we know are not well served by the automatic metrics that work for NMT: there's a neural generation workshop, a storytelling workshop, and various challenges where people enter, for example, their conversational dialogue agents to be evaluated against each other. I think these community-organizing workshops and competitions are doing a great job of organizing the community, increasing reproducibility, and standardizing evaluation. So this is great, but I'd say the biggest roadblock to progress is definitely still evaluation. Okay, so the last thing I want to share with you is eight things I've learned from working in NLG. The first: the more open-ended the task, the harder everything becomes. Evaluation becomes harder, defining what you're doing becomes harder, telling whether you're doing a good job becomes harder. For this reason, constraints are sometimes welcome: if you decide to constrain your task, it's sometimes easier to complete it. The second: aiming for a specific improvement can often be more manageable than aiming to improve overall generation quality. For example, if you decide you want to increase your model's diversity, to say more interesting things, that's easier to achieve and to measure than "improve overall generation quality," because of the evaluation problem. The third: if you're using a language model to do NLG, then improving the language model, that is, getting better perplexity, will probably give you better generation quality, but it's not the only way to improve generation. As we discussed before, other components besides the language model affect generation, and part of the problem is that they're not in the training objective. The fourth: you should look at your output, a lot. Partially because no single metric can tell you what's going on, it's important to look at your output a lot and form your own opinions.
It can be time-consuming, but it's probably worth doing; I ended up talking to these chatbots a huge amount during the time I was working on the project. Okay, almost done. Five: you need an automatic metric, even if it's imperfect. I know you already know this, because we wrote it all over the project instructions, but I'd amend it to: you probably need several automatic metrics. I talked earlier about tracking multiple things to get an overall picture of what's going on; the more open-ended your NLG task, the more likely you are to need several metrics. Six: if you do human eval, make the questions as focused as possible. As I found out the hard way, if you pose the question as a vague, overall thing, you're just opening yourself up to respondents misunderstanding you, and if they do, it's not their fault, it's yours, and you need to fix your questions. That's what I learned. Seven: reproducibility is a huge problem in today's NLP and deep learning in general, and the problem is only bigger in NLG; I guess that's another way it's still a Wild West. It would be really great if everybody publicly released all of their generated output when they write NLG papers. This is a great practice because if you release your generated output, then if someone later comes up with a great automatic metric, they can grab your output and compute the metric on it, whereas if you never released your output, or released only an imperfect metric number, future researchers have nothing to compare against. And lastly, my final thought about working in NLG is that it can be very frustrating sometimes, because things are difficult and it's hard to know when you're making progress, but the upside is that it can also be very funny. So, on my last slide, here are some bizarre conversations I've had with my chatbot. [LAUGHTER] Thanks. [LAUGHTER] All right, thanks.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2020_BERT_and_Other_Pretrained_Language_Models.txt
Okay. So I'm going to talk about BERT, along with some precursor work and some of the more recent advances that have happened in the last year. First, history and background. Everyone knows and loves word embeddings in NLP; they're the basis for why neural networks work for NLP at all. Neural networks operate in a continuous space of vectors and matrices, while text is a discrete space, so something needs to bridge the gap. It turns out the bridge is actually pretty simple: a lookup table from a discrete vocabulary to vectors that are learned discriminatively, end to end. Originally, as in Bengio's 2003 neural language model paper, these embeddings were trained discriminatively end to end; people would train language models and then use the embedding layer as pre-trained representations for other tasks, but they wouldn't use the rest of the language model, only the embedding layer. Then word2vec and GloVe came along, and people found a much cheaper, much more scalable way to train embeddings: you use the statistics of a corpus with what is essentially a linear model, so you don't have to compute the expensive feedforward layers you're going to throw away anyway, and you can scale up to billions of tokens on a single CPU. The problem, though, is that these word embeddings are applied in a context-free manner. As a simple toy example, the word "bank" in "open a bank account" and "on a river bank" gets the same embedding. People have tried things like word sense embeddings, where you learn a vector per word sense rather than per word. But the bank example is a bit of a toy: almost any word has a different meaning depending on its context. Even "open a bank account" and "I went to the bank" use somewhat different senses of "bank". So we really need a contextual representation: a representation of a word after it has been put into the context of the sentence we've seen it in. (A tiny sketch below shows the context-free limitation concretely.)
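Here's a toy illustration of the context-free limitation, under my own made-up mini-vocabulary: with a plain embedding lookup table, "bank" gets the identical vector in both sentences, no matter what surrounds it.

```python
import torch
import torch.nn as nn

vocab = {"open": 0, "a": 1, "bank": 2, "account": 3,
         "on": 4, "the": 5, "river": 6}
emb = nn.Embedding(len(vocab), 8)  # toy 8-dimensional embedding table

s1 = torch.tensor([vocab[w] for w in ["open", "a", "bank", "account"]])
s2 = torch.tensor([vocab[w] for w in ["on", "the", "river", "bank"]])

# Same row of the table is returned regardless of context:
assert torch.equal(emb(s1)[2], emb(s2)[3])
```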
For a history of contextual representations: the first big paper on this type of contextual representation was a 2015 Google paper called Semi-supervised Sequence Learning, by Andrew Dai and Quoc Le. It was actually very similar to the papers that came after it, though it didn't get as much attention, for various reasons. Basically, they had a classification task, such as sentiment classification on movie reviews, and a big corpus of movie reviews, and they asked: what happens if we take our existing LSTM model and, instead of just using pre-trained embeddings, which everyone had been doing since at least 2003, we pre-train the entire model as a language model and then fine-tune it for the classification task? They got pretty good results, but not stellar results. And now we know the reason they didn't get stellar results is that they didn't train on enough data: they pre-trained on essentially the same corpus they fine-tuned on, and they trained the same size of model, which we now know needs to be bigger. So this paper was a bit ahead of its time, partially because we didn't have as much compute back then, even though it was only about five years ago, and it would have been more expensive. Then in 2017, ELMo came out, from the University of Washington and AI2. They did something clever: they trained a language model on a big corpus, a billion-word corpus, and they trained a big model, an LSTM with 4,000 hidden dimensions, which is quite expensive. And they trained a bidirectional model, though it was only weakly bidirectional: they trained a left-to-right model and a right-to-left model and concatenated the two, and called the result contextual pre-trained embeddings. The idea behind ELMo is that it doesn't change your existing model architecture. You take whatever task-specific architecture you have, which for question answering might be some fancy model where you run an LSTM over the passage and over the question and attend from one to the other, and wherever you would have put in GloVe embeddings before, you now put in ELMo embeddings. This got state of the art on basically everything at the time: question answering, semantic parsing, syntactic parsing, because you could take any existing state-of-the-art model, plug in ELMo embeddings, and get state of the art. But the task-specific models themselves stayed fixed. After that, OpenAI published Improving Language Understanding with Generative Pre-training, commonly called GPT-1. They took a similarly large corpus, about a billion words, and trained a very large language model: a 12-layer transformer language model, which at the time was certainly the largest model of its kind that had been openly trained on that much data. When I first read it, I actually thought it was too big, that they were just showing how big a model they could train; but now we know that this depth was actually the crucial element. What they did was fairly simple.
They trained a language model, a very large one, and then fine-tuned it by taking the representation at the last token and fine-tuning for a classification task: is this review positive or negative? And they got basically state of the art on lots of different classification tasks. But before I go into BERT, I'm going to take an aside on the Transformer, because that was the other big precursor that allowed BERT and GPT to work well. BERT and GPT both use a Transformer, which I'm sure you've learned about, so I won't go through all the details of the multi-headed attention and feedforward layers. The big reason this architecture took over is that it has two advantages versus the LSTM. One is that there's no locality bias: long-distance context has an equal opportunity with short-distance context, which turns out to matter. For ordinary language understanding, the locality bias of an LSTM is generally considered a good thing, because local context is usually more relevant than long-distance context. But the way GPT and BERT and other models work is that they concatenate contexts. Suppose you have a model for "does sentence one entail sentence two?". Historically, meaning before GPT, you would encode both sentences separately, say with an LSTM, and then do attention from one to the other. With a transformer, you can put them into the same sequence, give them separate segment embeddings or add a separator token, and the model learns how to handle it: a token can attend within its own sentence locally, but it's just as easy for it to attend all the way into the other sentence. So you can pack everything into a single sequence and let everything be learned, rather than building it into the model architecture, and that ends up being a pretty important simplification of these models. The other advantage is computational. With LSTMs, say this is a batch and these are the words in it, so two sentences with four words each: every time step has to be computed one at a time, so you effectively only get a batch size of two. On modern hardware, meaning TPUs and GPUs, the bigger the matrix multiplication the better, and you want all the dimensions to be big; even with big hidden layers, your batch dimension stays small unless you use a huge batch, which becomes too expensive for long sequences. With transformers, because attention is computed layer-wise over all positions at once, the effective batch is the total number of words: with 512-word sequences and 32 sentences, it's 32 times 512. So you get these huge matrix multiplications and can take full advantage of modern hardware. That's why the transformer has taken over, because of these two things, and that's why it was used in GPT and why it's used in BERT. (The sketch below shows the shapes involved.)
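Here's a small shape-level sketch of both points, using single-head, unparameterized self-attention for brevity (a real transformer adds learned projections, multiple heads, and feedforward layers). The specific sizes are just the ones from the example above.

```python
import torch

batch, seq_len, d = 32, 512, 768
# Two "sentences" packed into one sequence per example, so a token in
# sentence B can attend into sentence A as easily as to its neighbors.
x = torch.randn(batch, seq_len, d)

# All positions attend to all positions in one big matmul; the effective
# number of parallel predictions is batch * seq_len = 32 * 512 positions.
scores = x @ x.transpose(1, 2) / d ** 0.5  # (32, 512, 512)
attn = torch.softmax(scores, dim=-1)
out = attn @ x                             # (32, 512, 768)
```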
So now I'm going to talk about BERT. The problem with the previous models, ELMo and GPT and those before them, is that the language models used only left context or only right context, or a concatenation of the two, but language understanding is bidirectional. There's a clear mismatch: why did everyone train unidirectional models, which can only see to the left or only to the right, when we know that to understand language you need to look in both directions? There are two reasons. One is that language models had historically been used as features in other systems. The most direct application of language modeling is something like predictive text, literally predicting the next word; the more common applications were inside machine translation or speech recognition systems, where you have translation features or acoustic features and add a language model that scores the probability of the sentence. For that, you want a well-formed probability distribution. For these pre-trained models we actually don't care about that, but people had become fixed on the idea that a language model must define a proper probability distribution, even though here it doesn't matter. The other, bigger reason is that words can see themselves in a bidirectional encoder. What this means: when you build a representation incrementally, you have your input and your output, always offset by one. You feed in the start-of-sentence token and predict the first word, feed in the first word and predict the second word, and so on. So you can encode the sentence once and predict every word in it with a unidirectional model, which gives good sample efficiency: with a 512-word sequence, you don't want to encode it and predict only one word, because that would take roughly 500 times as much compute for the same number of predictions. But if you trivially made a bidirectional LSTM or transformer, then after the first layer every position can see itself: for the word "open" there's a path from the output back down to "open" in the input, so it's trivial to "predict" a word that is also present in the input. There's no actual prediction going on. So the simple solution, which is basically the whole crux of BERT, is that instead of training a normal language model, you mask out k percent of the words and predict those: "the man went to the [MASK] to buy a [MASK] of milk." Now you can run a bidirectional model on that, and because the masked words aren't in the input, you can't cheat. The downside is that you're not getting as many predictions per sentence: you only predict 15% of the words instead of 100%.
But the upside is that you get a much richer model, because you're seeing in both directions. This value of k is a hyperparameter that has to be chosen empirically; we used 15%, and it turns out that's close to optimal. We, and also people since then, have done more thorough ablation experiments and found that 15% is good. The reason one percentage works better than another is a trade-off. If you masked, say, 50%, you'd get far more predictions, but you'd also mask out most of your context, and without context you can't learn a contextual model. If you masked only one word per sequence, that might be optimal in some sense, but you'd have to do far more data processing per prediction, so it would be far more expensive to train, and we know these models are basically compute-bound: given enough data, you can train them almost indefinitely and they keep improving. So it's really a trade-off between those two considerations. One other little detail in BERT, which turned out to be super important, is that because the [MASK] token is never seen at fine-tuning time, instead of always replacing a selected word with [MASK], we sometimes replace it with a random word and sometimes keep the same word. So 10% of the time we'd turn "went to the store" into "went to the running," and we wouldn't tell the model which case was which: it just has to predict what the word should be, without knowing whether the visible word is right. Because 10% of the time the visible word is correct and 10% of the time it's random, the model has to maintain a good representation of every word, since any word might be the one it's asked about, and it must look at each word and decide whether it's the right one. We could potentially have gotten away with no [MASK] token at all, just replacing with a random word half the time and keeping the same word half the time; but then we'd be corrupting a lot of our data, and a wrong word in the input might mess up the predictions for other words nearby. With the [MASK] token, the model at least knows that position doesn't hold the right word, so it doesn't use it as context. (A small sketch of this masking procedure follows below.)
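Here's a minimal sketch of the 15% / 80-10-10 masking procedure on a tokenized sentence. The function name and the token-list interface are my assumptions; real implementations work over WordPiece ids and typically cap the number of predictions per sequence.

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_rate=0.15):
    """BERT-style masking: select ~15% of positions as prediction targets.
    Of those, 80% become [MASK], 10% a random vocabulary word, and 10%
    stay unchanged. Returns the corrupted tokens and the target positions."""
    out, targets = list(tokens), []
    for i in range(len(tokens)):
        if random.random() < mask_rate:
            targets.append(i)          # the model must predict tokens[i] here
            r = random.random()
            if r < 0.8:
                out[i] = mask_token    # 80%: replace with [MASK]
            elif r < 0.9:
                out[i] = random.choice(vocab)  # 10%: random word
            # else: 10%: keep the original word
    return out, targets
```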
The other detail of BERT, which subsequently may not have turned out to be that important, is that a lot of the tasks we care about aren't just about words; we want to predict the relationship between sentences. For question answering in particular, we have a query, which is generally a sentence, and then an answer, which is a paragraph or a sentence or a document, and we want to say: does this answer the question? So we wanted some pre-training task that makes a sentence-level prediction rather than just a word-level prediction, and we needed it to have an effectively infinite amount of data — or rather, data we can just generate, so it doesn't need to be an annotated task. The way we did this is a next sentence prediction task: we took two sentences, and 50% of the time they're consecutive sentences from the same document, 50% of the time the second one is from a random document, and the model says whether it's the real next sentence or not. So if you have "The man went to the store. He bought a gallon of milk," that is the next sentence. If you have "The man went to the store. Penguins are flightless," that's not the next sentence. So basically, we're forcing the model at pre-training time to look at the full sentences and make some sort of sentence-level prediction, and we hope this generalizes to something like question answering, where you have a question and answer as Sentence A and Sentence B. In terms of our input representation, it looks pretty similar to a normal transformer, but we have these additional embeddings, called segment embeddings. In a normal transformer, you take your input and do WordPiece segmentation: you apply this unsupervised splitting of words into pieces that are often roughly morphological, though not always. With that, there are no out-of-vocabulary tokens — at the very least, you can always fall back to splitting into characters — and we used a 30,000-piece vocabulary. So we have our token embeddings. Then we have the normal position embeddings at the bottom: because transformers, unlike LSTMs, don't have any built-in sense of position, the way to encode it is to learn an actual embedding for every position. These are called absolute position embeddings; there are other techniques nowadays. And then you have the segment embedding, which says: this is Sentence A or this is Sentence B. And this generalizes to broader contexts. Imagine you're trying to do web search: you might say, here's my query, here's the title, here's the URL, here's the document content. You can pack these all into a single sequence and just give them different segment embeddings, or type embeddings, so you're able to represent everything in the same single sequence, differentiated only by this one embedding that's different — and this is all learned, of course. This is in contrast to the older style, where you would typically have a different encoder for every part — one for the query, maybe one for the title, one for the URL — but in this case it's all just a single sequence.
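As a rough sketch of that input representation: the three embedding tables are simply summed position-wise before the transformer layers. The sizes below are illustrative (BERT-Base uses hidden size 768 and a ~30,000-piece vocabulary), and this is an assumption-laden sketch, not the released implementation.

```python
import torch
import torch.nn as nn

class BertStyleEmbeddings(nn.Module):
    """Token + learned absolute position + segment (A/B) embeddings, summed."""
    def __init__(self, vocab_size=30000, hidden=768, max_len=512, num_segments=2):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        self.pos = nn.Embedding(max_len, hidden)       # absolute positions
        self.seg = nn.Embedding(num_segments, hidden)  # Sentence A vs. Sentence B

    def forward(self, token_ids, segment_ids):
        # token_ids, segment_ids: (batch, seq_len)
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return self.tok(token_ids) + self.pos(positions) + self.seg(segment_ids)
```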
So we trained on about a 3 billion word corpus, which was large at the time — now it's not actually that big compared to what people are training on. We used a batch size which was also large, and we trained for about 40 epochs of the data. And we trained two models, which are still relatively large: one of them is 12 layers with hidden size 768, and the other is 24 layers with hidden size 1024. At the time, this was basically one of the largest models that had been trained, although now people are training models that are, I think, 30 times or more bigger in the more recent papers — things have kind of exploded in terms of compute in the last, I don't know, three years. So the fine-tuning procedure is pretty straightforward. We pre-trained this model on these two tasks, and now we have an input sequence with multiple sentences and different type embeddings, and we feed it through our transformer model. And we have this special embedding, which I don't think I mentioned yet: it's learned via the next sentence prediction task, and it's then used for classification tasks. But it's not just that we're using this embedding — we're fine-tuning the entire model, right? So it's really not that this embedding is intrinsically useful, or that the word embeddings are intrinsically useful; it's that the weights inside the entire 12- or 24-layer model are useful, and by fine-tuning the entire model, you can pick out the salient parts that are important for some downstream task. So here's the task-specific fine-tuning. If we have a single-sentence classification task — say sentiment analysis, where we ask, is this a positive or negative review? — we encode our sentence with the BERT model, and the only parameters we add are this final output matrix (see the sketch below). If we have three classes — positive, negative, or neutral — that matrix might be roughly 1,000 by 3, so just 3,000 new parameters on top of 300 million old ones, and we jointly train all 300 million plus 3,000 for the downstream task. But because the vast majority of them are pre-trained, we can adapt with only a few thousand labeled examples. Similarly, for sentence-pair classification — say, does this sentence entail this other sentence? — we concatenate Sentence A and Sentence B with different type embeddings, predict from the same special token, and fine-tune the entire thing, so again very few additional parameters. For span prediction tasks, you just add a start-of-span and end-of-span predictor, so you're only adding a few thousand new parameters. And for tagging tasks like part-of-speech tagging, you have a single sentence and you predict, for every token — or every token except the continuation WordPieces, which is kind of a preprocessing detail — what its part of speech is.
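Here's a hedged sketch of that single-sentence classification recipe: the only new parameters are one small output matrix applied to the vector at the special first token, and everything old and new is trained jointly. `pretrained_encoder` is a stand-in for the pre-trained BERT body, not a real API.

```python
import torch.nn as nn

class BertClassifier(nn.Module):
    def __init__(self, pretrained_encoder, hidden=768, num_classes=3):
        super().__init__()
        self.encoder = pretrained_encoder           # ~300M pre-trained parameters
        self.head = nn.Linear(hidden, num_classes)  # the only new parameters (~hidden * 3)

    def forward(self, token_ids, segment_ids):
        hidden_states = self.encoder(token_ids, segment_ids)  # (batch, seq_len, hidden)
        cls_vec = hidden_states[:, 0]   # vector at the special first token
        return self.head(cls_vec)       # class logits; fine-tune ALL parameters jointly
```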
So this is really why — BERT itself is, I would say, an incremental improvement over what already existed: it took transformers, ELMo, and GPT, really these three ideas, and made a pretty simple change on top of them. But the reason it had such a big impact is not just the numbers that I'll show in a few slides. It's really this recipe aspect. With ELMo, there was no fundamental change to how you built systems — it was just contextual embeddings. A lot of deep learning historically has been the fun of building new models, right? You have all of these components that are kind of like Lego blocks — attention layers, feed-forward layers, layer normalization, LSTMs, et cetera — and for each new task you figure out how to glue them together in the way that's best. So ELMo didn't really change anything fundamentally: you just fed its embeddings into your existing model and got state of the art. For GPT-1, a lot of these tasks didn't really work, because it was a left-to-right language model: you can take the last token and predict a classification from it, but it didn't really make any sense to predict, say, part-of-speech tags, because for the first word there's no context, and it makes no sense to predict where there is no context. With BERT, the reason it had such high impact was that it simplified things. And I'm not saying that's necessarily a good thing or a bad thing. As a researcher, kind of ironically, the ultimate goal of research is often to research yourself out of a job. If a physicist — and I'm not saying BERT had anywhere near this impact — came up with a grand theory of physics, that would be the greatest moment in physics, but it would also eliminate a lot of research. That's kind of the end goal of research: solve the problem. So BERT is a step where, across all of these different concepts and problems, it kind of killed a lot of the need to do model architecture design, which is unfortunate in a way, because that's really fun. So that's the impact — I'm not going to say whether it's a good or bad impact; it's just the objective impact. So many things that used to be about designing fun models are now: fit it into one of these four recipes, and it just works, for all of these different tasks. So, in terms of actual empirical results — these are from the time the paper was published; of course, things have gotten better since then. This GLUE benchmark is a set of tasks that are all similar in that they're sentence-pair or single-sentence classification tasks. So MultiNLI would be something like: premise, "Hills and mountains are especially sanctified in Jainism," and hypothesis, "Jainism hates nature." That's a contradiction, right? In order for an NLP model to answer this correctly and give us the label of contradiction, it needs to know that hills and mountains are part of nature, that sanctifying something is positive, that hating something is negative, and it has to do all of this reasoning — so it's pretty complicated reasoning. Similarly, for CoLA, you have to judge "the wagon rumbled down the road" versus "the car honked down the road."
And one of those, to a native English speaker, sounds totally fine, while the other one sounds weird. Neither of these tasks has very much data, so you have to be able to generalize from only a few thousand examples. So, BERT-Base, which is the same size as the OpenAI model, significantly beat OpenAI GPT, which was the previous state of the art. And then BERT-Large, which was bigger, of course got better results. But what was more interesting than simply getting better results was that it got better results across the board, including on the very, very tiny datasets with only a few thousand examples. Because historically, there were rules of thumb: given some number of training examples, how do you choose the model size that's optimal? And if you don't do pre-training, making the model bigger eventually gives you worse results, because your model overfits your training data. With pre-training, you basically only ever do about one pass over the fine-tuning data anyway, so there seems to be almost no limit to how big you can make the model and still get good results, even with a tiny amount of fine-tuning data. That's really one of the big takeaways. So, SQuAD 2.0 — the reason these rankings look lower is that I took the screenshot significantly after submission, once a bunch of other people had submitted systems. This is a question answering dataset, so it would be something like: "What action did the US take that started the second oil shock?" In this case, there is no answer in the passage, so the model has to either predict the answer span or say there's no answer. BERT beat the previous state of the art, at the time it was submitted, by about six points, which was a pretty big gain. The leaderboard has now gone past human level, but at the time, the gap was large. So I'll go through some ablation experiments. In this one, there are four systems being compared, all BERT-Base-sized models. The blue line is the BERT-Base model itself. The red line is if we take away the next sentence prediction. In our case — even though people have subsequently said they don't think it's important — we actually did measure it, and it did seem important to have the next sentence prediction task, especially for the question answering task, which is this one; but it seems to help at least a little bit on all four. So there does seem to be value in learning a model of the relationships between sentences. Then this one makes an apples-to-apples comparison between OpenAI's GPT-1 and BERT. BERT-Base was the exact same size as GPT-1, but it was trained on more data, so to make it a fair comparison, I basically retrained my own implementation of OpenAI's GPT-1, which is this yellow line.
And we can see that on some of the tasks it's not that far off — although this is actually a pretty big gap; this drop is something like four points, which is a lot. But on some tasks, like SQuAD and MRPC, it was way worse. For SQuAD, that makes sense, because SQuAD is a span labeling task, and if you only have left context, then words at the beginning have basically no context, so you're asking the model to do span labeling on words with almost no context. It really doesn't make any sense, so of course it's going to do much worse. To make it fair, we also added an LSTM on top of the left-to-right model, trained from scratch. This helps a little on some tasks, but on others it doesn't: on SQuAD it helps, because now you have bidirectional context, but on MRPC, which is a very small task with only about 3,000 labeled examples, it doesn't help at all. So this just shows that the masked language model and the next sentence prediction are both important — especially the masked language model. The other ablation: when we apply the masked language model, we're only predicting 15% of the words in the sentence, whereas a left-to-right language model predicts every single word, conditioning on all the words to the left. So one question might be: does this make it take longer to converge, even though we know it eventually converges at a much better point? If you have a limited training budget, is it better to do a left-to-right model? And we see that [NOISE] it's true that at the very beginning, say at epoch one, the left-to-right model does better, because it's doing so many more predictions. But very soon after, because the bidirectionality is so important, the masked model starts to take over. So it's basically better from almost the start to use bidirectionality: it takes longer to converge, but the point it converges to is of course much better. And then finally, in this ablation, we can see that going from a smaller model of 100 million parameters to 300 million helps a lot, which isn't surprising. The more surprising thing is about these two curves — you shouldn't compare the curves to each other; the point is to look at each curve as a function of the number of parameters. One of these tasks has only 3,000 labeled examples, and the other has 400,000 labeled examples, and in both cases the curves look very similar. That's surprising because the rule of thumb — that you're going to overfit your data if you only have a few labeled examples — turns out not to really be true anymore. And these curves keep going up: in subsequent papers, which we'll talk about, people have gone from this big 300-million-parameter model up to 11 billion parameters and still seen similar behavior — curves still going way up, state-of-the-art results — which is kind of crazy, because now we know there's almost no limit.
So another thing I want to talk about, before I get to what's happened since BERT, is the release itself. Even though BERT was in some ways very simple — which is not a bad thing — it was very successful immediately. Part of that is the Google brand, and it got a cute name, and stuff like that. But I also spent a lot of thought and time on the open source release, in particular looking at other open source releases and figuring out what people didn't like about them. I think this is important [NOISE] when you're working in industry and trying to release something. So I've listed the things here that I thought mattered for why it was successful compared to other releases. I'm not trying to call anyone out just to be mean, but the OpenAI GPT-1 release was really not very good — and they're aware of this, because the GPT-2 release was very good. The GPT-1 code was very hard to run: it was not commented, the TensorFlow code was very non-idiomatic — it worked fine, I replicated it, but it used all sorts of weird stuff — the Python code was weird, there were no comments, and there were basically no instructions. Other codebases are kind of too big: a team says, we want one unified codebase for our entire language team, and they put their release up as part of that, and people don't really like that either. So I was very insistent that we do a minimal release: we're just going to release BERT, it's not going to be part of anything, there are not going to be any external dependencies, and it's going to be very well commented. It was also easy to drop in just the modeling part, or just the tokenization part, or just the front end that runs the training loop — these were all separated out. I think because of that, people started using it much quicker. Of course, all the publicity helped, but I think it could easily have been less successful if the release had been done in a different way. So that's just some advice. So now I'm going to talk about five models that have come out since BERT and improved on it in various ways. There have been more than five, but I'm highlighting these five because I think they're interesting. A lot of them involved Google in some way — many were actually done by interns at Google from various universities, supervised by Google researchers and using Google compute. The reason a lot of them came from Google is that, frankly, other than Facebook, Google, and Microsoft, there are not many companies that have the resources to train these huge state-of-the-art models, so almost by necessity it's going to come from one of these labs. So the first one is RoBERTa.
And this is probably the one with the least new stuff — it's really just careful engineering. This was the University of Washington and Facebook, and it came out not that long after BERT. What they showed was that BERT was really undertrained: even on the same amount of data — I did 40 epochs over the data, but if you do something like 200 epochs, you get significantly better results. So basically, they trained more epochs on the same data, and they also showed that more data helps, which is also not super surprising. They made a couple of tweaks to the masking and pre-training, and they were able to get state-of-the-art results, which is cool. But that was a pretty straightforward paper. So the next one is XLNet, which was done by some interns from CMU while they were at Google Brain, and this one actually had some really cool changes. One of them is that they used Transformer-XL, which was a precursor done by the same people when they were working on language modeling tasks rather than pre-training. One of the big innovations of Transformer-XL is the idea of relative position embeddings. With absolute position embeddings, the problem is that every word gets a label like: this is word four, this is word five, this is word six. Those embeddings do generalize to some extent, but in practice there's a quadratic number of relationships to learn — how does word 83 relate to word 76? Once you get to sequences of 500 or 1,000 words, you have on the order of 1,000 squared total relationships: you have to learn how word 997 relates to whatever, right? That's obviously not optimal at large sizes. With relative position embeddings, you instead ask: how much should the word "dog" attend to the word "hot," and how much should it attend to whatever the previous word is? These are linear at first, but then you combine them and get a nonlinear contextual representation, and you do this over many layers — so then you're asking, how much should this contextual representation of "dog" attend to the previous word — and you can build up from there. This generalizes much better to long sequences, so it's a cool innovation. The other innovation, which is specific to pre-training and not just the model itself, is the idea of permutation language modeling. This is a little bit hard to explain — the paper explains it very formally — but basically, there's a trick. In a left-to-right language model, every word's prediction is conditioned on the words to its left. But instead of predicting the words in left-to-right order, you can take any permutation: I'm going to predict the first word, then the third word, then the second word, then the fourth word. That's a totally valid way to factorize, and you still get a well-formed probability distribution, because you're still predicting one word at a time, given some permutation of the input. And with transformers and attention, you can actually do this very efficiently, just by masking out your attention probabilities: for every sentence, you can sample a single permutation.
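A minimal sketch of how that can be implemented as an attention mask — illustrative only, and it glosses over XLNet's actual two-stream attention: sample one factorization order, then let position i attend to position j only if j comes earlier in that order.

```python
import torch

def permutation_attention_mask(seq_len):
    """Sample one factorization order and return a (seq_len, seq_len) boolean
    mask where mask[i, j] is True if token i may attend to token j,
    i.e. j precedes i in the sampled permutation."""
    order = torch.randperm(seq_len)        # e.g. predict word 1, then 3, then 2, ...
    rank = torch.empty(seq_len, dtype=torch.long)
    rank[order] = torch.arange(seq_len)    # rank[t] = step at which token t is predicted
    return rank.unsqueeze(1) > rank.unsqueeze(0)

# Each token is still predicted one at a time given a valid "prefix" of the
# permutation, so the product of conditionals is a well-formed distribution.
mask = permutation_attention_mask(6)
```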
So now you can effectively train a bidirectional model. On average, every word is still only conditioned on half of the other words, but a given word will be conditioned on some words to its left and some words to its right — it'll be missing some words, but that's fine — so you get much better sample efficiency. I thought this was a really clever idea, and this was the main innovation of XLNet: they get better sample efficiency because they can do this random permutation and take advantage of it. This won't work with LSTMs, because of their fixed ordering, but because of the way masking is done in transformers — it's just a mask on the attention — it ends up working very well. The numbers compared to RoBERTa actually ended up being pretty similar, though a lot of these things are hard to compare, because people change the dataset and change the size of the model, so it's hard to compare apples to apples. The two techniques ended up performing similarly, but I think XLNet had more innovation in terms of technique. So, ALBERT — "A Lite BERT" for self-supervised learning. This also had a couple of cool innovations. The idea here is really massive parameter sharing, the thinking being: if you share parameters, you're not going to get a better language model, but you're going to get better sample efficiency — less overfitting when you fine-tune. Because if you have a billion parameters and you fine-tune them on a dataset with, say, 1,000 labeled examples, you're still going to overfit very quickly, right? But if you have a similarly powerful model with far fewer parameters, you're going to get less overfitting. So the two major innovations. First: the word embedding table is big — it's the size of your vocabulary, the number of WordPieces, times the hidden size, so it's much bigger than a hidden layer. So the first thing is that they used a factorized embedding table. If they had a hidden size of 1,024, they only used something like a 128-dimensional input embedding, and then projected that up to 1,024 using a matrix. So instead of a 1,024-by-100,000 table, they have a 128-by-100,000 table plus a 1,024-by-128 matrix; you multiply the two matrices together, and effectively you have a 1,024-by-100,000 embedding matrix, but with far fewer parameters. That isn't exactly parameter tying, but it's parameter reduction done in a clever way.
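To pin down that arithmetic, here's a minimal sketch, with sizes taken from the numbers above (the 100,000-piece vocabulary is just the figure used in the example): the factorized table costs about 12.8M + 0.13M parameters instead of 102.4M.

```python
import torch.nn as nn

vocab_size, embed_dim, hidden = 100_000, 128, 1024

# Unfactorized: one big table, vocab_size * hidden = 102,400,000 parameters.
full_table = nn.Embedding(vocab_size, hidden)

# ALBERT-style factorization: small table plus a projection up to the hidden size.
small_table = nn.Embedding(vocab_size, embed_dim)      # 100,000 * 128 = 12,800,000
project_up = nn.Linear(embed_dim, hidden, bias=False)  #     128 * 1024 =   131,072

def embed(token_ids):
    # Composing the two maps acts like one vocab_size x hidden table,
    # but with roughly 8x fewer embedding parameters.
    return project_up(small_table(token_ids))
```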
The other innovation is cross-layer parameter sharing. This is simple, and it had been done in previous papers, especially the Universal Transformer. The idea is that you have a bunch of transformer layers — say 12 layers — and all 12 layers share the same parameters. So now you can have a much bigger model that has fewer parameters than BERT has, and you get less overfitting. And they got state-of-the-art results compared to XLNet and RoBERTa. But one important thing to keep in mind is that ALBERT is "lite" in terms of parameters, not in terms of speed. For the ALBERT model that's actually comparable to BERT in compute, the results were about the same, and it was actually slower. It's only when they started making models that were much bigger than BERT in terms of compute, while doing more parameter tying, that they started getting better results. The implication is that you can reduce the number of parameters, but still, [NOISE] nobody has figured out how to reduce the amount of pre-training compute that's required, which is kind of unfortunate. So the next one is T5 — "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer." This was a paper by Google Brain and other groups at Google, where they used a lot of compute and did tons of ablations on pre-training. Their goal wasn't to come up with some super clever new pre-training technique; it was to carefully ablate every aspect: How much does model size matter? How much does training data matter? How much does cleanness of the data matter? How much do the exact details of the pre-training objective matter — how you do the masking, how many spans you mask? And they also wanted to push the limits of size: what happens with 300 million, a billion, 10 billion parameters? So they did tons and tons of ablations, and they got state of the art on everything — and they're still state of the art on pretty much everything. And the results were a little bit bleak, [LAUGHTER] in the sense that nothing really mattered much. It wasn't that BERT did everything perfectly; it was that the details don't matter. You can mask 20% or 25%; you can use this fine-tuning recipe or that fine-tuning recipe. All that really matters is making the model bigger and training it on more data, and clean data. So it's a bit of a bleak paper if you're hoping there exists some pre-training technique that's super computationally efficient and still gets very impressive results — I'm not saying there isn't one, but most of this evidence points to not. The one newest paper that's maybe the most positive in this direction is a paper called ELECTRA. This was done by Kevin Clark, from here, with Google Brain, and it's a pretty clever idea: instead of training the model to generate the output, you train it as a discriminator.
So you do some masking, you have a small language model which proposes replacement words, and then you train the main model to discriminate whether each word is the original one or not. The idea is that you get better sample efficiency in pre-training, because you're making a prediction at every word — although, honestly, I don't know definitively why it would be that different from BERT, since in BERT you also don't always replace with the mask token; you sometimes replace with a random word. The biggest difference is that here, the replacements are contextual. When I did the masking in BERT and replaced with a random word, it was truly random, so most of the time it was completely trivial to tell that it was not the right word. In ELECTRA, they use an intentionally weak, but still non-trivial, language model to pick the replacement. So the replacement makes sense locally — "the chef ate the meal" — but a very strong model would not predict it. That's the idea: use a weak model to do the substitution, then train a strong model to detect it. The results — and this is a big table — are certainly positive relative to previous results in terms of compute. If we compare this row, which uses one-tenth the compute of BERT-Large, to BERT-Base, which also uses one-tenth the compute of BERT-Large, it really does do a lot better than BERT-Base. But in terms of state-of-the-art models: to get state of the art, or close to it, they basically need to spend as much compute as the state of the art — like 44x, 5.4x. So at scaled-down compute they were able to do better, but there's still a pretty big gap at the top, like four points. So it's positive, but it's certainly not a silver bullet showing that we can pre-train models much better for cheaper. So the last thing I want to talk about is how we actually serve these models. I've said they're incredibly expensive to train, and nobody has figured out how to make that faster, but they're being used all over the place, right? There are news stories — "Google has improved 10% of searches by language understanding: say hello to BERT" — and Bing says it has been applying BERT since April. So this is live in Google Search and in Bing search, and these are really low-latency services, with a few milliseconds of latency, serving billions of queries a day. How are they doing this? Is it just that Google and Microsoft are spending billions of dollars on hardware? They are — but not just for this. It would cost billions of dollars just to serve this if you were actually serving BERT itself, but we're not.
Instead, we're using model distillation. This has been around for a while — you call it distillation or model compression. One of the first papers was a model compression paper (I forget exactly what task it was for), and then Hinton's paper "Distilling the Knowledge in a Neural Network" is a more well-known paper on distillation. But in reality, the version we use at Google — and the version most people mean when they say model distillation for pre-trained language models — is a very simple technique, though it's easy to misinterpret what we mean. What we do is: we train the best state-of-the-art model we can afford — of course, we could always make it bigger, but we set some budget, say a day on some number of GPUs — and then we fine-tune it. So we get a model with maximum accuracy; that's our teacher model, and it's expensive. Then we have a large amount of unlabeled input. For most industry applications, you have unlabeled input: in search, you have "this is what the user searched for, this is what they clicked on" — that's how search engines are trained. So you take these unlabeled examples — you can get billions of them if you're actually running a real service — run them through your teacher to get pseudo-labels, and then train a much smaller model — much smaller meaning 50 or 100 times smaller — to predict your teacher's outputs. You can generally do this for most tasks pretty easily and get a huge 50-100x compression with no degradation. But the important thing to realize is that we're not compressing the pre-trained model itself — we haven't really had any luck doing that. You can't just take BERT and compress it into a smaller model which you can then fine-tune for all these other tasks. It's only after you've chosen the task, and after you've fine-tuned for that task, that we're able to do it.
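Here's a rough sketch of that recipe; every name in it is a placeholder rather than a real API: an expensive fine-tuned teacher pseudo-labels a large unlabeled pool, and a much smaller student is trained on those pseudo-labels with an ordinary supervised loop.

```python
def distill(teacher, student, unlabeled_inputs, train_fn):
    """Teacher-student distillation as described above (all names hypothetical).

    teacher          -- large fine-tuned model: expensive, maximum accuracy
    student          -- model 50-100x smaller, trained from scratch
    unlabeled_inputs -- millions/billions of real but unlabeled examples
    train_fn         -- an ordinary supervised training loop
    """
    # Step 1: pseudo-label the unlabeled pool with the teacher
    # (batched in practice; this is the expensive one-time cost).
    pseudo_labeled = [(x, teacher.predict(x)) for x in unlabeled_inputs]

    # Step 2: train the small student to reproduce the teacher's outputs.
    train_fn(student, pseudo_labeled)
    return student  # cheap enough to serve at a few milliseconds of latency
```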
To show some specific results: say we have a BERT-Large teacher on an Amazon book review sentiment task — this is from a paper that I forgot to cite, which my group published [inaudible] wrote. There are 50,000 labeled examples and 8 million unlabeled examples. You take a pre-trained BERT-Large, fine-tune it on the 50,000 labeled examples, and you get 88% accuracy, right? Now, suppose that instead of BERT-Large, you used a much smaller model — a quarter of the size, a sixteenth of the size, down to a hundredth of the size. Take the row that's a hundredth the size: if you were to pre-train it on the same Wikipedia plus book corpus, just like BERT, and then fine-tune it, you would get 82% accuracy, which is a lot worse — 6 points absolute, which is quite a big drop, right? But if you instead take the 88% teacher and label the 8 million unlabeled examples with it — these are of course held out; the numbers are test accuracy — and then train that same hundredth-size classification model, which says whether this is a good or bad review, on those 8 million pseudo-labeled examples, you get the same 88% accuracy as the teacher. That's really the cool thing about distillation: you can get models that are much smaller. But you still need to train the big model in the first place, so it doesn't help the training cost — in fact, it ends up being more expensive than just training BERT, because you then use the big model to label millions or billions of examples. But you can serve the small model at inference time for a tiny cost. So the question is: why does distillation work so well? The big hypothesis is that language modeling is kind of the ultimate NLP task. A perfect language model is also a perfect question answering system, a perfect entailment system, a perfect sentiment analysis system, co-reference, et cetera, because you could construct any of those as a language modeling problem. So when you're training a massive language model, you're learning many millions of latent features, which are effectively the same features you need for any other task. When you fine-tune on a more specific task, the fine-tuning is basically taking those latent features — which the system happened to learn and encode somewhere in its weights — and just tweaking them, which is why you can do it with roughly a single pass over the fine-tuning data. But once you've figured out which parts are important, there hypothetically exists a much smaller model that can still get the same representation and the same generalization. So when you label a bunch of examples with the fine-tuned model, the student can hone in on just the features that matter for this one task — and it can do that with a model a hundredth the size, as long as it has a lot of pseudo-labeled data. That's why it works. And the evidence for this is that self-distillation — distilling the pre-trained model itself — just doesn't work, so the student must really be learning only a subset of the features for most of these tasks. Basically, every task other than language modeling itself, we've been able to get distillation to work for, including tasks that seem really hard, like question answering and search. That does imply that language modeling — and language generation, which is basically just a form of language modeling — is fundamentally harder than language understanding. Which is not super hard to buy. Or at least, maybe it's not fundamentally harder, but given the state of the art, models for language understanding are doing something fundamentally simpler — presumably more like pattern recognition — than models that generate language. And that's why all of these classification models can be distilled so well. So, basically, in conclusion: [NOISE] pre-trained models work really well.
They're very expensive. We know how to solve this at inference time — we can do fast inference — but it's still unsolved how to make these models fast at training time. Moreover, a lot of the algorithmic ideas for making training more efficient don't seem to have much benefit, at least in terms of state-of-the-art results, and a lot of the choices don't really seem to matter that much. Competitors to the simple masked language model baseline are pretty hard to beat in an apples-to-apples comparison. So, yeah, it's a little bit unfortunate from a research perspective. It's definitely good for people who want to build NLP systems, especially domain-specific NLP systems — people who want to adapt to a medical domain, or people who only have a tiny amount of data, or people who want to do a startup and build an actual product with only a tiny amount of data. From that perspective, it's definitely good. But from the research perspective — as I was saying, the goal of research is to kind of research yourself out of a job — it's a little unfortunate. Still, I think there's a possibility of a breakthrough that shows how to train these computationally efficiently, or compelling results showing that you don't need such an absurdly large, absurdly expensive model to do well. Maybe it'll come from sparsity, or something like that, where you do have a really large model, it's just sparsely activated, [NOISE] using some efficiency tricks or whatever.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_19_Bias_in_AI.txt
Okay. Hi everyone, let's get started. So Chris is traveling this week, so he's not here. But I'm very excited to say that today we've got Margaret Mitchell, who is a Senior Research Scientist at Google AI. She's going to tell us about the latest work on defining, understanding, and improving the situation with bias in artificial intelligence. Margaret has a background working in NLP and deep learning, so I'm really interested to hear what she has to say today. Take it away. Great, thank you. Can you guys hear me okay? I'm not sure if this mic is picking up my voice — everything's cool? Okay, cool. So this work is the product of a ton of different people and collaborators that I've tried to put up here. Some students at Stanford, also Johns Hopkins, Google, Facebook, and Microsoft are all represented. Cool. So, for those of you who haven't seen this set of slides before: what do you see here? Just shout it out. Bananas. Bananas. Okay, what else? Stickers. Stickers. What else? [NOISE] Shelves. What else? Bunches of bananas. Bunches of bananas. What else? Yellow, ripe bananas. You said ripe bananas, good. So you can add [LAUGHTER] bananas with stickers on them. You can start doing embedded clauses — "bunches of bananas with stickers on them on shelves in the store" — to get kind of crazy. But we don't tend to say "yellow bananas," right? Given something like this, we might say "green bananas" or "unripe bananas." Given an image like this, we might say "ripe bananas" or "bananas with spots on them" — if you're me, you might say "bananas that are good for banana bread." But given an image like this, or something like this in the real world, we tend not to mention the yellowness. And the reason for this is that yellow is prototypical for bananas. The idea of prototypes stems from prototype theory, which goes back to the early '70s, coming out of the work of Eleanor Rosch and colleagues. It's the idea that there are stored, central, prototypical notions of objects that we access as we're operating throughout the world. There's some disagreement about whether these prototypes are actual exemplars of objects or something more like a distribution over what's likely, but there is general agreement that we have some sense of what's typical and atypical about the things in the world, and we tend to notice and talk about the things that are atypical. So here's a riddle that I heard in middle school — it worked a little bit better at that time; some of you might have heard it before. A man and his son are in a terrible accident and are rushed to the hospital in critical care. The doctor looks at the boy and exclaims, "I can't operate on this boy, he's my son." How could this be? [NOISE] Two dads? Two dads — or he has a mom who's a doctor, right? Otherwise known as a "female doctor," which might be contrasted with "doctor." In a study they did when they first put forward this riddle at Boston University, they found that the majority of test subjects overlooked the possibility that the doctor could be a she — and that included men, women, and self-described feminists.
So the point is that these ways of talking about things, and the assumptions we make, aren't necessarily something that speaks to negative intent; they speak to how we actually store representations in our minds and how we access those representations as we interact with the world. And this affects what a system can learn when it's learning from text. This is work from 2013, where they looked at what you would learn if you were learning from raw text — what would appear to be common in the world? They found that in this setup, something like murdering was ten times more likely than blinking. The reason is that people tend not to mention the typical things that go without saying: we don't tend to mention things like blinking and breathing, but we do mention atypical events like murder. And that affects the kinds of things a machine can learn from the text we put out in the world, because that text has been subject to all of these filtering processes that we have as humans before we communicate. This issue in particular is known as human reporting bias: the frequency with which people write about actions, outcomes, or properties is not a reflection of real-world frequencies, or of the degree to which a property is characteristic of a class of individuals; it says a lot more about how we're processing the world and what we think is remarkable. So this affects everything a system can learn. In a typical machine learning paradigm, one of the first steps is to collect, and potentially annotate, training data. From there, a model can be trained; from there, media can be filtered, ranked, aggregated, or generated in some way; and from there, people see the output. We like to think of this as a relatively straightforward pipeline, but at the very start, even before we collect the data — actually within the data itself — there's a host of different kinds of human biases: things like stereotyping, things like prejudice, things like racism, embedded within the data before we collect it. Then, as we collect and annotate data, further biases get introduced: things like sampling errors, confirmation bias, in-group bias and out-group bias — and I'll talk about these a little bit. Oh, and I should mention: feel free to ask questions as I go; it's totally fine to interact throughout. So here are some of the biases that I think are particularly important for work in AI and machine learning. There are hundreds you could go into, but these are some of the ones I've become most aware of working in this space, and I'll go through each of them a bit. I talked about reporting bias earlier, which affects what we can learn from text. Another example of a bias that really affects what we can learn from text is selection bias. A lot of the time, when we get data annotated, we use something like Amazon's Mechanical Turk, and the distribution of workers across the world is not anything like uniform — it's concentrated in India, the US, and then some in Europe. So this leaves out South America, it leaves out Africa, it leaves out a lot of China, and that affects the kinds of things we'll be able to learn about the world when we have things annotated.
Another kind of bias is out-group homogeneity bias, which is the tendency to see out-group members as more alike than in-group members. And this affects what people are able to describe and talk about when they're annotating things such as emotion. So, for example, we have these two adorable puppies on the left here, and they're looking at these four cats. These are all different black cats, different in all kinds of ways, but the two puppies look at the cats and see four cats that are basically the same. And it's kind of trivial to understand how that also extends to human cognition and how we process people. It's the sense we have that the cohort we're in — the people we interact with — those are the kinds of people who are nuanced, and everybody else is somehow less nuanced, has less detail to them. It's a trick our minds play on us to help us process the world, but it affects how we talk about the world, and it further affects how we annotate it. This leads to things like biased data representations. It's possible that you have an appropriate amount of data for every possible human group you can think of in your data, but it might be the case that some groups are represented less positively than others — if we have time, I'll go into a longer example of that. It also leads to things like biased labels. This is an issue that came up when we were getting annotations for the Inclusive Images competition, asking people to annotate things like "bride," "wedding," and "groom." We found that, given three different kinds of bride, wedding, and groom images, the ones that were more Western, European-American got the appropriate labels, and the ones that weren't just got more generic "person" kinds of labels — not able to actually tease out what's happening in these images. Compounding this issue are biases in interpretation, when the model outputs its decisions. One issue is confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms preexisting beliefs. A lot of the time, when we build end-to-end systems and try to test our hypotheses, we're really just testing toward things we want to be true, and analyzing the results in a way that helps confirm what we want to be true. Then there's overgeneralization: coming to a conclusion based on information that's too general or not specific enough. This happens a lot in the analysis of deep learning model results, where it's assumed there's some kind of general conclusion to take away when really it's just an effect of really skewed data. This is also closely related to overfitting, which is kind of the machine learning version of overgeneralization: you're still making predictions, but they're based on a small set of possible features, so you're not actually capturing the space of features that's correct for the desired output prediction. And then there's the correlation fallacy, which is confusing correlation with causation.
And this happens a lot, again, in talking about what machine learning models — deep learning models in particular — are learning. Just because things happen together doesn't mean that one is causing the other, and deep learning models don't directly tell you anything about causal relations. So it's easy to think that some output predicted on the basis of a correlation is actually something causal, and I'll talk about some examples of this too. A further issue is automation bias, and this really affects the machine learning models we put out in the world that then get used by people in systems like justice systems. Automation bias is the tendency to favor the suggestions of automated decision systems over the suggestions of another human — and this happens even in the face of contradictory evidence. So if a system is telling you, "this is the score, this is the risk of this individual," then we're more likely to think it's true, because it came out of a mathematical system; we automatically see it as something more objective, more mathematical — somehow more true than what humans say. And that's automation bias. So, rather than the clean, straightforward pipeline we like to imagine in machine learning, we have human bias coming in at the very start, in the data; human bias coming in during data collection and annotation; and then bias getting propagated further through the system as we train on that data, as we produce outputs based on that data, and as people act on those outputs. And this creates a feedback loop, where the things we output for people to act on then serve as further training data for the system, so you end up amplifying these implicit biases even further. This is known as a bias network effect — or bias "laundering," as I like to call it. So the message is that human data perpetuates human biases, and as machine learning and deep learning learn from human data, the result is a bias network effect. Now, I want to steer clear of the idea that if I say "bias," or someone says "bias," that equals bad — it's a little bit more nuanced than that. There are all kinds of things people mean when they're talking about bias, and even the same bias can be good in some situations and bad in others. There's bias in statistics and ML: we talk about the bias of an estimator, which is the difference between the predictions and the ground truth; we talk about the bias term in linear regression. We also have cognitive biases — I talked about those at the beginning — and not all of those are negative or have to be seen as negative. Optimism is a kind of bias that affects our worldview and the way we process things. And even things like recency bias and confirmation bias are just ways our minds handle the combinatorial explosion of all the different things that could be true in the world, reducing it to something tractable that we can operate with in the real world.
Um, so algorithmic bias is what a lot of people mean in headlines and whatnot when we're talking about bias, which is more about unjust, unfair, or prejudicial treatment of people that's an output of an automated decision system. And the focus here is really on unjust, unfair, or prejudicial treatment of people. So a lot of the work in this space right now is focusing on trying to understand what it means for an algorithm to be unjust, what it means for an algorithm to be unfair, and how we can handle this, how we can mitigate these issues, in order to keep developing technology that's useful for people without worsening social divides. Um, and I felt the Guardian put it really well a few years ago. They said, "Although neural networks might be said to write their own programs, they do so towards goals set by humans using data collected for human purposes. If the data is skewed, even by accident, the computers will amplify injustice." And it really keyed in on this amplify injustice idea. So let's talk about what that can mean. One of the avenues of deep learning research that's taken off in the past few years is predicting criminal behavior. How many of you are familiar with predictive policing? [NOISE] Okay, like half of the class. Okay. So in predictive policing, algorithms predict areas to deploy officers where crime is considered likely to occur. But the data that the models are trained on is based on where police officers have already gone and made arrests. So the systems are simply learning the patterns of bias that humans have, where they go and where they decide to look for crime, and then reflecting them back. So because this system homes in on some of the top spots where people have been arrested, notice that that's not the same thing as where crimes have been committed, right? It's where arrests have been made. It means that the other areas that might be explored for crime don't get explored at all. That worsens the situation. Some neighborhoods get really acutely focused attention on them, and that heightens the chances of serious repercussions for even minor infractions, and that means arrests. And that means a feedback loop of data saying that you will get an arrest in this place if you go there. Um, another related issue in this space is predictive sentencing. There was a really nice article that came out from ProPublica a few years ago discussing this. When most defendants are booked in jail, they respond to a questionnaire called COMPAS, and their answers are fed into a software system that generates scores corresponding to the risk of recidivism, that is, the risk of committing a crime again. The questions are used to gather data on the defendant's socio-economic status, family background, neighborhood crime, employment status, and other factors, in order to reach some prediction of an individual's criminal risk. But what ends up happening is that it ends up focusing on the key bias issues that humans have and propagating them back as something that looks like an objective score. So you're a lot more likely to be convicted of a crime if you're black than if you're white, even if you've committed the exact same crime.
And the system will pick up on this, and will reflect it back to say that people who are black are more likely to recidivate, more likely to commit a crime again. So this is an example of automation bias, preferring the output of a system even in the face of contradictory evidence, combined with overgeneralization, feedback loops, and the correlation fallacy, confusing things that occur together as being somehow causal. There's another area of research, and of startups, looking at predicting criminality in particular from things like face images. So there's a company out there called Faception. They are based in Israel, and they claim to be able to use individual images, with computer vision and machine learning technology, for profiling people and revealing their personality based only on their facial image, recognizing things like high IQ, white-collar offender, pedophile, and terrorist. Their main clients are Homeland Security and lots of other countries dealing with public safety issues. They've not published any details about their methods, their sources of training data, or their quantitative results. We know that in light of automation bias, people will tend to think it just works, even when it doesn't work well. But there was a paper that came out in a similar line, predicting criminality, or purporting to predict criminality, from individual face images, and that one had some results and some more details about the data that we could dig into to understand where these kinds of claims come from. This was an article posted on arXiv near the end of 2016. They said they were using fewer than 2,000 closely cropped images of faces, including wanted-suspect ID pictures from specific regions, and they claimed that even based on this very small training dataset, they were able to predict whether or not someone was likely to be a criminal with greater than 90 percent accuracy. And they got so lost in this idea that it's sort of funny to read, to just take a step back and realize what's actually happening. So for example, one of their really great, exciting claims was that the angle theta from nose tip to the two mouth corners is on average 19.6 percent smaller for criminals than for non-criminals. This is otherwise known as smiling. [LAUGHTER] Uh, and [LAUGHTER] you know, exactly the kind of images people would use when trying to put out wanted-criminal pictures, probably not really happy pictures. But you get so lost in the confirmation bias, you get so lost in the correlation and the feedback loops, that you end up overlooking these really obvious kinds of things. So that's an example of selection bias, experimenter's bias, confirmation bias, correlation fallacy, and feedback loops all coming together to create a deep learning system that people think is scary and can do things that it can't actually do. One of the issues with this was that the media loved it. It was all over the news, and there have been similar kinds of things happening again and again. Media wants to sell the story, and so it's part of our job as researchers, as people who work on this stuff, to be very clear about what the technology is actually doing, and to make a distinction between what you might think it's doing and what it's actually doing.
Um, so another issue that has come up recently is work claiming to be able to predict internal qualities, and specifically ones that are subject to discrimination and loss of opportunity. In particular, there was this work that came out that claimed to be able to predict whether or not someone was homosexual, just based on single face images. Now, it's important to know that the images they used in the study included images from dating websites where people self-identified as straight or gay, and identified whether they were looking for a partner who was straight or gay, and these became the sources of the training data. And still from this, uh. Oh! Before I go on, can you guys tell just from that what the issue might have been? Rainbows. [LAUGHTER] I don't think that there was actually anything about rainbows, but that's really unfortunate. [LAUGHTER]. [inaudible] Right. Yeah. So this has more to do with the presentation of the self, the presentation of the social self when you're trying to, for example, attract a partner on a website, and less to do with how you look day to day. And yet they went to these large conclusions that aren't supported at all by the data or by their study, things like: consistent with a prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology. Now, none of the authors were actually prenatal hormone theory specialists, you know. They have doctor in their name, so maybe that's a thing. Um, this was a Stanford professor, and I've presented this a few times at Stanford and gotten into some pretty harsh fights about it. So I'm ready if anyone wants to take me on. [LAUGHTER] But some of my colleagues and I decided we'd play around with this a bit, and what we found was that a simple decision tree (I'm assuming you guys know what a decision tree is; so, okay, cool) based on wearing makeup or wearing glasses got us pretty close to the accuracy reported in the paper. That says nothing about internal hormones, that says nothing about any of that, and it says a lot about the physical presentation, the things that are on the surface. It says a lot more about how people are presenting themselves than about what is happening internally. So the key thing that's recently been overlooked is that deep learning is sort of assumed to be somehow magically going beyond the surface level. But the point is that it's working on the surface level, and working well. And in the face of confirmation bias and other kinds of bias factors, it's easy to assume that something else is happening that's not. Without critical examination, for example simple baselines and simple sanity checks, these kinds of things can just be ignored and not noticed at all. So that's an example of selection bias, experimenter's bias, and correlation fallacy. Okay. So now I'm going to talk about measuring algorithmic bias. I've just said a lot about different kinds of biases that come in in the data, in the collection, and in the interpretation of the results. [NOISE] Let's talk about actually quantitatively measuring different kinds of biases. One of the key things that's emerged in a few different works, and that ties really nicely to a lot of fairness work, is this idea of disaggregated evaluation.
So in disaggregated evaluation, you evaluate across different subgroups, as opposed to looking at one single score for your overall testing dataset. Um, so, okay, you guys are probably familiar with the training-testing data split. You train on your given training data, you test on your given testing data, and then you report precision, recall, F-score, things like that. But what that masks is how well the system is actually working across different kinds of individuals and across different subgroups. And so one straightforward way to handle this is to actually evaluate with respect to those different subgroups, creating an evaluation for each subgroup-prediction pair. So for example, you might look at face detection for women and face detection for men, and look at how the error rates are different or similar. Another important part of this is to look at things intersectionally, combining things like gender and race at the same time, and seeing how the error rates change and how they're different across different intersections. And this is inspired by Kimberlé Crenshaw, who pioneered intersectional research in critical race theory. She discussed the story of Emma DeGraffenreid, who was a woman at General Motors, and who claimed that the company's hiring practices discriminated against black women. But in the court opinion, the judges ruled that General Motors hired many women for secretarial positions and many black people for factory roles, and thus could not have discriminated against black women. What they failed to do was look at the intersection of the two, and understand that the experience there might be fundamentally different from the experiences of either of these subgroups in isolation. And the same becomes true when you start looking at errors that are regularly made in deep learning systems. So we've been able to uncover a lot of different kinds of unintended errors by looking not only at disaggregated evaluation but also at intersectional disaggregated evaluation. Um, so I'm going to walk through a bit of how this works. This is probably going to be review for most of you, but I think it's really important to understand, because it also ties to how we measure fairness, and what we're talking about when we say algorithmic fairness. So, the confusion matrix. Okay, are you guys familiar with the confusion matrix? [LAUGHTER] I just want to know where we are. Okay. Awesome. Cool. So you're familiar with the confusion matrix, right: you have model predictions and references. And you can look at these as negative and positive in a binary classification kind of approach, where if the ground truth says something is true and the model predicts it's true, it's a true positive. If the ground truth says it's false and the model predicts it's false, it's a true negative. And the errors, the different issues that arise, are false negatives and false positives. So in a false positive, the ground truth says something is negative but the model predicts that it's positive; and in a false negative, vice versa. From this basic breakdown of errors, you can get a few different metrics.
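To make the disaggregated evaluation idea concrete, here is a hedged sketch (my own illustration, not code from the lecture) that computes precision and recall separately per subgroup from binary predictions; the array contents and subgroup labels are assumptions.

```python
# Sketch: disaggregated evaluation from confusion-matrix counts.
# Instead of one overall score, compute metrics per subgroup.
import numpy as np

def prf(y_true, y_pred):
    """Precision and recall from binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical data: label, prediction, and a subgroup attribute per example.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])

for g in np.unique(group):
    mask = group == g
    p, r = prf(y_true[mask], y_pred[mask])
    print(f"subgroup {g}: precision={p:.2f} recall={r:.2f}")
# Intersectional evaluation is the same idea with a finer-grained
# group attribute, e.g. gender-by-race combinations.
```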
Um, these metrics actually trivially map to a lot of different fairness criteria. So for example, if we're looking at something like female versus male patient results and figuring out things like precision and recall, which is relatively common in NLP, then having equal recall across your subgroups is the same as the fairness criterion of equality of opportunity. I could work through the math, but the main point is that it says that, given that something is true in the ground truth, the model should predict that it's true at equal rates across different subgroups. So this ends up being equivalent to having the same recall across different subgroups. Similarly, having the same precision across different subgroups is equivalent to a fairness criterion called predictive parity. And fairness has been defined again and again: some of these definitions originally came in 1966, following the Civil Rights Act of 1964; they were reinvented a few times, and most recently reinvented in 2016. But they all boil down to this disaggregated comparison across subgroups, and the metrics end up being roughly equivalent to what we get from the confusion matrix, specifically in classification systems. So which kind of fairness metric you use, which criteria you want to use to look at the differences across different subgroups, really comes down to the trade-offs between false positives and false negatives. This is the same problem that you're dealing with when you're figuring out how to evaluate generally. There's no one fairness criterion to rule them all; deciding which one is better than the other is the same as trying to decide which is better, precision or recall, right? It depends on what the problem is and what you're interested in measuring. So a case where false positives are less harmful than false negatives, and so you want to prioritize evaluation metrics that stress the false negative rate across subgroups, is privacy in images. Here a false positive means something that doesn't need to be blurred gets blurred. That's just kind of a bummer. But a false negative means something that needs to be blurred is not blurred, and that can mean identity theft, a much more serious issue. And so it's important to prioritize the evaluation metrics that stress the false negative rates. An example where false negatives might be better than false positives is spam filtering. A false negative could be a spam e-mail that's not caught, so you see it in your inbox; that's usually just annoying, not a big deal. But a false positive here would be an e-mail flagged as spam and removed from your inbox, which, if it's from a friend or a loved one, can be a loss, maybe a job offer, something like that. All right. So I've just covered how AI can unintentionally lead to unjust outcomes, and some of the things to be aware of here are the lack of insight into sources of bias in the data and in the model, and the lack of insight into the feedback loops, where data originally collected as an example of what humans do is then repurposed, re-used, acted on, and fed further in.
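Continuing the sketch above (again my own hedged illustration, not the lecture's code): once you have per-subgroup metrics, the fairness criteria just named reduce to comparing them. The per-group numbers and the tolerance threshold here are arbitrary assumptions.

```python
# Sketch: fairness criteria as gaps in disaggregated metrics.
# Equality of opportunity ~ equal recall across subgroups.
# Predictive parity       ~ equal precision across subgroups.
per_group = {
    "f": {"precision": 1.00, "recall": 0.67},  # hypothetical numbers
    "m": {"precision": 0.50, "recall": 0.50},
}

def max_gap(metric):
    vals = [m[metric] for m in per_group.values()]
    return max(vals) - min(vals)

TOLERANCE = 0.05  # arbitrary; what counts as "equal" depends on the application
print("equality-of-opportunity gap (recall):", round(max_gap("recall"), 2))
print("predictive-parity gap (precision):   ", round(max_gap("precision"), 2))
if max_gap("recall") > TOLERANCE:
    print("recall differs across subgroups: equality of opportunity is violated")
```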
Um, a lack of careful disaggregated evaluation, looking at the disparities, the differences between different subgroups, in order to understand this bias across the subgroups. And then human biases in interpreting, accepting, and talking about the results, which then further fuel the media cycles and the hype around AI right now. But it's up to us to influence how AI evolves. So I like to think of this in terms of short-term, middle-term, and long-term objectives. In the short term, today, we might be working on some specific model where we're trying to find some local optimum; we have a task, we have data, something like that. Those are short-term objectives. We might have a slightly longer-term objective of getting a paper published, or, if you're in industry, getting a product launched, whatever it might be. From there we might see our next endpoint as getting an award, or maybe becoming sort of famous for something for a few minutes, and that's cool. But there's a longer-term objective that we can work towards as well at the same time, and that's something like a positive outcome for humans and their environment. So instead of just focusing on these local decisions, these local optima, these paper-by-paper approaches to solving problems, you can also think about what the long-term objective is: where does this get me as I trace out an evolutionary path for artificial intelligence, down the line in 10 years, 15 years, 20 years? And one of the ways you can address this is by asking: how can the work I'm interested in now be best focused to help others? That involves talking to experts, going outside your bubble, and speaking across interdisciplinary fields like cognitive science, which I've just talked a bit about. Um, so let's talk about some things we can do. First off is data. A lot of the issues of bias and fairness in machine learning models really come down to the data. Unfortunately, in machine learning and deep learning, working on data is really not seen as sexy. There are a few datasets out there that people use, and that's what people use, and there's not a lot of analysis done on how well these datasets capture different truths about the world or how problematic they might be, [NOISE] but it's a pretty wide-open area that needs a lot of additional work. [NOISE] So the first thing is understanding the data skews and the correlations. If you understand your data skews and the correlations that might be problematic in your data, then you can start working on either models that address those, or data augmentation approaches, in order to make the dataset a little bit better, a little bit more representative of how you want the world to be. It's also important to abandon the single training-set-and-testing-set-from-a-similar-distribution approach to advancing deep learning. When we do projects in deep learning, we tend to have the training set and the testing set, and that's what we benchmark on and prioritize. But the point is, as you move around different testing sets, you're going to get vastly different results. And so by keeping to this one training-testing dataset paradigm, you're really likely to not notice issues that might otherwise be there (there's a small sketch of evaluating across several test slices after this paragraph).
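Here is a hedged sketch of moving beyond a single test set (my own illustration; the stand-in model, the synthetic slices, and the distribution shifts are all assumptions standing in for whatever your real setup uses):

```python
# Sketch: evaluate one trained model across several test slices,
# not just the single i.i.d. test split.
import numpy as np

rng = np.random.default_rng(0)

class ThresholdModel:
    """Stand-in for a trained classifier: predicts 1 if the feature > 0."""
    def predict(self, X):
        return (X[:, 0] > 0).astype(int)

def make_slice(shift, n=1000):
    """Hypothetical data slice; 'shift' moves the feature distribution."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

model = ThresholdModel()
test_slices = {
    "iid_test":       make_slice(shift=0.0),        # matches training distribution
    "shifted_domain": make_slice(shift=-1.0),       # distribution shift
    "subgroup_small": make_slice(shift=0.5, n=50),  # small, underrepresented slice
}

for name, (X, y) in test_slices.items():
    acc = float(np.mean(model.predict(X) == y))
    print(f"{name:>15}: accuracy={acc:.3f}")
# A big spread across slices is exactly what a single test-set number hides.
```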
And one way to really focus in on those issues is having a hard set of test cases that you really want to make sure the model does well on. These are things that are particularly problematic, things that would be really harmful to individuals if they were to experience the output. You collect those in a small test set, and then it's really easy to evaluate on that test set as you benchmark improvements on your model, as you add different kinds of things to your model, in order to see not just how your model is doing overall in terms of your testing dataset, but how well you're doing on these examples that you really want it to do well on, that you know are going to be a problem if it doesn't do well on them. And any degradation on those you might want to prioritize fixing above degradation in overall accuracy. It's also important to talk to experts about the additional signals that you can incorporate. Um, so we've put out a tool to help with this, with understanding data skews, called Facets; it's just available there. It's a really handy visualizer for slicing and understanding what some of the differences are between different subgroups and different representations, and you can dig in and explore a bit more. So this is just to help people come to terms with the data that they're actually using, and with where there might be unwanted associations or missing features. [NOISE] Another approach that's been put forward recently, specifically on the data side, is this datasheets-for-datasets approach. This is the idea that when you release a dataset, it's not enough to just release it with some pretty graphs and basic distributional information; you need to talk about who the annotators were, where they were, what the inter-annotator agreement was, what their background information was, the motivation for the dataset, all these other kinds of details. So now you actually know that this isn't just a dataset, this is a dataset that has these specific biases. There's no such thing as a dataset that isn't biased in some way. A dataset, by virtue of the fact that it's collected from the world as a subset, is a biased sample of the world in some way. The point is to make it clear what it is, how it is biased, what the various biases are that are important to know about in the dataset. So that's one of the ideas behind Datasheets for Datasets: releasing this information publicly alongside datasets. All right. Now let's switch a little bit to machine learning. There are a couple of techniques that I like to use; I'll talk about two. One is bias mitigation, which is removing the signal for a problematic output. So removing stereotyping, sexism, racism, trying to remove these kinds of effects from the model. This is also sometimes called de-biasing or unbiasing, but that's a little bit of a misnomer, because you're generally just moving bias around based on a specific set of words, for example, so to say the result is unbiased is not true. But you are mitigating bias with respect to certain kinds of information that you provide it with. And there's inclusion, which is adding signal for desired variables. That's the opposite side of bias mitigation.
So increasing model performance with attention to the subgroups or data slices with the worst performance. Um, so in order to address inclusion, adding signal for under-represented subgroups, one technique that's worked relatively well is multi-task learning (there's a small sketch of this setup after this paragraph). I've heard that you guys have studied multi-task learning, which is great, so I'll tell you a bit about a case study here. This is work I did in collaboration with the UPenn World Well-Being Project, working directly with clinicians, and the goal was to create a system that could alert clinicians if a suicide attempt was imminent. They wanted to understand the feasibility of these kinds of diagnoses when there were very few training instances available. So that's similar to the minority-group problem in datasets. [NOISE] In this work, we had two kinds of data. One was the internal data, which was the electronic health records, provided either by the patient or by the family. It included mental health diagnoses, and suicide attempts or completions if that was the case, along with the person's social media data. That was the internal data that we did not publish on, but that we were able to work with clinicians on in order to understand whether our methods were actually working. The external data, the proxy data, the stuff that we could publish on and talk about, was based on Twitter. This used regular expressions to extract phrases in Twitter feeds that read something like diagnoses, something like "I've been diagnosed with X" or "I've tried to commit suicide." And that became the proxy dataset: the corresponding social media feeds for those individuals stood in for the actual diagnoses. Um, and the state of the art in clinical medicine until this work (there's been more recently) was this single-task logistic regression setup, where you have some input features and then you're making some output prediction, like true or false. You can add some layers and start making it deep learning, which is much fancier. You can have a bunch of tasks in order to do a bunch of logistic regression tasks for a clinical environment. Or you can use multi-task learning, which is taking the basic deep learning model and adding a bunch of heads to it, predicted jointly at the same time. And here we had a bunch of diagnosis data, so we predicted things like depression, anxiety, and post-traumatic stress disorder. We also added in gender, because this is something that the clinicians told us actually had some correlation with some of these conditions, and they actually used it themselves in making decisions about whether or not someone was likely to attempt suicide. And this also used the idea of comorbidity. Multi-task learning is actually kind of perfect for comorbidity in clinical domains. Comorbidity is when, if you have one condition, you're a lot more likely to have another. So people who have post-traumatic stress disorder are much more likely to have depression and anxiety. And depression and anxiety tend to be comorbid, so people who have one often have the other.
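Here is a hedged sketch of the multi-task setup just described: a shared encoder with one sigmoid head per condition, trained jointly. This is my own minimal illustration in Keras (which the lecture mentions elsewhere), not the actual UPenn system; the input size, layer widths, task names, and random data are all assumptions.

```python
# Sketch: multi-task learning with a shared representation and
# one binary head per (comorbid) condition.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

n_features = 300  # e.g. averaged text embeddings; an assumption
inputs = layers.Input(shape=(n_features,))
shared = layers.Dense(128, activation="relu")(inputs)  # shared encoder
shared = layers.Dense(64, activation="relu")(shared)

# One head per task; comorbid conditions share the representation.
tasks = ["suicide_risk", "depression", "anxiety", "ptsd", "gender"]
outputs = [layers.Dense(1, activation="sigmoid", name=t)(shared) for t in tasks]

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Hypothetical training data: features plus one binary label per task.
X = np.random.rand(256, n_features).astype("float32")
y = [np.random.randint(0, 2, size=(256, 1)) for _ in tasks]
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

The design point is that a task with very few positive instances (like PTSD here) borrows statistical strength through the shared layers from the better-resourced, correlated tasks.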
So this comorbidity points to the idea that perhaps there's some underlying representation that is similar across these conditions, that can be leveraged in a deep learning model, with individual heads further specifying each of the different kinds of conditions. And what we found was that as we moved from logistic regression to single-task deep learning to multi-task deep learning, we were able to get significantly better results. This was true both in the suicide risk case, where we had a lot of data, and in the post-traumatic stress disorder case, where we had very little data. The behavior was a little bit different, though. Going from logistic regression to single-task deep learning when we had a lot of data, as we did with suicide risk, had the single-task deep learning model working better than the logistic regression model. But when we had very few instances, that's where the deep learning models really struggled a lot more, and so the logistic regression models were actually much better. But once we started adding heads for the comorbid conditions, the different kinds of tasks that related to whether or not the person might attempt suicide, we were able to bump the accuracy way back up again. And that's roughly 120 at-risk individuals in the suicide case that we were able to identify that we wouldn't otherwise have been able to notice as being at risk. One of the approaches we took in this was to contextualize and consider the ethical dimensions of releasing this kind of technology. So it's really common in NLP papers to give examples, but this was an area where we decided that giving examples of, say, depressed language could be used to discriminate against people, at job interviews or something like that, the sort of armchair-psychology approach. So we decided that while it was important to talk about the technique, and the utility of multi-task learning in a clinical domain and for bringing in inclusion of underrepresented subgroups, it had to be balanced against the fact that there was a lot of risk in talking about depression and anxiety and how those kinds of things can be predicted. So we tried to take a more balanced approach here, and since then I've been putting ethical considerations in all of my papers. It's becoming more and more common, actually. Um, so another kind of approach turns this on its head: instead of adding signal, you're trying to remove some effect, to mitigate bias in some way, and that's adversarial multi-task learning. I just talked about multi-task learning, so now I'll talk about the adversarial case. The idea in the adversarial case is that you have a few heads: one is predicting the main task, and the other one is predicting the thing that you don't want to be affecting your model's predictions. So for example, take something like whether or not someone should be promoted based on their performance reviews and things like that. You don't want that to be affected by their gender; ideally, gender is independent of a promotion decision. And so you can create a model that actually puts that independence criterion in place by saying: I want to minimize my loss on the promotion while maximizing my loss on the gender. And the way we do that is just predicting gender, and then negating the gradient (there's a sketch of this after this paragraph).
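A hedged sketch of the gradient-negation trick just described (my own illustration, not the lecture's code): a gradient reversal layer passes activations through unchanged on the forward pass and flips the sign of the gradient on the backward pass, so the shared encoder is pushed to be uninformative about the protected attribute. Layer sizes and names are assumptions.

```python
# Sketch: gradient reversal for adversarial bias mitigation.
import tensorflow as tf
from tensorflow.keras import layers, Model

@tf.custom_gradient
def reverse_gradient(x):
    # Identity on the forward pass; negated gradient on the backward pass.
    def grad(dy):
        return -dy
    return x, grad

class GradientReversal(layers.Layer):
    def call(self, x):
        return reverse_gradient(x)

n_features = 300  # an assumption
inputs = layers.Input(shape=(n_features,))
shared = layers.Dense(64, activation="relu")(inputs)

# Main head: the decision we care about (e.g. promotion).
main_out = layers.Dense(1, activation="sigmoid", name="promotion")(shared)

# Adversarial head: predicts the protected attribute (e.g. gender), but the
# reversed gradient trains the shared encoder to hide that signal.
adv = GradientReversal()(shared)
adv_out = layers.Dense(1, activation="sigmoid", name="gender")(adv)

model = Model(inputs, [main_out, adv_out])
model.compile(optimizer="adam", loss="binary_crossentropy")
# Minimizing the gender head's loss through the reversed gradient
# maximizes it with respect to the shared representation.
```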
So removing the effect of that signal. This is another adversarial approach. You might be familiar with generative adversarial networks; this is like two discriminators, two different task heads, where one is trying to do the task that we care about, and the other one is removing the signal that we really don't want coming into play in our downstream predictions. So this is a way of putting this into practice: the probability of the predicted output, given the ground truth and your sensitive attribute like gender, is equal across all the different values of the sensitive attribute, equal across all the different genders. And that's an example of equality of opportunity in supervised learning being put into practice. It's one of the key fairness definitions; it's equivalent to equal recall across different subgroups, as I mentioned earlier. And this is a model that will actually implement that, or help you achieve it, where you're saying that a classifier's output decisions should be the same across sensitive characteristics, given what the correct decision should be. Okay, so how are we on time? Cool. Are there any questions so far? Are we good? Okay, cool. So I'm going to go into a little bit of a case study now, an end-to-end system that Google, that my colleagues, have been working on, that is in the NLP domain and deals with some of these bias issues. You can find out more about this work in papers at AIES 2018 and a FAT* tutorial in 2019, called Measuring and Mitigating Unintended Bias in Text Classification. And this came out of Conversation-AI, which is part of what's called a bet at Google, a kind of spin-off company called Jigsaw that focuses on trying to combat abuse online. The Conversation-AI team is trying to use deep learning to improve online conversations, and they collaborate with a ton of different people to do that. So how this works (you can try it out too, on perspectiveapi.com) is that given some phrase like "you're a dork," it puts out a toxicity score associated with it, like 0.91. [NOISE] And the model starts falsely associating frequently attacked identities with toxicity. This is a kind of false-positive bias. So "I'm a proud tall person" gets a model toxicity score of 0.18; "I'm a proud gay person" gets a toxicity score of 0.69. And this is because the term gay tends to be used in really toxic situations, and so the model starts to learn that gay itself is toxic. But that's not what we want, and we don't want these kinds of predictions coming out of the model. The bias here is largely caused by dataset imbalance; again, this is the data rearing its head. Frequently attacked identities are really overrepresented in toxic comments. There's a lot of toxicity towards LGBTQ identities; it's really horrible to work on this stuff, it can really affect you personally. [LAUGHTER] And one of the approaches that the team took was just to add nontoxic data from Wikipedia, helping the model to understand that these kinds of terms can be used in more positive sorts of contexts.
One of the challenges with measuring how well the system was doing is that there's not a really nice way to have a controlled toxicity evaluation. In real-world conversation, it can be anyone's guess what the toxicity of a specific sentence is, especially if you really want to control for different subgroups or intersectional subgroups, and it can be even harder to get really good data to evaluate properly. So what the team ended up doing was developing a synthetic data approach. This is kind of like a bias Mad Libs, where you take template sentences [NOISE] and you use those for evaluation. This is the kind of evaluation you'd want to use in addition to your target downstream dataset, but it helps you get at the biases specifically. So you take some template phrase like "I am a proud blank person," and you fill in different subgroup identities. And you don't want to release the model unless you see that the scores across these synthetic template sentences are relatively the same across all of the different subgroup substitutions. Cool. Um, some assumptions they made in this were that the dataset didn't have annotated bias, and they didn't do any causal analysis, because they were just trying to focus in particular on this toxicity problem. They used a CNN (convolutional, yeah, you guys know, blah, blah, blah) with pretrained GloVe embeddings. This is probably your bread and butter, pretrained GloVe embeddings; I'm sure you know all about this and Word2vec. Cool; a Keras implementation of this. And they used these kinds of data augmentation approaches, both the Wikipedia approach as well as actually collecting positive statements about LGBTQ identity. So there's this project called Project Respect at Google, where we go out and talk to people who identify as queer, or people who have friends who do, and talk about this in a positive way, and we add this as data, so the model can actually learn that this can be a positive thing. And in order to measure the model performance here, again it's looking at the differences across different subgroups, and trying to compare the subgroup performance to some sort of general distribution. Here they use AUC, where AUC is essentially the probability that a model will give a randomly selected positive example a higher score than a randomly selected negative example. So here you can see some toxic comments and nontoxic comments with an example of low AUC; and here is an example with high AUC, where the model is doing a relatively good job of separating these two kinds of comments. And there are different kinds of biases that they've defined in this work. Low subgroup performance means that the model performs worse on subgroup comments than it does on comments overall, and the metric they've introduced to measure this is called subgroup AUC. Another one is subgroup shift, which is when the model systematically scores comments from some subgroup higher; this is the shift to the right. And then there's also this Background Positive, Subgroup Negative shift to the left. Yeah, that's sort of restating what I said.
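Here is a hedged sketch of the synthetic-template evaluation plus a subgroup AUC check in the spirit of the metrics just described (my own illustration; the templates, identity terms, and the toy `toxicity_score` stand-in are assumptions, not the actual Perspective model):

```python
# Sketch: "bias Mad Libs" templates plus a subgroup AUC check.
import numpy as np
from sklearn.metrics import roc_auc_score

def toxicity_score(text):
    # Toy stand-in for a real toxicity model; it scores the word "gay"
    # as toxic, to mimic the false-positive bias discussed above.
    return 0.9 if ("idiot" in text or "gay" in text) else 0.1

templates = ["I am a proud {} person", "Being {} is great"]
identities = ["straight", "gay", "tall"]

# All template sentences are non-toxic (label 0); add toxic examples (label 1).
texts, labels, groups = [], [], []
for t in templates:
    for ident in identities:
        texts.append(t.format(ident))
        labels.append(0)
        groups.append(ident)
for toxic in ["you are an idiot", "what an idiot"]:
    texts.append(toxic)
    labels.append(1)
    groups.append(None)

scores = np.array([toxicity_score(x) for x in texts])
labels = np.array(labels)
groups = np.array(groups, dtype=object)

for ident in identities:
    # Simplified subgroup AUC: the subgroup's non-toxic sentences against
    # the toxic background (close in spirit to the BPSN metric above).
    mask = (groups == ident) | (labels == 1)
    print(ident, "subgroup AUC:", roc_auc_score(labels[mask], scores[mask]))
# The "gay" subgroup gets a low AUC: its non-toxic template sentences
# score as high as the genuinely toxic comments.
```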
The shift can go either way, to the right or the left, and there are different metrics that can define each of these. Cool. And the results in this went through not only qualitative examples and general evaluation metrics, but also focused in on some of the key metrics defined for this work, these AUC-based approaches. And they were able to see significant differences between the original release, which didn't account for any of these unintended biases, and downstream releases, which did, which incorporated this kind of normative data reflecting the sorts of things we thought the model should be learning. Cool. Um, so the last thing to keep in mind, as you develop and work towards creating deeper, better models, is to release responsibly. This is a project I've been working on with a ton of different people, called Model Cards for Model Reporting. It's a little bit like the next step after Datasheets for Datasets: where Datasheets for Datasets focuses on information about the data, Model Cards for Model Reporting focuses on information about the model. It captures what it does, how it works, and why it matters. And one of the key ideas here is disaggregated and intersectional evaluation. It's not enough anymore to put out human-centered technology that just has some vague overall score associated with it; you actually need to understand how it works across different subpopulations, and you need the data that can tell you that. So here are some example details that a model card would have: who it's developed by; what the intended use is, so that it doesn't start being used in ways it's not intended to be used; the factors that are likely to be affected by disproportionate performance of the model, so different kinds of identity groups, things like that; the metrics that you're deciding to use in order to understand the fairness of the model, or the different performance of the model across different kinds of subgroups and factors; information about the evaluation data and training data; as well as ethical considerations, so what were some of the things you took into account, or what are some of the risks and benefits that are relevant to this model; and additional caveats and recommendations. So for example, in the Conversation-AI case, they're working with synthetic data; that's a limitation of the evaluation that's important to understand, because it can tell you a lot about the biases, but it doesn't tell you a lot about how the model works generally. [NOISE] And then the key component in the quantitative section of the model card is to have this both intersectional and disaggregated evaluation. And from there, you trivially get to different kinds of fairness definitions: the closer you get to parity across subgroups, the closer you're getting to something that is mathematically fair. Okay. So hopefully, by paying attention to these kinds of approaches and taking into account all these kinds of things, we can move from majority representation of data in our models to something more like diverse representation, for more ethical AI. Okay. That's it. Thanks. [APPLAUSE]
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_16_Coreference_Resolution.txt
Hi everybody, time to get started. Okay. Um, so today what we're going to talk about is a topic called coreference resolution, and I'll explain in just a minute what that is, but before getting on to that, just a couple of words on the announcements. So the TAs are feverishly working on getting homework five grades worked out, so we hope that we can deliver those to you tomorrow, just in case you're anxious to know them before you make your final decisions about things. And then the other thing that you should be remembering is that the milestone for the final project is this Tuesday. Now, I will confess that even to me it seems like, "Boy, this milestone came around really quickly," so you probably feel that doubly, I realize. And so, you know, I do apologize for that a little bit, but really our hope was that we could actually use this to be helpful, and to give you feedback on what you're doing, and suggestions. And it just really seemed like the only chance we have to turn around feedback on the projects before they go into the final week of the quarter is if we can get stuff Tuesday, and then hope to be turning it around again by the end of the week. So the hope is to help you, not just to create obstacles and roadblocks in your life. Okay. So today what we're going to do is learn more about a linguistic topic for a change, and learn some more stuff about what goes on in coreference resolution. First of all, I'm going to talk about the task, and then go on to some of the kinds of models that people build for coreference resolution. So first of all, what is it? The idea of coreference resolution is this: we have a text, "Barack Obama nominated Hillary Rodham Clinton as his Secretary of State on Monday," and this text, like most texts, is about entities, where entities are commonly human beings, but they can be other things, like gods or talking giraffes or whatever. So we want to find where entities are mentioned; where entities are mentioned, those are referred to as mentions. So things like Barack Obama, Secretary of State, he, and her are mentions of entities. And then, when we talk about coreference resolution, the task that we want to do is to say which of these mentions refer to the same entity, the same real thing in the world. So one entity that's mentioned in this text is Barack Obama, and he's referred to later in the text as his and he, and so these three red noun phrases are all coreferent to each other, and they refer to this real-world entity. And then we have these references Hillary Rodham Clinton, Secretary of State, her, she, First Lady; they're all references to a different entity, and they all refer to this person. So those are examples of coreference. In a way this sort of seems trivially obvious to a human being looking at things, but it can actually be kind of tricky and hard. Um, so I thought we could spend a few minutes interactively working out coreference together, so that you guys can think about it all for a few minutes. So here's part of a little story. It's a story by Shruthi Rao called The Star.
Um, now, I confess that since this is a CS class, not a literature class, I did a little bit of helpful editing of this text to make it shorter, so I could fit more of what was going on onto the page, but everything that is a sort of linguistic [inaudible] is something that comes from the original text. Okay. So, in this text, who is the first entity that's mentioned? Vanaja, okay. So it's Vanaja. Now, let's do it forward: where else is Vanaja mentioned in this text? Her son, right? So this her (not the son, but the her) is a reference to Vanaja, right? Um, she resigned. Okay. After that? She bought. Okay, so there's another she. Was there another reference before that? Herself, right? So herself is also a reference to Vanaja. Okay, so then there's again she made this, she, okay. So we've done Vanaja. Okay, that's a good start. So then we've got Akhila. Where's Akhila next referred to? As Akhila. Okay, there we go. Are there other references to Akhila? Maybe not. Okay. What's the next entity that's mentioned? Prajwal. Okay. So what other references are there to Prajwal? They. They? Okay, so here's a tricky one, right? This they, I mean, who does that refer to? It refers to Prajwal and Akash. Yeah, so this they refers both to Prajwal and to this Akash. So that's something that happens in human languages. This is referred to as split antecedents, where you have one thing, they, that's referring to two separate things that came before it. Um, so here's one of my first sad admissions about natural language processing technology: none of the NLP systems that we're going to talk about later today, or in general that have been built, deal with split antecedents. They automatically lose as soon as there are split antecedents. So that's a bit sad, but that's the state of the technology. It's something we could still work to improve, but okay, there's this sort of they that's kind of half Prajwal. Okay, so there's directly Prajwal here, but was there another place early in the text where Prajwal is effectively mentioned? Yeah, so Akhila's son is really another mention of Prajwal, right? Okay. Any other mentions of Prajwal? Maybe not. Okay, then we go on. Who's the next entity? Akash. So we have Akash here, and then again we have that her son referring to Akash. And here was Akash. Okay, what other mentions of Akash are there? Okay, so there's another Akash here, and him. Okay, there's another Akash. Okay, so those are the obvious Akashes. There's sort of a tricky case here where you could wonder what the right treatment is, right? You know, it sort of says Akash was to be a tree, all right. So in some sense the tree is Akash. So really, in terms of reference in this story, the reference of the tree is the same as Akash. And you could think that means you should treat the instances of the tree here, and later on the nicest tree, as really being Akash as well. That doesn't quite feel right, but this is something that comes up in coreference, right? So here we have a sort of predicative construction with, you know, be.
And when you have sentences like, you know, "my child is the smartest kid in the class" or something like that, in some sense you're saying that the smartest kid in the class has the same reference as my child. And some systems count links over that kind of predication and say that is coreference, whereas other ones don't, and think that that's not quite reasonable. So different things go on. Okay. So those are a fair number of entities. There are obviously lots of other things that are mentioned, right? There's the local park; that's a mention of some entity. There's the school, so there's this school here, and the school is coreferent with the preschool right here, right? And then there's again this sort of tricky one of how to treat the naughty child Lord Krishna, because, you know, in some sense Prajwal is representing that. And then there are lots of other entities that are mentioned, right? There's a t-shirt, and there's trousers, and things like that. Another tricky thing that turns up here, when you get later on into the story, is that you can have entities that have parts. So we not only have a tree, but that tree then has a lot of parts, right? The tree has a trunk, and the tree has foliage, and things like that. And there are these red balls that are representing fruits, right? So there's a lot of stuff that's somehow connected together and somehow separate. And that doesn't fit terribly well with the kind of models we use for coreference either, because really we build our coreference models basically out of this notion of entities. But somehow there's this complexity: human beings have parts too, right? We have hands and faces, and we can't say, oh, that's a separate entity, but they're somehow involved with the other entity. Okay. Um, hope that's sort of useful to give some idea. Why is coreference resolution useful? Well, there are all kinds of things that we'd like to do in natural language processing that you really can't do well unless you know how to do coreference resolution. For anything that we want to do in terms of question answering, summarization, extracting facts from texts, or anything like that, there are places we're going to fail unless we can do coreference resolution. Because if we're reading a piece of text and it says he was born in 1961, we can get a fact out, or answer a question, if we can work out who he was, but we probably can't otherwise. Another place where this is very useful is in machine translation, since lots of languages drop pronouns. You don't have to give explicit pronouns, but you need to be able to work out how to fill them in, and this means making coreference decisions about the arguments of verbs. So here are a couple of examples going from Spanish to English. In Spanish, you can freely drop the subjects of verbs, and in these sentences, in the because clause, there's no overt subject. And so it comes out as "Alicia likes Juan because he's smart." So Google Translate has stuck in a he, and that's right. And to stick in that he, it's implicitly making a coreference decision and saying, okay, the subject of this adjective smart should be Juan, who's male, and therefore I should say he.
But, you know, the reality is Google Translate knows nothing about coreference in making these coreference decisions. And as has been covered quite a bit in the media now, and I think came up in an earlier class, Google Translate mainly just defaults to male. So if you flip it around and say Juan likes Alicia, it also says "because he's smart," whereas probably it should be "because she's smart" in that case. And indeed you notice the bad effects of that everywhere. Many languages, Turkish, Indonesian, don't actually have gender, so they're much less sexist languages than English, French, or German. But what happens when you then translate, where you just have a generic third-person pronoun, is that Google Translate is essentially using its language model, which means it reconstructs the worst of the stereotypes: she is a cook, he is an engineer, he is a doctor. And, well, in a connected piece of this course, if you'd like Google Translate to be able to do better than that, again what would be required is that you could actually do coreference resolution and track the actors in the text as you go along. One final example, which we haven't really talked about yet, but we'll get back to soon now because the quarter is almost over, is doing things with dialogue agents or chat systems. As soon as you're going to do anything more than a single-turn dialog, you need to start dealing with reference. So if you've got something like "book tickets to see James Bond," then you want to say something like, "Spectre is playing near you at 2:00 and 3:00 today. How many tickets would you like?" "Two tickets for the showing at three." As shown in the colors, there are various kinds of reference going on here where things have related reference, but it's kind of complicated. And this is something that we'll come back to in a moment. So James Bond and Spectre aren't obviously the same thing, but in a context like booking movies they are the same thing, because one is the name of a character in a movie series, and the other is the name of a currently showing movie that belongs to that series, so they're associated in a sort of subtle way that isn't exact identity, but is relevant to a lot of the things that we want to do. I'll come back to that in a little bit when we talk a bit more about the linguistics of this. Okay. So if we want to do the task of coreference resolution, there are essentially two steps. The first step is, gee, we want to work out what mentions there are in the text that we should be doing something with. And this one is effectively pretty easy, but I'll have just a few slides on it immediately. And then what the bulk of the class is going to be on is working out coreference between mentions. If you think about this, coreference is essentially a clustering task, because if you do the first task, you have a set of mentions, and then you want to be saying, well, how can I group these into clusters that have the same reference? And so that's what we're going to look more at doing. So, quickly, on mention detection. For mentions, we want to find all the spans that are candidates for referring to some entity. And the answer to what these candidates are is basically that they're all the noun phrases in the text. So normally people think of there being three types of mentions that we identify.
There are pronouns, I, you, he, she, it, etc., that are referring to different entities. There are explicit names, like the Barack Obama and Hillary Clinton examples. And then, where many of the tricky examples come from, there are common noun phrases, like a dog or the big fluffy cat stuck in the tree. The big fluffy cat stuck in the tree is a mention; it's actually a complex mention, because it also has other mentions embedded inside it, so the tree is also a mention. Okay. So how can we detect mentions? Well, one answer is to say, well, we've looked at various other NLP systems on and off, and we can just use those NLP systems as preprocessing systems to find mentions (there's a small sketch of this after this paragraph). For pronouns, there are part-of-speech taggers that say what's a noun, or a verb, or a pronoun, so we can run those and find all the pronouns, and we're done. For the names of things like Barack Obama, we've talked a couple of times about named entity recognizers, so we can run those and find all the named entities. Then, for common noun phrases, that's where we need parsers to find the structure of the sentence and find where the noun phrases are. We have talked about dependency parsers, and one choice is that you can use a dependency parser to find the nominal arguments and work with them; that's actually a little bit subtler than just picking out spans that are common noun phrases. The other notion of parsing, which we come back to next week, is constituency parsing. In some sense, constituency parsers are the simplest way to find mentions for this process. Um, most of this seems, and is, easy, but there are tricky cases as to what counts as a mention or not. So, if it's something like "it is sunny," I mean, is that it a mention of something? It sort of seems like it's not really; it seems like it's just an it that you stick at the start of the sentence that doesn't mean anything. So that's maybe not a mention. Um, every student: is every student a mention? At best it's some kind of collective, but it's not a very clear, concrete reference. That goes further if I use different quantifiers (every and no are called quantifiers): no student definitely doesn't have reference, because it's not pointing at anything, right? It's asserting a claim of nonexistence, so it definitely isn't a mention of anything. Um, the best donut in the world: does that have reference? That's unclear. This is the kind of thing that actual philosophers of language debate over, right? If there was agreement on what the best donut in the world is, then maybe it has reference, but I can say sentences like, "I'm searching everywhere to find the best donut in the world," and in that sentence it doesn't have any reference, right? It's sort of an intensional description of what I'm hoping to find; there's no concrete thing it refers to. Um, things like quantities, 100 miles: that sort of behaves like a noun phrase, but it's really a quantity that doesn't have reference. And so then there's the question of how you can deal with this stuff. Well, our tool whenever we want to deal with stuff is that we train classifiers, which pick out the things that are mentions and the things that aren't.
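As a hedged sketch of the pipeline-based mention detection just described (my own illustration, assuming spaCy and its small English model are installed; this is not the lecture's system): pronouns come from part-of-speech tags, names from the named entity recognizer, and common noun phrases from the parser's noun chunks.

```python
# Sketch: candidate mention detection from off-the-shelf NLP tools.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed
doc = nlp("Barack Obama nominated Hillary Rodham Clinton as his "
          "Secretary of State. He chose her on Monday.")

mentions = []
# 1. Pronouns, from part-of-speech tags.
mentions += [tok.text for tok in doc if tok.pos_ == "PRON"]
# 2. Names of entities, from the named entity recognizer.
mentions += [ent.text for ent in doc.ents]
# 3. Common noun phrases, from the parser (noun chunks approximate NPs).
mentions += [np.text for np in doc.noun_chunks]

print(sorted(set(mentions)))
# A real system would then either filter spurious candidates with a
# classifier or, as discussed next, just keep them all and rely on the
# clustering step to leave the stray ones unclustered.
```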
So one thing you could do is write a classifier that filters out these spurious things that you want to say aren't really mentions, and people absolutely have done that. But commonly people actually skip that step, and instead just have the mention detector find all candidate mentions, because it turns out that tends to work pretty well. After we've found all of our mentions, we're then going to be doing this clustering process to find coreferent mentions, and if there are just a few stray mentions like "no student" and we don't cluster them wrongly with anything else, it doesn't do any harm, because we're mainly involved in the clustering process. Okay. Something you might be wondering: I've sort of implied now that we have a pipeline. I'm saying we're going to run a part-of-speech tagger, a named entity recognizer, a parser, and a mention detector, and then eventually we're going to run this coref clustering system, so we have a five-step pipeline. Is that the only way you can do coreference resolution? The traditional answer was yup, that's the way you did it. Essentially all systems for coreference resolution until approximately 2016 were a pipeline that went through about those stages. But just recently, and I will cover one such system later in the class, people in the neural world have started doing what's been effective in a lot of places in the neural network world: saying, can we just build an end-to-end coreference system that starts with the plain text of a paragraph and feeds out coreference clusters, without any intervening pipeline steps? I'll show you a bit more about how that works. But before we get into systems, I just wanted to say a little more about the linguistics of coreference. There's actually quite a lot of interesting stuff here, and to a fair degree it's not stuff that's been thought about very much by people who build NLP systems. I already mentioned, from the Shruthi Rao story, the example of split antecedents: that's a clear linguistic phenomenon that happens, and it's not even incredibly rare, and yet people build these simple machine learning models that just can't deal with it. There's really quite a bit more structure to what happens in the linguistics of coreference that isn't being exploited in most of the systems people build, so I just wanted to show you a bit more of that. Essentially, to understand more about how people see things linguistically, there are two concepts that are related and commonly confused, but really different. One is coreference: we say two mentions are coreferent when they refer to the same entity in the world. So if it's Donald Trump and "the current president," they're two mentions and they refer to the same person in the world, and so that is a relationship of coreference. That's contrasted with anaphora. The idea of anaphora is that some terms in a text don't have independent reference, and you work out their reference by relating them back to another thing in the text. So if we have the sentence "Barack Obama said he would sign the bill," "he" is an anaphor.
And if I just say "he," what does "he" refer to in the abstract? Well, apart from something male, you've got no idea, right? You can't work out what "he" means just by knowing the word; you have to be looking at a text and interpreting it relative to that text. And if you're interpreting it relative to the text, you're then in the situation of: okay, I see, this refers back to Barack Obama. So "he" is another mention of Barack Obama, and this is the concept of anaphora. So the picture we have is like this: you can have independent mentions which do refer to the same thing in the world, so they're coreferent, but in many cases, such as when they're full mentions like "President Obama" versus "Barack Obama," they don't have any textual relationship; they just happen to refer to the same thing in the world. And that contrasts with cases like "Barack Obama said he would do something," where the "he" has a textual relationship back to "Barack Obama." That's an example of anaphora. This might, up until now, feel like an almost meaningless distinction. But something that maybe gives you more of a sense that there's something useful here is that these textual relationships exist even when there isn't coreference. We mentioned before these cases like "no dancer": "no dancer" doesn't have reference; it refers to nothing. But if you have a sentence like "no dancer twisted her knee," we have an anaphor here, and that anaphor refers back to "no dancer," despite the fact that "no dancer" doesn't have reference. So we can still have the anaphoric textual relationship. And indeed, "her knee" is then a part of her, so these are part-whole relationships again; but "her knee," in a sense I'll come back to, is also an anaphor which is interpreted with respect to the dancer. So we have two anaphoric relationships here, even though we have no reference. There's another interesting case of anaphoric relationships which aren't the same as reference, which is looser forms of anaphoric relationship. You get lots of sentences like this: "We went to see a concert last night; the tickets were really expensive." We have this mention of "the tickets," but really, to interpret "the tickets," we have to interpret them with respect to the earlier mention "a concert," because what this is really saying is that the tickets for the concert were really expensive. So this is also referred to as an anaphoric relationship, where the meaning of "the tickets" has to be interpreted textually based on another noun phrase. But it's not a coreference relationship; the concert and the tickets are clearly two different entities. These looser cases are referred to as bridging anaphora, because you have to supply for yourself the bridge, the relation that connects together the antecedent and the anaphor. Okay. So we have this picture of a not-complete crossover between coreference and anaphora, as we've discussed. I have one other note on anaphora. Has anyone here ever done any Ancient Greek? Any Ancient Greek? [LAUGHTER] Yes. Okay. So, from the origins of the word, anaphora is meant to be when you're finding your textual reference before you.
And so there's actually a complementary term of art, cataphora, where you're finding your reference after you. Here is a beautiful example of cataphora, from Oscar Wilde's The Picture of Dorian Gray: "From the corner of the divan of Persian saddle-bags on which he was lying, smoking, as was his custom, innumerable cigarettes, Lord Henry Wotton could just catch the gleam of the honey-sweet and honey-colored blossoms of a laburnum." So here we have this mention, Lord Henry Wotton, and there are two anaphors that refer to Lord Henry Wotton, "he" and "his," and they both come before "Lord Henry Wotton." So these are referred to as instances of cataphora, among a certain kind of classical scholar. And in case you don't know what a laburnum is, this is a laburnum. [LAUGHTER] Right. So this is cataphora. Now there are two sad things to say. The first sad thing is that in modern linguistics, the term cataphora is completely disused; we just use the word anaphor everywhere, meaning a word that gets its reference from some other mention in the text, and it doesn't matter what side it's on. So we go downhill one stage in linguistics, but then we get to NLP and we go downhill a second stage. Because what you'll see is that, in general, the systems people build for reference resolution don't make any distinction of direction at all: once you find a mention, you're always looking backwards for its reference, and there's no idea that maybe sometimes you could look forwards. So effectively, what the systems end up doing is saying: well, there's a "he" here, there's a "his," etc., and you'll eventually get to "Lord Henry Wotton" and try to find its reference by looking backwards, even though that's ill-formed in any linguistic sense, when really "he" and "his" should have been looking for their reference forward. Okay. Is everyone good up to there, any questions? Okay. We'll move ahead and try to get on to kinds of coreference models. I want to tell you as much as I can, in the 45 minutes I have left, about the kinds of models people build for coreference, and I hope to mention quickly four different ways people have looked at it. I want to tell you a teeny bit about classical rule-based coreference; then mention pair coreference; spend the most time on mention ranking systems, which have tended to be the easiest simple systems; and then say just a little about clustering systems, which should be the right way to do it, but which in practice has been hard to get the best performance from. Okay. So here's a bit of history. This guy here is Jerry Hobbs; he just had his retirement party from the University of Southern California last month. Jerry Hobbs, way back when, wrote a famous paper, in 1978, on coreference resolution. In that paper he proposed what's normally now referred to as the Hobbs algorithm, although in his paper he refers to it as a naive algorithm, and I'll come back to that distinction in just a moment. What the Hobbs algorithm is, well, actually I should say this: this algorithm is just for finding the reference of pronouns.
One can extend it to other cases, but the part I'm going to show you is just the part for the reference of pronouns. So when you find a pronoun and you want to say what it's coreferent with, what you do is run this mechanical algorithm that looks at a parse of a sentence and works out what to do with it. Begin at the NP immediately dominating the pronoun; go up the tree to the first NP or S; call this X, and the path to it p; traverse... and it goes on and on. There's more of it; that was only the beginning, and there are a lot more stages, but I don't really want to go into the details. To try and give you the flavor of it, here's a piece of text: "Niall Ferguson is prolific, well-paid, and a snappy dresser. Stephen Moss hated him." If you can remember any of the steps of that algorithm: here's our pronoun, "him." What it said to do was begin at the NP, the noun phrase above the pronoun, and then go up to the first noun phrase or S above that; here is the S above that. Then what you're meant to do is, from there, go left to right through the stuff that came before it. There's a lot of cleverness in this handwritten algorithm; this is in the space of clever handwritten algorithms. What it's reflecting is that you might think you should just go to the closest thing to find the reference, but actually, for reference within the same sentence, it's much more common for the highest syntactic roles to be what you're coreferent with. So you're more likely to be coreferent with a subject than an object, and more likely to be coreferent with an object than with something like a noun phrase that's inside a prepositional phrase following the object. So we start from the left here and say: here's a noun phrase, Stephen Moss; that's the first one we come to. And then there's this clever bit of text that says: traverse branches below X to the left of the path, left to right, and propose as antecedent any noun phrase that has a noun phrase or S between it and X. So it's saying this will be a candidate if, and only if, there's some other noun phrase or S in between. What that's saying is that in "Stephen Moss hated him," this "him" cannot refer back to Stephen Moss, and that's pretty much a fact of English syntax. But what it wants to do is distinguish that from another thing we could have had here, a noun phrase with another possessive noun phrase inside it. So if we had something like "Stephen Moss's mother hated him," then in that case it would be perfectly okay for "him" to be coreferent with Stephen Moss, and the algorithm allows that, because relative to that inner noun phrase there is another noun phrase above it and in between. Okay. So that didn't work as an antecedent, so we go on to the next step of the algorithm. The next step says we should proceed backwards through the preceding sentences, right to left. That captures an important heuristic: proximity is actually a good signal, because coreference for pronouns is usually close by overall. So we go to the first sentence back.
And within that sentence, we again go left to right, because there's the same kind of subject-prominence rule. So we start in this sentence and say: okay, here's a noun phrase. And now, because we're in a different sentence, there's nothing wrong with this one. So we say: aha, we have a candidate; Niall Ferguson is a possible antecedent, and it's the first one we found. Therefore we say that "him" refers back to Niall Ferguson. And this algorithm actually gives the right answer, if you could follow along with all of that, though it sounds like horrible handwritten stuff. Jerry Hobbs was aware that this was horrible handwritten stuff, but he was interested in this algorithm for a couple of reasons. Reason one is that this is actually one of the first places in natural language processing where someone produced a baseline. For final projects and elsewhere, and in stuff we gave you, it's standard now in NLP and other areas that for anything you're doing, the first thing you should do is have a baseline, a simple system, and see how well it works. This was his simple rule-based system for doing coreference, and he wanted to observe that this baseline was actually pretty good: it gave the right answer a lot of the time. So the challenge was how to build a system that did better than this baseline. He was well aware it was a dumb algorithm, but he proposed it as a good baseline for coreference resolution. What he was really interested in (remember that we're back in the 1970s here) was how to do knowledge-based pronominal coreference resolution. Essentially, what he was noticing is that these syntactic factors I was mentioning (prefer subjects, prefer close by, etc.) are all useful predictors, but there are lots of cases where they don't give the right answer, and to know what's really the coreferent thing, you have to actually understand what's being described in the world. So take this sentence: "She poured water from the pitcher into the cup until it was full." What is "it" coreferent with? "Cup." [NOISE] The cup. Thank you. [LAUGHTER] Okay. So there, "it" refers to the cup. But then look at this example: "She poured water from the pitcher into the cup until it was empty." What does "it" refer to? The [OVERLAPPING]. The pitcher. [LAUGHTER] Okay. The crucial thing to notice is that these two sentences have identical syntactic structure, so Jerry Hobbs's algorithm can't possibly work for both of them. It's going to work for one but not the other; since it works left to right within a sentence, it's actually going to say "the pitcher" both times. So you can't get both answers right with Jerry Hobbs's algorithm, and Jerry believed, and still believes, that the only way to get these kinds of examples right is if you understand the world and actually know what's going on in it, so you can see what this is talking about. And there are lots of examples like this. Here's another very famous one: "The city council refused the women a permit because they feared violence." Who does that "they" refer to? [inaudible]. The city councilors. But here's another sentence: "The city council refused the women a permit because they advocated violence." Who does that "they" refer to? The women.
The women. Okay. So this time it refers to the women. And again, identical syntactic structure; it couldn't possibly be done right by the Hobbs algorithm. This particular pair of examples comes from Terry Winograd. Terry Winograd was originally NLP faculty; he got disillusioned with NLP because it wasn't making much progress, and ventured off into the land of HCI, which became his career. But in his early work he was interested in these phenomena and came up with this example, and it really stuck with people. These kinds of contrasts are referred to by other people as Winograd sentences or Winograd schemas. And this is something that has actually revived recently. Hector Levesque wrote a paper, I guess five years ago now, where he was trying to advocate for a return to doing more in the way of knowledge and world modeling in artificial intelligence, arguing that there are lots of problems you just can't solve with the kind of crude statistical methods our machine learning systems use, and that you really need to do more world understanding. He proposed that these Winograd schemas would be a good alternative to the Turing test as a way of measuring intelligence. And they're actually just coreference decisions, right? So there's a claim here that if you can do coreference right 100 percent of the time, you've solved artificial intelligence, in the sense that you can encode knowledge of the world into coreference problems. People have since tried to work on these Winograd schemas, and Levesque's feeling was that you just couldn't do them using the kinds of statistical factors people put into their machine learning systems. He was partly wrong about that, because subsequent work, both neural and otherwise, has shown that you can actually get a nontrivial distance on these kinds of problems: if you're collecting your neural language model over tens of billions of words, you might have seen some instances of things like city councils refusing permits for fear of violence, and you can predict the answer just from statistical patterning. But the question is how far you can actually get doing that without having a bit more of a world model. And that was what Hobbs was interested in way back in 1978. So he wrote: "The naive approach is quite good. Computationally speaking, it will be a long time before a semantically based algorithm is sophisticated enough to perform as well, and these results set a very high standard for any other approach to aim for." He was totally right about that; it really wasn't until the 2010s that anybody managed to produce an algorithm for pronominal anaphora resolution that outperformed the Hobbs algorithm, even though it was just what he called a naive algorithm, or what you might call a crude set of linguistic rules. But he also says: "Yet there is every reason to pursue a semantically based approach. The naive algorithm does not work. Anyone can think of examples where it fails. In these cases it not only fails; it gives no indication that it has failed, and offers no help in finding the real antecedent." So, food for thought there.
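To make the flavor of the naive approach concrete, here is a toy sketch; this is emphatically not the real Hobbs algorithm (which walks constituency parse trees), just its two key search-order heuristics over an invented data format:

```python
# A toy sketch of the search order the Hobbs algorithm encodes, not the
# real tree-walking algorithm. The two heuristics: within a sentence,
# prefer syntactically prominent (subject-first) NPs; across sentences,
# prefer nearby preceding ones. Sentence representation and gender
# features are invented for illustration.

# Each sentence: candidate NPs in order of syntactic prominence.
sentences = [
    [("Niall Ferguson", "male")],    # sentence 0
    [("Stephen Moss", "male")],      # sentence 1, which contains "him"
]

def resolve_pronoun(pron_gender, pron_sent_idx, same_clause_nps):
    # Within the pronoun's own sentence, a co-argument of the same clause
    # is ruled out ("Stephen Moss hated him" can't mean Moss hated Moss).
    for np, gender in sentences[pron_sent_idx]:
        if gender == pron_gender and np not in same_clause_nps:
            return np
    # Otherwise search preceding sentences, nearest first, left to right.
    for idx in range(pron_sent_idx - 1, -1, -1):
        for np, gender in sentences[idx]:
            if gender == pron_gender:
                return np
    return None

# "Stephen Moss hated him": "him" is in sentence 1, same clause as Moss.
print(resolve_pronoun("male", 1, {"Stephen Moss"}))  # -> Niall Ferguson
```

Of course, as the pitcher/cup and city council examples show, no amount of this kind of search-order cleverness can distinguish sentences with identical syntax.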
Notwithstanding that, I'm going to rush ahead at this point and tell you about some of the statistical and neural algorithms that have been used for coreference resolution. The simplest form of algorithm that's commonly used is what are called mention pair models. What we mean by mention pair models is: we take pairs of mentions and train a binary classifier that says coreferent or not coreferent. Then we proceed left to right through the text, and every time we get to a new mention, we evaluate our classifier with respect to every preceding mention and ask: are they coreferent? It says yes or no. So for the example ""I voted for Nader because he was most aligned with my values," she said": if we have a good classifier, it will say yes to the coreferent pairs, shown in blue, and no to the rest of them. And so at training time we have positive and negative examples. [NOISE] So if you have data marked for coreference, you have these positive and negative examples, and you can train a model. For training, the classifier outcome is one or zero, based on whether two mentions are coreferent: we have a coreference model that predicts the probability of two mentions being coreferent, and we train it with the same kind of cross-entropy loss we've used in other places, trying to learn a model that predicts coreference. Then at test time, when we have a piece of text with mentions, we run this classifier and it says yes or no with some probability, and if we pick a threshold like 0.5, we add certain coreference links. That looks pretty good, but we complete it off by saying: well, if A is coreferent to B and B is coreferent to C, then really A is also coreferent to C. So we do a transitive closure, and that gives us our clustering. Note that there's a certain danger in this: since the transitive closure is always adding clustering links, the danger is that we over-cluster. If we make a single mistake and link things that should be kept separate (for example, if we wrongly said "he" and "my" are coreferent), then everything in this discourse would collapse together into one cluster, and everything would be deemed coreferent. Okay. And something I haven't really emphasized, but which comes up, is that some mentions are coreferent to nothing. In the Shruthi Rao story, there was a park, which was just mentioned once in the text. In this form of algorithm, what we'd like the classifier to say is no, no, no, no, no for all of the decisions, so it's deemed coreferent to nothing, and it's just a singleton mention. Putting the test-time pieces together (threshold, then transitive closure) looks like the sketch below.
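Here is a minimal sketch of that inference procedure. The pairwise probabilities are made up; in a real system they would come from the trained classifier:

```python
# Mention-pair inference: threshold pairwise coreference probabilities,
# then take the transitive closure, here via union-find.

def transitive_closure(num_mentions, scored_pairs, threshold=0.5):
    parent = list(range(num_mentions))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for (i, j), p in scored_pairs.items():
        if p > threshold:
            parent[find(i)] = find(j)      # merge the two clusters
    clusters = {}
    for m in range(num_mentions):
        clusters.setdefault(find(m), []).append(m)
    return list(clusters.values())

# Mentions 0..3; the classifier links (0,1) and (1,2) but not (2,3).
scores = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.1}
print(transitive_closure(4, scores))       # -> [[0, 1, 2], [3]]
```

The over-clustering danger is visible in the code: a single wrong above-threshold pair merges two whole clusters, and nothing ever splits them back apart.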
This sort of works, but it hasn't proven to be the best way of doing coreference, and a lot of the reason why is that we have this phenomenon of anaphora, where we have textual dependence. A lot of the time, it seems, we don't really want to make all the coreference decisions; we'd like to make the anaphora decisions of textual dependence, saying that "he" is dependent on "Nader" and "my" is dependent on "I." Those are anaphora relationships, and we'd like to just choose one antecedent to express each such relationship. That's led people to a different approach, because the problem is that if we have a long document with lots of mentions, we don't want to be trying to find all the earlier coreferent mentions and say yes to each; we just want to pick a particular one. So for the "he" at the end here, its anaphoric relationship is back to "Nader," and you don't want to also be trying to say this "he" is coreferent back to all the other things earlier in the text. It's not something that's been explored much, but arguably this is a case, again, where you should be separating coreference from anaphora: for anaphors, it seems like the right way to think is that they have one prior thing in the text they're textually dependent on, whereas true coreference, when you just have various mentions in the text of Ralph Nader, this Ralph Nader that, Nader did that, isn't textually dependent, and those mentions should all just be grouped together as coreferent. But our models don't normally try to do some one way and some the other way; you choose one of the models. And to do it the other way, you do what's called mention ranking. For mention ranking, the idea is that for each mention, we try to find an antecedent that comes before it in the text that it is coreferent with, and we make a one-of-N decision. So when we see "she" here, we say, "Okay, what is this coreferent with?" and we pick one thing, even though there might be others in the text. If we do that, we then have a problem with singleton mentions: if for every mention we have to choose the preceding thing with which it's coreferent, the right answer might be that there is no such thing. So what we do is add one additional dummy mention right at the front, the NA mention, so one choice is always to say there isn't anything preceding. Effectively, when you get to "I," since it's the first real mention in the text, you necessarily choose NA as its antecedent. You then go on to "Nader" and you have two choices: it's coreferent to "I" or to NA. The right answer is NA, since Nader is a new entity being mentioned in the text. Then you get to "he," and now you have three choices, and the right thing is to say it's coreferent to "Nader." Okay. So for training our models this time, it's sort of the same, apart from this different one-of-N semantics. Previously, for our mention pair classifier, when classifying "I" and "she," and "my" and "she," both of them had to get a high score; now it's sufficient that just one of them gets a high score, because that's enough for what we need to do. So what we use is our good old softmax: for "she," we put a softmax over the antecedents, and our hope is simply that we get a high probability on one of the antecedents if it has an antecedent, or a high score on NA if it doesn't have any prior referent.
Then when we're doing classification at run time, we add only the highest-scoring coreference link. That means we train it slightly differently, because now what we want is a high coreference score with at least one of the gold antecedents. So one possible model is to maximize this probability: for the mentions that are coreferent in the gold-standard data, we want the sum of their assigned probabilities to be high. What that means is that it's sufficient if one of them gets a high probability; they don't all have to. Providing it's giving 0.9 probability, say, to one of the correct antecedents, we're getting a high score. We turn that into a loss function in the standard way: we take negative log probabilities to give us a loss, and then we want to minimize that loss. With the mention ranking model, test time is pretty much the same, but our softmax classifier just assigns one antecedent to each mention, and we hope those assignments give us the kind of clusters we want; there's no subsequent clustering phase. Now, there's a big part of this I've left out: I've just said, "Okay, we have this probability that mention i and mention j are coreferent," but I've said zero about how you determine whether they're coreferent or not. So, briefly, here's the classical way of doing it: you had a whole bunch of features and a feature-based statistical classifier that gave a score. These are the kinds of features you could use. There are strong features of person, number, and gender agreement: if you have a masculine or feminine pronoun, you want to find an appropriate antecedent for it. There are weaker semantic compatibility features: with "the mining conglomerate" and "the company," a conglomerate is sort of similar to a company, and you could assess that with something like word2vec similarity. There are syntactic constraints, which is what Hobbs's algorithm was all about: working out how likely different syntactic configurations are to mean coreference. And indeed, a lot of these feature-based systems used Hobbs's algorithm as a feature inside the system, where it was weighted and was normally a very strong feature for deciding coreference. There are lots of other things you can put in as features. Recency: "John went to a movie. Jack went as well. He was not busy." The most likely referent for "he" is the closer candidate, Jack. I've mentioned that subjects are more likely to be the antecedent: "John went to a movie with Jack. He was not busy." John seems the more likely antecedent; that's the subject preference. There's also a parallelism preference: "John went with Jack to a movie. Joe went with him to a bar." It's reasonable to think that "him" there is probably Jack, for parallelism reasons, as opposed to going with the subject. So there are various kinds of linguistic features and constraints, and you can throw these all into a statistical classifier; that's how coref systems of the 2000s were built. More recently, people have built neural systems; but first, here's the mention ranking objective we just described, sketched in code.
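This is a minimal sketch in PyTorch. The antecedent scores are made up; the point is the softmax over candidate antecedents (with slot 0 as the dummy NA antecedent) and the marginal log-likelihood loss, which only needs some gold antecedent to get high probability:

```python
# Mention-ranking loss: -log sum over gold antecedents of P(antecedent).
import torch

def mention_ranking_loss(antecedent_scores, gold_mask):
    # antecedent_scores: (num_candidates,) scores for [NA, m_1, ..., m_{i-1}]
    # gold_mask: (num_candidates,) 1.0 for gold antecedents (or NA if none)
    log_probs = torch.log_softmax(antecedent_scores, dim=0)
    # log of the summed probability mass on the gold antecedents; entries
    # with gold_mask == 0 become -inf and drop out of the logsumexp.
    return -torch.logsumexp(log_probs + torch.log(gold_mask), dim=0)

# Mention "he" with candidates [NA, I, Nader]; gold antecedent is Nader.
scores = torch.tensor([0.1, -1.2, 2.3], requires_grad=True)
loss = mention_ranking_loss(scores, torch.tensor([0.0, 0.0, 1.0]))
loss.backward()  # gradient pushes Nader's score up relative to the rest
print(loss.item())
```

The scorer that produces `antecedent_scores` is exactly what the feature-based and neural systems differ on; the loss is the same.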
For these neural systems, we're normally using the same kinds of embeddings. So we'll have a candidate antecedent that has an embedding, and a mention that has an embedding, something like averaged word vectors for the mention, and we feed these into a neural network that gives us our score. But what you find is that most of these systems, as well as having something like word vectors, also have additional features, and these features still capture some of the things that were in the feature-based statistical classifiers. So there will often be features reflecting things like: what grammatical relation does this mention have? Is it a subject? Is it an object? That's something you can put into the features of a mention. And closer things are more likely to be coreferent, so you might have additional features recording how far apart the mentions are, and those get thrown in as well. These kinds of features are still important, even in neural systems. So I'll skip ahead now and show you a bit about the current state of the art for coreference resolution. This is a system that was done at the University of Washington in 2017 by Kenton Lee and assorted other authors. The goal here was to produce an end-to-end coreference system: text in, coreferent mention clusters out. They wanted a more complex neural network that could do the whole thing end to end, so I'll go through the steps of it. The first step is that we just start off with words, and for each word we look up a word embedding; that's like other stuff we've seen. We also put in a character-level CNN, and the two of those concatenated give the representation of each token. That much should look familiar. After that, we run a deep bidirectional LSTM back and forth across the sentence; again, that should look familiar from things we've seen before. The next step gets us into doing something more special for coreference. What they wanted after that is a representation for spans. By span, we mean any contiguous subphrase of the sentence. So this is a span, this is a span, this is a span; "Electric said the Postal" is a span; every sub-sequence. I'll come back to that, but in principle you work this out for every sub-sequence: for every sub-sequence, they want to come up with a span representation. And this span representation is going to be in three parts that together represent one of these sub-sequences; each span gets its own representation, and the question is what it's made of. So we have this span representation, and it's in these three parts. First of all, we have a representation which just looks at the first word of the span and the last word of the span, according to the BiLSTM. So if we're looking at the span "the Postal Service," we take this BiLSTM state and this BiLSTM state and use them as part of the representation of the span. That's a good start, but then they actually do something a little tricky.
Kind of like when we were doing dependency parsing, the idea is that phrases have a headword: if it's "my younger sister," the headword is "sister," and if it's something like "the goat in the corner of the field," the headword is "goat." So they want a way of capturing headwords out of the text, and what they do for that is use attention. They say: we have this span, "the Postal Service," and we're going to use attention as a span-internal mechanism to approximate a head. So for this span, we're going to learn which words to pay how much attention to. We put attention weights on the different words, and then, in the usual attention way, make a weighted sum, having put the bidirectional LSTM states through a feed-forward network, and end up with a new weighted representation. The hope is that in this case most of the weight will go on the final word, "Service," which is the headword, though it will be somewhat distributed across the span. So that gives them a model of mentions that uses both ends and hopes to find the key word of the mention. Okay. That's two-thirds of the span representation, but they still have, over here, additional features. They want to be able to mark speakers and addressees; they want to mark other things like grammatical role; it's still useful to have some additional features. So that's the representation of each span, and then they want to say whether two spans are coreferent. They have one score for each of the two spans, which essentially says: is this a good mention? So having calculated these representations for each span, you run them through a fully connected feed-forward network, multiply by a weight vector, and that gives you an "is this a good mention?" score. Then, for "are they coreferent?", you take the pointwise Hadamard product of the two span representations, plus some extra features like how far apart they are in the text, put them through another neural network, and that gives you an "are these two spans coreferent?" score. All of these pieces give you an overall loss function. So the model is: we run these LSTMs, we take all spans, we score them, and since we know the gold answer for our coreference data, we want to be predicting things that are coreferent and have a loss based on the probability we calculate with these scores, as a mention ranking model using a softmax loss like before. If you put all of this together and train it end to end, you've got a whole coreference system that goes from words to coreference decisions. A sketch of the span-scoring machinery is below.
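This sketch loosely follows the Lee et al. (2017) span representation; dimensions, module names, and the exact placement of the feed-forward layers are simplifications invented for illustration:

```python
# A sketch of the three-part span representation: BiLSTM states at the
# span ends, an attention-weighted "soft head," and extra features
# (e.g. a span-width embedding). Sizes are invented.
import torch
import torch.nn as nn

hidden, word_dim, feat_dim = 200, 300, 20
bilstm = nn.LSTM(word_dim, hidden, bidirectional=True, batch_first=True)
head_scorer = nn.Linear(2 * hidden, 1)   # learned attention scorer

def span_representation(word_vecs, start, end, span_feats):
    states, _ = bilstm(word_vecs.unsqueeze(0))   # (1, T, 2*hidden)
    states = states.squeeze(0)
    # Part 1: BiLSTM states at the first and last word of the span.
    ends = torch.cat([states[start], states[end]])
    # Part 2: attention over the words inside the span, approximating
    # a headword via a weighted sum of their states.
    span_states = states[start:end + 1]
    attn = torch.softmax(head_scorer(span_states).squeeze(-1), dim=0)
    soft_head = attn @ span_states
    # Part 3: extra features, e.g. an embedding of the span width.
    return torch.cat([ends, soft_head, span_feats])

vecs = torch.randn(6, word_dim)          # token representations of a sentence
rep = span_representation(vecs, 1, 2, torch.zeros(feat_dim))
print(rep.shape)   # ends (4*hidden) + soft head (2*hidden) + feats
```

The mention score and pairwise coreference score are then feed-forward networks over one such representation and over a pair of them (with their Hadamard product and distance features), respectively.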
There's a huge problem with this, though: applied naively, the number of spans in a piece of text is the square of the length of the text in words, and since coreference decisions are between pairs of spans, you've got an algorithm that's O(T^4), where T is the length of the text in words. That's really, really computationally impractical. So at this point they say: well, actually, we do want to use our smarts a little and work out how likely different things are to be mentions. Effectively, they put in a lot of pruning to decide which spans are actually things they want to consider in their model. And at this point, in some sense, it's a little bit of a cheat, right? Because this pruning step is really: okay, we're going to stick in a mention detection module, just like a conventional system. But the prettiness of it is in the loss function that's defined: the loss function really is defined end to end, from just a sequence of tokens through to the mention ranking decisions. So it is an end-to-end model, even though in practice, to make it practical, you have to have something like a mention detector to get it to work. Okay. Pause for breath. There's one last kind of model. We've done the mention pair model and the mention ranking model, and for both of those you're just taking individual mentions and saying: here's another mention, what shall I do with it? Let's look at pairs of mentions and see if they're coreferent with each other. There's no real concept of entities as clusters of mentions; you're just making these one-off decisions between pairs of mentions, and somehow the entities, as clusters, just emerge as a consequence of those mention pair decisions. So there's been a long-standing feeling that that can't really be right: the right way to do coreference must be to do it as a clustering task. People often put this as wanting entities to be first-class citizens; we want to be putting together mentions into clusters that represent the entities. The obvious way to do that is a kind of bottom-up agglomerative clustering. You start off by saying each mention is its own singleton cluster, and then you make decisions to merge clusters, which initially means saying two mentions are coreferent, but as you go on, you're making decisions about whether two clusters are coreferent or not. So the idea is you have a piece of text, "Google recently blah blah blah, the company announced Google Plus, blah blah blah, the product features blah blah blah," and you have some mentions. You start off saying there are these four mentions that are each their own cluster. Then you make some decisions: we might decide these two clusters are coreferent and merge them into one cluster, and then that these two clusters are coreferent and merge them into one cluster. So we're progressively clustering. Then we look at cluster one and cluster two and say: no, we don't think those are coreferent, and therefore we keep them apart. Your coreference algorithm stops when there's nothing left to merge. And the reason people think this is the right thing to do is the feeling that if you build partial clusters like this, you'll be able to do a better job.
Because if I just say: here are two mentions, Google and Google Plus; should they be regarded as coreferent or not? Well, since you're smart human beings and know what Google is and what Google Plus is, of course you'll answer no, of course not. But if you're just a computer trying to make a decision, it's hard to know the right answer, because there are lots of other cases of shortenings where the right answer is that they are coreferent. If this had been Google and Google Corp, it would have been right to regard them as coreferent; or if it was something like Hillary Clinton and Hillary, it would have been right to regard them as coreferent. So it can often be hard to tell what's coreferent. But the hope is that if you've made some of the easy decisions first, so you've decided Google and "the company" are coreferent and Google Plus and "the product" are coreferent, then it should be much easier to say: well, product and company, those are definitely different things, and therefore we should keep these clusters separate. So that is the goal, and following that goal, here's the kind of model people build. This was actually a model that Kevin Clark, who is one of the PhD students here, and I did a couple of years ago. The idea was: we're initially going to consider mention pairs and build some kind of distributed mention-pair representation, which is kind of similar to what we were doing with the previous models. But then we go beyond that and come up with cluster representations, and then we can look at cluster-pair representations. And the hope is that by looking at these cluster representations, we can make better decisions about what to merge, or what to merge next. I have a few more slides that go through the Clark and Manning algorithm, but I also have just a few minutes left, so I think I'll skip the details. The main thing that's interesting here is the idea of clustering-based coreference algorithms, and why, in principle, they should give you extra oomph; that's the main useful thing to get through. Because what I want to make sure we cover in the last few minutes, and I've said nothing at all about it, is: how do you evaluate coreference resolution, and how well does it work? So let me skip ahead to that. If you look at coreference resolution papers, there are many metrics people have used to evaluate coreference, with a long alphabet soup of names: MUC, CEAF, LEA, B-CUBED, BLANC, and so on. Part of it is that if you look in the clustering literature, there are lots of ways people try to evaluate clustering, and essentially any of those metrics, and some other ones, can be ported over to coreference evaluation. Why it's kind of difficult is the situation where you have a gold standard that picks out certain clusters, the system picks out certain clusters, you get some result like this, and you have to decide how good it is. So I'm going to quickly show you one particular algorithm. The B-CUBED metric uses precision and recall and F-measure, like we've seen before, looking at each cluster identified by the system. A sketch of the computation is below, and then we can walk through an example.
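This sketch uses the standard per-mention formulation of B-CUBED (the slide's walkthrough states the same fractions at the cluster level); the example clusters are made up:

```python
# B-CUBED: for each mention, compare the system cluster containing it
# with the gold cluster containing it, then average over mentions.

def b_cubed(gold_clusters, sys_clusters):
    gold = {m: frozenset(c) for c in gold_clusters for m in c}
    sys_ = {m: frozenset(c) for c in sys_clusters for m in c}
    mentions = list(gold)  # assume the same mention set on both sides
    # Precision: how much of my system cluster truly belongs with me?
    p = sum(len(sys_[m] & gold[m]) / len(sys_[m]) for m in mentions) / len(mentions)
    # Recall: how much of my gold cluster did the system put with me?
    r = sum(len(sys_[m] & gold[m]) / len(gold[m]) for m in mentions) / len(mentions)
    return p, r, 2 * p * r / (p + r)

# Under-clustering: perfect precision, poor recall.
gold = [{1, 2, 3, 4}]
system = [{1, 2}, {3, 4}]
print(b_cubed(gold, system))   # -> (1.0, 0.5, ~0.667)
```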
In the example on the slide, one system cluster is four-fifths gold cluster one, so its precision is four-fifths. But there are six things in gold cluster one, so it only has a recall of four-sixths of that cluster. It does the same kind of calculation for the other cluster, then averages across the precisions and recalls, and comes up with an overall B-CUBED score. If you think about this from an algorithmic perspective, it's actually tricky, because I sort of said, "okay, this cluster is mainly gold cluster one, so use that as its reference," but that means you have to do a bipartite graph alignment between system clusters and gold clusters. So hidden inside this evaluation is actually an NP-complete problem. In practice, you can normally do it heuristically well enough that the evaluation method runs and works. Okay. The kind of thing to notice is: if you under-cluster, you automatically get great precision but bad recall; and if you over-cluster, you get great recall, because everything that should be in the same cluster is, but you get terrible precision. So what you want to be doing is balancing those two things. Okay. Last two minutes, just to give you some idea of performance. These are results on the OntoNotes dataset, which is about 3,000 documents, Chinese and English, labeled for coreference. The scores I'm reporting are actually an average over three metrics, one of which is the B-CUBED metric I just showed you. Here are some numbers. Lee et al. 2010 was the Stanford system: there was this shared-task evaluation of coreference systems, and we believed that Jerry Hobbs was still right, and you could do fine with rule-based coreference. So in 2010 we managed to beat all the machine learning systems with a rule-based coreference system, and we were proud of it; that's its performance right here. In subsequent years, people did start to do a bit better with machine learning systems, but as you see, not very much: of these 2012 systems, this one's somewhat better, this one really wasn't better, but there was a bit of progress. Starting in 2015, there started to be neural systems. Wiseman et al. was the first neural system; I vaguely mentioned the Clark and Manning system; and the numbers go up into the mid-sixties. And this is the Kenton Lee system with the end-to-end neural coreference, which on English is getting about 67. Something you'll notice from this is that the numbers aren't great: coreference is still far from a solved problem. So if you want to have a bit of fun, you can go out and try coreference systems for yourself. There's a Stanford one at the first link, and the one from Hugging Face is a good modern coreference system as well. If you try these out on some pieces of text, you'll notice they still get lots of things wrong. There's still more work to do, because this is just a harder language understanding task, [NOISE] which is just what Jerry Hobbs and Terry Winograd observed earlier. Okay, but I'll stop there for now. Thanks a lot. Oh yeah, I should have a reminder: invited speaker next Tuesday, and I'll be taking attendance for invited speakers.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_10_Question_Answering.txt
Okay. Hi, everyone. Let's get started again today. In today's lecture, what I'm going to do is talk about question answering over text. This is another of the big successes in using deep learning inside natural language processing, and it's also a technology that has some really obvious commercial uses, so it's an area that has attracted a lot of attention in the last couple of years. This is the overall plan: just a couple of reminders at the beginning about final project stuff, and then basically all of it is talking about question answering, starting with motivation and history, talking about the SQuAD data, a particular simple model (our Stanford Attentive Reader), then some more complex stuff up to the most modern systems. In a sense, this lecture serves a double purpose, because if you're going to do the default final project, well, it's about textual question answering, and this is your chance to learn something about that area and the kinds of models you might want to be thinking about and building. But the content of this lecture is pretty much in no way specifically tied to the default final project, apart from by subject matter; really, it's telling you about how people use neural nets to build question answering systems. Okay. So first, quickly, on the reminders. The mid-quarter survey: a huge number of people have actually filled this in already. We already had over a 60 percent fill-in rate, which, by the standards of people who do surveys, counts as a huge success already. But if you're not in that 60 percent, we'd still love to have your feedback, and now's the perfect time to do it. I also just wanted to have a note on custom final projects. In general, it's great to get feedback on custom final projects. There's a formal mechanism for that, which is the project proposal I mentioned last time. It's also great to chat to people informally about final projects, and I'm one of those people; I have been talking to lots of people about final projects and am very happy to do so. But there's a problem: there's only one of me. So I do also encourage you to realize that, among the various TAs, lots of them have had experience with different deep learning projects, and in particular, on the office hours page there's a table that's like this, but which you can actually read if you look at it on your own laptop, that talks about the experience of different TAs. Many of them have experience in different areas, and many of them are also good people to talk to about final projects. Okay. So for the default final project, the textual question answering: draft materials for that are up today, right now, on the website actually. We're calling them draft because we think there are still probably a few things that will get changed over the next week, so don't regard them as completely final; in terms of the code, it's sort of 90 percent final. But in terms of deciding whether you're going to do a custom final project or the default final project, and working out what you're putting into your project proposal, it should be well more than what you need at this point. Okay.
The one other final bit I wanted to mention, which I didn't get to last time: for the final projects, regardless of which kind you're doing, part of it is doing some experiments, doing stuff with data and code, and getting some numbers and things like that. But I really encourage people to also remember that an important part of the final project is writing the final project report. This is no different from any research project of the kind that students do for conferences or journals: you commonly spend months working over your code and experiments, but in most cases, the main evaluation of your work comes from people reading a written paper version of it. So it's really important that the paper version reflects the work you did and the interesting ideas you came up with, explains them well, and presents your experiments, and all of those things. So we encourage you to do a good job of writing up your projects. Here is a rough outline of what a typical project write-up is likely to look like. There isn't really one size that completely fits all, because depending on what you've done, different things might be appropriate, but typically on the first page you'll have an abstract and the introduction to the paper. You'll spend some time talking about related prior work; you'll talk for a while about what kind of models you built; there's probably some discussion of what data you're using; then experiments, commonly with some tables and figures about the things you're doing; more tables and figures presenting the results and how well your systems work; it's great to have some error analysis to see what kinds of things you got right and wrong; and then maybe at the end there are plans for the future, conclusions, or something like that. Okay. That's it for my extra administrative reminders. Are there any questions on final projects that people are dying to know? [NOISE] Okay. Good luck. I just meant to say good luck. Yeah, good luck with your final projects. [LAUGHTER] Okay. So now, moving into question answering. Question answering is a very direct application of something human beings want to do. Well, maybe human beings don't in general want to know this; here's my query, "Who was Australia's third prime minister?" Maybe that's not really the kind of thing you're going to put into your queries. Maybe you'd query "Who was the lead singer of Big Thief?" or something like that, I don't know. But a large percentage of what people do on the web [NOISE] is actually asking for answers to questions. And if I put this query into Google, it actually just works: it tells me the answer is John Christian Watson. So that's question answering working in the real world. If you try different kinds of questions in Google, you'll find that some of them work and lots of them don't, and when they don't work, you just get the usual information retrieval web search results. There is one fine point I wanted to mention down here: another thing Google has is the Google Knowledge Graph, which is a structured graph representation of knowledge.
And some kinds of questions are being answered from that structured knowledge representation. Quite a lot of the time, for things like movies, it's coming from that structured graph, if you're asking "Who's the director of a movie?" or something like that. But this answer isn't coming from that. This answer is genuinely the kind of stuff we're going to talk about today: textual question answering from a web page, where Google's question answering system has extracted the answer and is sticking it up there. If you want to explore these things: when you get one of these boxes, down where I've cut it off there's a little bit of gray that says, "How did I get this result?", and if you click on that, it actually tells you what source it's getting the answer from, and you can see whether it's coming from the textual question answering system or from something like the Knowledge Graph. Okay. In general, the motivation for question answering is that these days there are these massive collections of full-text documents, i.e., there's the web, so there are billions of documents of information. Traditionally, when people first started thinking about search and information retrieval as a field, nothing of that kind of quantity and size existed. When people first started building search systems, it was unthinkable to index whole documents, because no one had hard disks big enough in those days; really, they were indexing titles, or titles and abstracts, or something like that. So it seemed perfectly adequate in those days to say, "Okay, we're just going to give you your results as a list of documents," because the documents were only a hundred words long. But that's clearly not the case now, when we have the ten-minute-read Medium post which might contain the answer to a question. So there's this need to say: well, can we have systems that will just give us answers to questions? And a lot of the recent changes in technology have hugely underlined that need. Returning documents works okay if you're sitting at your laptop, but it works really terribly if you're on your phone, and even more terribly if you're trying to work with speech on a digital assistant device, something like an Alexa. So we really want to be able to produce systems that can give the answers to people's questions. Typically, doing that is factored into two parts. The first part is that we still do information retrieval: we use normally quite standard information retrieval techniques to find documents that are quite likely to contain an answer. The reason this is normally done with quite traditional techniques is that the traditional techniques are extremely scalable over billions of documents, whereas current neural systems really aren't, though that's an area where research is ongoing. But then, once we have some candidate likely documents, we want to find out: do they contain an answer, and if so, what is it? At that point, we have a document or a paragraph, and we're asking, "Can we answer this question from here?", and that problem is often referred to as the reading comprehension problem. That's really what I'm going to focus on today, and the sketch below shows the overall two-stage factoring.
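Here is a toy sketch of that retrieve-then-read factoring. Both components are stand-ins invented for illustration: the retriever is a crude term-overlap scorer standing in for a real IR engine (an inverted index with something like BM25), and the reader is a stub standing in for a neural reading comprehension model:

```python
# Two-stage open-domain QA: a scalable retriever narrows the collection
# down to a few candidate documents; a reading comprehension model then
# extracts the answer from each candidate.
from collections import Counter

docs = [
    "John Christian Watson was the third Prime Minister of Australia.",
    "Miami is a city in Florida.",
]

def retrieve(question, docs, k=1):
    # Stand-in for real IR: score documents by raw query-term overlap.
    q_terms = Counter(question.lower().split())
    ranked = sorted(docs, key=lambda d: -sum(q_terms[w] for w in d.lower().split()))
    return ranked[:k]

def read(question, passage):
    # Stand-in for a neural reader that would predict an answer span;
    # here we just return the whole passage.
    return passage

question = "Who was Australia's third prime minister?"
for passage in retrieve(question, docs):
    print(read(question, passage))
```

The rest of the lecture is about the second stage: what a real `read` function looks like.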
You can trace it back to the early days of artificial intelligence and NLP. Back in the 70's, a lot of NLP work was trying to do Reading Comprehension. One of the famous strands of that was Roger Schank — a famous early NLP person, though not a terribly nice man, I don't think, actually — and the Yale School of AI was a very well-known NLP approach that was really very focused on Reading Comprehension. But I think it was just too early in any case; it sort of died out, and nothing much came of it. But then, right before the turn of the millennium, Lynette Hirschman revived this idea and said, well, maybe a good challenge would be to find the kind of Reading Comprehension questions that elementary school kids do, and see if we could get computers to do that. Some people tried that with fairly simple methods, which only worked mediocrely. Then, somewhat after that, Chris Burges, a guy at Microsoft Research — not really an NLP person at all, but a machine learning person — got it into his head that a big problem that should be worked on is Machine Comprehension, and he suggested you could codify it like this. And this is a particularly clean codification that has lived on and that we'll look at more today: a machine comprehends a passage of text if, for any question regarding that text that can be answered correctly by a majority of native speakers, that machine can provide a string which those speakers would agree both answers that question and does not contain information irrelevant to that question. He proposed this as a challenge problem for artificial intelligence and set about collecting a corpus, the MCTest corpus, which was meant to be a simple Reading Comprehension challenge. So they collected stories, which were meant to be kids' stories: "Alyssa got to the beach after a long trip. She's from Charlotte. She traveled from Atlanta. She's now in Miami." Pretty easy stuff. And then there were questions — "Why did Alyssa go to Miami?" — and the answer, "To visit some friends": a string coming from the passage that answers the question. MCTest is a corpus of about 600 such stories, and that challenge existed, and a few people worked on it, but it never really went very far either for the next couple of years. What really changed things was that in 2015, and then with more work in 2016, deep learning people got interested in this idea of: could we perhaps build neural question answering systems? And it seemed like, if you wanted to do that, something like MCTest could only be a test set, and the way to make progress would be to do what had been done in other domains: hand-build a large training set of passages, questions, and answers, in such a way that you'd be able to train neural networks using the kind of supervised learning techniques we've concentrated on so far in this class. And indeed, that kind of supervised neural network learning is actually the successful stuff that powers nearly all the applications of deep learning, not only in NLP but also in other fields like vision.
So the first such dataset was built by people at DeepMind over CNN and Daily Mail news stories. But then the next year, Pranav Rajpurkar, a Stanford PhD student working with Percy Liang and a couple of other students, produced the SQuAD dataset, which was a much better designed dataset and proved to be much more successful at driving this forward. Following along from that, other people started to produce lots of other question answering datasets, many of which have interesting advantages and disadvantages of their own, including MS MARCO, TriviaQA, RACE, and so on. But for today's class I'm gonna concentrate on SQuAD, because SQuAD is the one that has been by far the most widely used, and because it was just a well-constructed, clean dataset that proved a profitable one for people to work with. Okay, so that was reading comprehension. I'll also quickly tell you the history of open-domain question answering. The difference here, for the field of Open-domain Question Answering, is that we're saying: okay, there's an encyclopedia or there's a web crawl; I'm just going to ask a question — can you answer it? So it's this bigger task of question answering. And that was something that, again, was thought about very early on. There's an early CACM paper by Simmons which explores how you could do question answering as textual question answering, and he has the idea that you're gonna dependency parse the question, dependency parse sentences of the text, and then do tree matching over the dependency parses to get out the answers. In some sense that prefigured work that people were attempting to do 35 years later. Getting a bit more modern: Julian Kupiec, who was working at Xerox PARC at the time, came up with a system called MURAX. At this stage, in the 90s, the first digitally available encyclopedias were starting to appear, so he was using Grolier's Encyclopedia, and he set about trying to build a system that could answer questions over that encyclopedia using, in general, fairly shallow linguistic processing methods — i.e., regular expressions — after having done information retrieval search over it. That started to evoke more interest from other people, and so in 1999 the US National Institute of Standards and Technology instituted a TREC question answering track, where the idea was that there was a large collection of newswire documents and you'd be asked to answer questions over them, and lots of people started to build question answering systems. Indeed, in some sense it was this competition where people at IBM started working on textual question answering, and then, about a decade later, IBM rejigged things into the sexier format of "let's build a Jeopardy contestant" rather than "let's answer questions from the news", and that led to their DeepQA system in 2011. I presume quite a few of you saw the IBM Jeopardy matches? Yeah, some of you. Okay. So they were able to successfully build a question answering system that could compete at Jeopardy and win.
And, you know, like a lot of these demonstrations of technological success, there are things you can quibble about in the way it was set up — really, the computer just had a speed advantage versus the human beings, who had to buzz in to answer the question. But nevertheless, fundamentally, the textual question answering had to work: this was a system that was answering questions mainly based on textual passages, and it had to be able to find the answers to those questions correctly for the system to work. Then, more recently again — and really the first piece of work that did this with a neural system was work done by a Stanford PhD student that I'll get to later — came the idea of: well, could we replace traditional complex question answering systems with a neural reading comprehension system? And that's proved to be very successful. To explain that a little more: if you look at the kinds of systems that were built for TREC question answering, they were very complex, multi-part systems. And if you then look at something like IBM's DeepQA system, it was sort of like this times 10, because it both had very complex pipelines like this and it ensembled together about six different components in every place, and then did classifier combination on top of them. This one here is roughly a 2003-era question answering system, and this is the kind of thing it did. When there was a question, it parsed the question with a parser, kind of like the dependency parsers we saw. It applied some handwritten semantic normalization rules to try to get the question into a better semantic form. It then had a question type classifier, which tried to work out what kind of semantic type the question is looking for — a person name, or a country name, or a temperature, or something like that. It would then run an information retrieval system over the document collection, which would find paragraphs likely to contain the answers, and it would have a method of ranking those paragraph choices to see which ones were most likely to have the answers. It would then run Named Entity Recognition on those passages to find the entities in them — these systems depended strongly on fine-grained entity matching, because the system could then look for an entity that corresponded to the question type. Then, once it had candidate entities, it had to try to determine whether those entities did or didn't answer the question. This is the system from LCC, by Sanda Harabagiu and Dan Moldovan, and they had some quite interesting stuff here: a kind of loose theorem prover that would try to prove that the semantic form of a piece of text gave an answer to the question. So that was kind of cool stuff, with an Axiomatic Knowledge Base, and eventually out would come an answer. So, something I do just want to emphasize: sometimes with these deep learning courses you get these days, the impression you get is that absolutely nothing worked before 2014, when we got to deep learning, and that's not actually true. These kinds of factoid question answering systems, within a certain domain, actually worked rather well.
I started saying the words "Factoid Question Answering", so let me explain that term. People, at least in NLP, use "Factoid Question Answering" to mean the case where your answer is a named entity. So it's something like: what year was Elvis Presley born, or what is the name of Beyonce's husband, or which state produces the most pork, or something — I don't know. Anything where the answer is some clear semantic type of entity. Within the space of those kinds of questions — which actually make up a significant part of the questions you get in web search, right? Lots of web search is just "who was the star of this movie" or "what year was somebody born"; there are zillions of those all the time — these systems actually did work quite well: they could get about 70 percent of those questions right, which wasn't bad at all, though they really didn't extend out to other kinds of stuff beyond that. But whatever virtues they had, they were extremely complex systems that people spent years putting together, with many components and a huge amount of hand-built stuff. Most of the components were built quite separately and then tied together, and you just hoped it worked well when put together in composite. We can contrast that with what we see later for neural network-style systems. Okay. So let me now say some more about the Stanford Question Answering Dataset, or SQuAD, that I mentioned a little bit ago — this is also the data for the default final project. What SQuAD has is: questions in SQuAD have a passage, which is a paragraph from Wikipedia, and then there is a question — here it's "Which team won Super Bowl 50?" — and the goal of the system is to come up with the answer to the question. Human reading comprehension: what is the answer to the question? Broncos. Broncos. [LAUGHTER] Okay, yeah, so that's the answer to the question. By construction, for SQuAD, the answer to a question is always a sub-sequence of words from the passage, which normally ends up being referred to as a span. So that's the only kind of question you can have: you can't have counting questions, or yes/no questions, or anything like that; you can just pick out a sub-sequence. In the first version they created about 100,000 examples. There are a bunch of questions about each passage — I think maybe about five questions per passage — and about 20,000 different bits of Wikipedia were used. This must-be-a-span form is often referred to as extractive question answering. Okay. Here's just one more example that can give you some more sense of what's there, and it illustrates a couple of other factors. Even the previous one wasn't completely obvious as to what your answer should be, because maybe you could say the answer should just have been "Broncos", or you could have said it was "Denver Broncos". In general, even if you're answering with a span, there's gonna be variation in how long a span you choose.
So what they did — and this was done on Mechanical Turk, gathering the data, building questions, and getting answers — is that they got answers from three different people. So here's this question: "Along with non-governmental and non-state schools, what is another name for private schools?" Three human beings were asked to answer it based on this passage. One said "independent", and two said "independent schools". For this next one, all three people gave the same answer; for this one, again, you get two different answers. So they sample three answers, and basically you can be correct if you match any of those answers. That at least gives you a bit of robustness to variation in human answers. Okay, and that starts me into the topic of evaluation. These slides are entitled SQuAD version 1.1 because, in five minutes' time, I'm gonna tell you about SQuAD version 2, which adds a bit more stuff into it; but we'll get 1.1 straight first. All right. So there were three answers collected, and they suggested two evaluation metrics. The first one is exact match: you return a span; if the span is one of the three gold answers, you get one point, and if it's not one of those three, you get zero for that question; your accuracy is just the percent correct. Extremely simple. The second metric — and actually the one that was favored as the primary metric — was an F1 metric. What you do for this F1 metric is match at the word level against the different answers. You treat the system span and each gold answer as a bag of words, and then you work out a precision, which is the percent of words in the system's answer that are actually in a gold span, and a recall, which is the percent of words in the gold span that are in the system's span. Then you calculate the harmonic mean of those two numbers — the harmonic mean is a very conservative average, pulled toward the lower of the two numbers — and that gives you a score. For each question, you say its score is the maximum F1 over the three different answers collected from human beings, and then, for the whole dataset, you average those F1 scores across questions, and that's your final F1 result. That's the more complicated metric to state, and we provide evaluation code for you that computes it. But F1 turns out to be the more reliable and better measure, because if you use exact match — even though there's a bit of robustness from having three people's answers — three is not a very large sample, so there's a bit of luck in whether you produce exactly the same span some human being chose, whereas you're going to get a reasonable F1 score even if your boundaries are off by a little. So the F1 metric is more reliable and avoids various kinds of artifacts in how big or small an answer human beings tend to choose in some circumstances, and that's what's used as the primary metric that people are scored on in the leaderboards. One final detail: both metrics ignore punctuation and the English articles a, an, and the. A small sketch of this scoring appears below.
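Here's a minimal sketch of those two metrics; the simplified `normalize` below stands in for the official evaluation script's normalization (lowercasing, stripping punctuation and articles), and the example answers echo the private-schools question above.

```python
import re
from collections import Counter

def normalize(s):
    s = s.lower()
    s = re.sub(r"[^\w\s]", "", s)          # drop punctuation
    s = re.sub(r"\b(a|an|the)\b", " ", s)  # drop English articles
    return s.split()

def exact_match(prediction, golds):
    # 1 if the normalized prediction equals any gold answer, else 0.
    return max(int(normalize(prediction) == normalize(g)) for g in golds)

def f1(prediction, golds):
    # Bag-of-words F1, taking the max over the gold answers.
    best = 0.0
    pred_bag = Counter(normalize(prediction))
    for g in golds:
        gold_bag = Counter(normalize(g))
        overlap = sum((pred_bag & gold_bag).values())
        if overlap == 0:
            continue
        precision = overlap / sum(pred_bag.values())
        recall = overlap / sum(gold_bag.values())
        best = max(best, 2 * precision * recall / (precision + recall))
    return best

golds = ["independent", "independent schools", "independent schools"]
print(exact_match("independent school", golds))  # 0: no exact match
print(f1("independent school", golds))           # ~0.67: partial credit
```

Okay, so how did things work out? For SQuAD version 1.1, a long time ago, at the end of 2016, this is how the leaderboard looked.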
This is the bottom of the leaderboard at that point in time, which lets me show you a couple of things. Down at the bottom of the leaderboard, they tested how well human beings did at answering these questions — because human beings aren't perfect at answering questions either — and the human performance they measured had an F1 score of 91.2. I'll come back to that in a minute. When they built the dataset, they also built a logistic regression baseline, which was a conventional NLP system: they dependency parsed the question and the sentences of the passage, looked for dependency link matches — a word at both ends with a dependency relation in between — and counted those matches as pointing to a likely answer. So it was a fairly competently built traditional NLP system — not as complex as, but in the same vein as, that early question answering system I mentioned — and it got an F1 of about 51. Not hopeless, but not that great compared to human beings. Very shortly after that, people started building neural network systems to try to do better on this dataset. Some of the first people to do this quite successfully were from Singapore Management University — maybe not the first place you would have thought of — but they were really the first people to show that, yes, you could build an end-to-end trained neural network for this task and do rather better. They got up to 67 F1, then had a second system that got 70, and things went on from there, so that even by the end of 2016 there started to be systems that really worked rather well on this task. Here, at that time, was the top of the leaderboard. I'll talk later about this BiDAF system from AI2, the Allen Institute for Artificial Intelligence, and the University of Washington; it was getting to 77 as a single system. And, as in just about all of machine learning, people pretty soon noticed that if you made an ensemble of identically structured systems you could push the number higher — if you ensembled those, you could get another four points or so and get up to about 81 F1. And this was roughly the situation when, in winter 2017, the CS224N class first used SQuAD version 1 as a default final project. At that point, the best students got almost to the top of this leaderboard: our best CS224N final project in winter 2017 made it into the equivalent of fourth place, with 77.5 as their score. So that was really rather cool. But that's a couple of years ago, and since then people have been building bigger and bigger and more and more complex systems. So essentially, you could say that SQuAD version 1 is basically solved. The very best systems are now getting F1 scores in the low 90s, and in particular you can see that the best couple of systems have higher F1s — and well higher exact matches — than what was measured for human beings. But, like a lot of the claims of deep learning outperforming human beings, there are some asterisks you can put after that.
In particular, for this dataset, the way they measured human performance was a little bit unfair, because they only collected three human beings' answers. So, to judge human performance, each of those humans was being scored against only two other humans, which means they had only two chances to match instead of three. So there's actually a systematic underscoring of the human performance. But whatever — systems got very good at doing this. The next step was to introduce the SQuAD version 2 task. Many people felt that a defect of SQuAD version 1 was that all questions had answers, so you just had to find the answer in the paragraph, and the task turned into a kind of ranking task: you just had to work out what seemed the most likely answer and return it, without really having any idea whether it actually answered the question or not. So for SQuAD version 2, in the dev and test sets, half of the questions have answers and half of the questions just don't have an answer in the passage (the training data has a slightly different distribution). The way the scoring works is that no-answer counts as a kind of special token: if the right response is no-answer and you say no-answer, you get a score of one on both exact match and F1, and if you don't, you get a score of zero. The simplest way of approaching SQuAD 2.0 would be to say: rather than always returning my system's best match, I'll use some kind of threshold, and only if the score is above the threshold do I count it as an answer. You could do more sophisticated things. Another area we've worked on quite a bit at Stanford is the natural language inference task, which I'll talk about later in the course — that's about deciding whether one piece of text follows from another, and it's a way you can try to check whether a piece of text actually gives a justification for an answer to the question. But at any rate, trying to decide whether you've actually got an answer or not is a quite difficult problem in many cases. A toy sketch of the threshold idea follows, and then an example.
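Here's a toy sketch of that threshold strategy; the candidate spans, scores, and threshold value are all made-up illustrations (in practice you'd tune the threshold on the dev set):

```python
# Keep the best span only if its score clears a threshold; else say "no answer".
NO_ANSWER = ""

def predict(span_scores, threshold=0.5):
    # span_scores: list of (span_text, model_score) candidates for one question
    best_span, best_score = max(span_scores, key=lambda x: x[1])
    return best_span if best_score >= threshold else NO_ANSWER

print(predict([("1206", 0.9), ("1234", 0.3)]))  # confident -> return the span
print(predict([("1234", 0.2), ("1251", 0.1)]))  # low score -> no answer
```

So here's an example from SQuAD 2.0. "Genghis Khan united the Mongol and Turkic tribes of the steppes and became Great Khan in 1206. He and his successors expanded the Mongol Empire across Asia," blah, blah, blah. And the question is: "When did Genghis Khan kill Great Khan?" And the answer is that there isn't one — Genghis Khan was himself the person named Great Khan, and he didn't kill a Great Khan; it's just not a question with an answer. But this is precisely what happens with systems: even though they get high scores in terms of points, they don't actually understand human language that well. The system looks at "When did Genghis Khan kill Great Khan?" — well, this is looking for a date, and there are some obvious dates in this passage: 1206, 1234, 1251. And there's "kill", and "kill" looks a little bit similar to "destroyed" — I can see the word "destroyed", so that probably kind of matches. And we're talking about Genghis Khan, and I can see "Genghis" and "Khan" in this passage. So it puts that together and says 1234 is the answer, when that isn't the answer at all.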
And that's actually pretty typical of the behavior of these systems. On the one hand, they work great; on the other hand, they don't actually understand that much, and effectively asking whether a question is actually answered in the passage is a way of revealing the extent to which these models do or don't understand what's going on. Okay. So, at the time they built SQuAD version 2.0, they took some of the existing SQuAD version 1 systems and modified them in a very simple way — putting in a threshold score on how good the final match was deemed to be — and asked, well, how well do you do on SQuAD 2.0? And the kinds of systems that we saw doing well before now didn't do that well. Something like the BiDAF system that we mentioned before was now scoring about 62 F1, so this hugely lowered its performance, reflecting the limits of its understanding. But it turned out that this problem didn't prove to be quite as difficult as the dataset authors maybe thought either, because here we are now in February 2019, and if you look at the top of the leaderboard, we're getting close again to the point where the best systems are almost as good as human beings. The current top-rated system, you can see, is getting 87.6 F1, which is less than two points behind the human beings. For SQuAD version 2 they also corrected the scoring of human beings, so it's a fairer evaluation this time. So there's still a bit of a gap, but the systems are doing really well. And the interesting thing is: on the one hand, these systems are impressively good. You can go on the SQuAD website and look at the output of several of the good systems, and you can see that there are just a ton of things they get right — they're absolutely not bad systems; you have to be a good system to be getting five out of six of the questions right. But on the other hand, they still make quite elementary natural language understanding errors. Here's an example of one of those. "The Yuan dynasty is considered both a successor to the Mongol Empire and an imperial Chinese dynasty. It was the khanate ruled by the successors of Mongke Khan after the division of the Mongol Empire. In official Chinese histories, the Yuan dynasty bore the Mandate of Heaven, following the Song dynasty and preceding the Ming dynasty." And the question is: what dynasty came before the Yuan? That's a pretty easy question, I'd hope, for a human being — everyone can answer that question? Okay, yeah: the passage says the Yuan dynasty followed the Song dynasty and preceded the Ming dynasty. But actually, the leading Google BERT model says that it was the Ming dynasty that came before the Yuan dynasty, which is elementarily wrong. That reveals the same kind of thing: it's not really understanding everything; it's still doing a sort of matching. Okay. So this SQuAD dataset has been useful and good, but it still has some major limitations, and I thought I'd mention a few of them so you're aware of the issues. One of them I've already mentioned: you're in a space where all answers are a span from the passage.
And that just limits the kinds of questions you can ask and the kinds of difficult situations there can be. There can't be yes/no questions, counting questions, or even any of the more difficult implicit questions. If you think back to when you were in middle school and did reading comprehension, it wasn't typically the case that you were asked questions that were just stated explicitly in the text — you know, the text says "Sue is visiting her mother in Miami" and the question is "Who is visiting in Miami?". You were normally asked questions more like: Sue is going to a job interview this morning; it's a really important job interview for her future; at breakfast she starts buttering both sides of her piece of toast — and you're asked, why is Sue buttering both sides of her piece of toast? And you're meant to be able to answer, "She's distracted by her important job interview coming up later in the day", which isn't something you can answer by just picking out a sub-span. A second problem, which is actually a bigger one, is the way SQuAD was constructed — for ease, to not be too expensive, and for various other reasons. Paragraphs of Wikipedia were selected, and then Mechanical Turkers were hired and told, "Come up with some questions that can be answered by this passage" (in version 1.1), and then in version 2 they were additionally told, "Also come up with some questions that look like they're related to this passage but aren't actually answered in it." But in all cases, people were coming up with the questions while staring at the passage, and if you do that, your questions end up overlapping strongly with the passage, both in the words that are used and even in the syntactic structures, which tend to match the syntactic structures of the passage. And that makes question answering unnaturally easy. What happens in the real world is that human beings think up questions and type something into a search engine, and the way they type it is completely distinct from the way something might be worded on a website. They might ask, "In what year did the price of hard disks drop below a dollar a megabyte?", and the webpage will say something like "The cost of hard disks has been dropping for many years; in 2004, prices eventually crossed the dollar-a-megabyte barrier", or whatever it was — a quite different discussion of the ideas. That kind of matching is much harder, and that's one of the things that other datasets have tried to do differently. Another limitation is that these questions and answers are very much "find the sentence that addresses the fact, match your question to the sentence, return the right thing" — there's nothing more difficult that involves multi-sentence, combine-facts-together styles of inference. The limit of cross-sentence stuff there is pretty much resolving coreference — seeing a "he" or "she" or "it" and working out who that refers to earlier in the text — which is something we'll talk about later in the course.
Nevertheless, despite all those disadvantages, SQuAD proved to be well targeted in terms of its level of difficulty, a well-structured, clean dataset, and it's just been everybody's favorite question answering dataset. It also seems that, for people who work in industry and want to build a question answering system, starting off by training a model on SQuAD actually works pretty well. It's not everything you want to do — you definitely want relevant in-domain data and to be using that as well — but it turns out to be a quite useful starting point. Okay. So what I want to show you now is a concrete, simple neural question answering system. This is the model that was built by Danqi Chen, who I guess was a predecessor of Abby's, since she was a preceding head TA for CS224N. This system gets called the Stanford Attentive Reader now. It's essentially the simplest neural question answering system that works pretty well, so it's not a bad thing to have in mind as a baseline. It's not the current state of the art by any means, but if you're wondering what's the simplest thing you can build that basically works decently as a question answering system, this is basically it. Okay, so how does it work? First of all, we have a question: "Which team won Super Bowl 50?" And what we want to do is build a representation of the question as a vector. The way we do that is this: for each word in the question, we look up a word embedding — in particular, it used GloVe 300-dimensional word embeddings. We then run an LSTM forward through the question, and, like Abby talked about, we actually make it a bi-LSTM: we run a second LSTM backwards through the question. Then we grab the end state of both LSTMs and simply concatenate them together into a vector of dimension 2d, if the hidden states of the LSTMs are dimension d, and we say that is the representation of the question. Okay. Once we have that, we start looking at the passage. For the start of dealing with the passage, we do the same thing: we look up a word vector for every word in the passage and run a bidirectional LSTM — now drawn a bit more compactly — across the passage. But then we have to do a little more work, because we actually have to find the answer in the passage. What we're going to do is use the question representation to work out where the answer is, using attention. This is a different use of attention from machine translation — the attention equations are still exactly the same, but we've now got this one question vector that we're trying to match against to return the answer. So what we do is work out an attention score between each word's bi-LSTM representation and the question. And that's done with the bilinear attention that Abby briefly discussed and that we'll see more of today: we've got the question vector, and the vector for a particular position in the passage — the two concatenated LSTM hidden states, so they're the same dimensionality — and we have this intervening learned W matrix.
We work out that quantity for each position, and then we put the scores through a softmax, which gives us probabilities over the different words in the passage. Those are our attention weights, and at that point we have attention weights for the different positions in the passage, and we just declare that that is where the answer starts. Then, to get the end of the answer, we do exactly the same thing again, except that we train a different W matrix here, and have that predict the end token. There's something a little bit subtle here. We're asking the model to predict the start and the end of the answer, and you might think: wait a minute, surely we need to look at the middle of the answer as well, because maybe the most indicative words are actually in the middle of the answer. But really, what we're implicitly telling the model is: when you're training, if there's stuff in the middle that's useful, it's the bi-LSTM's job to push it to the extremes of the span, so that this simple bilinear attention can get a big score at the start of the span. You might also think there's something funny in that this equation and that equation are exactly the same — so how come one of them is meant to pick out the beginning and the other the end? Again, we're not doing anything to impose that. We're just saying: neural network, it is your job to learn. You have to learn a matrix here and a different one over there, so that one of them picks out parts of the representation that indicate starts of answer spans, and the other, ends of answer spans. That again pressures the neural network to self-organize in such a way that some parts of the hidden representation will be good at indicating where spans start — maybe carried backwards by the backward LSTM — and some parts will be good at indicating where spans end, and the W matrices will be able to pick out those parts of the representation. But yeah, that's the system; a minimal sketch of that span prediction follows below.
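Here's a minimal PyTorch sketch of the bilinear start/end prediction, assuming we already have the question vector and the passage BiLSTM states; all sizes are illustrative, and a real implementation would also mask padding positions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearSpanPredictor(nn.Module):
    def __init__(self, hidden):  # hidden = 2d (concatenated BiLSTM states)
        super().__init__()
        self.W_start = nn.Linear(hidden, hidden, bias=False)
        self.W_end = nn.Linear(hidden, hidden, bias=False)

    def forward(self, q, P):
        # q: (batch, hidden) question vector; P: (batch, seq_len, hidden)
        # score_i = p_i^T W q, then softmax over passage positions
        start_logits = torch.bmm(P, self.W_start(q).unsqueeze(2)).squeeze(2)
        end_logits = torch.bmm(P, self.W_end(q).unsqueeze(2)).squeeze(2)
        return start_logits, end_logits

model = BilinearSpanPredictor(hidden=256)
q = torch.randn(4, 256)        # 4 questions
P = torch.randn(4, 100, 256)   # 100-token passages
start_logits, end_logits = model(q, P)

# Trained end to end: cross-entropy on the gold start and end positions.
gold_start = torch.randint(0, 100, (4,))
gold_end = torch.randint(0, 100, (4,))
loss = F.cross_entropy(start_logits, gold_start) + F.cross_entropy(end_logits, gold_end)
```

So this is the basic Stanford Attentive Reader model, and it's just no more complex than that. The interesting thing is that this very simple model actually works really nicely. So, going back in time again: this was the February 2017 SQuAD version 1 leaderboard. As always in neural networks, quite a bit of your success comes from tuning your hyperparameters and optimizing your model really well, and it's been repeatedly proven in neural network land that you can often get much better scores than you would think from very simple models if you optimize them really well. There have been multiple cycles in deep learning research where there was a paper that did something, then the next person says, "Here's a more complex model that works better," then someone else publishes a paper saying, "Here's an even more complex model that works better than that," and then someone points out, "No — if you go back to the first model and just really tune its hyperparameters well, you can beat both of those models." And that was effectively the case with the Stanford Attentive Reader.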
Back in February 2017, if you just trained this model really well, it could actually outperform most of the early SQuAD systems — in particular, it could outperform the version of BiDAF that was around in early 2017, and various of these other systems from other people. At that time it was pretty close to the best system anyone had built. As I've already pointed out, the numbers have gone up a lot since then, so I'm not claiming this system is still as good as the best systems you can build. But there you go — that's the simple system that already works pretty well. Of course, you want the system to work better, and Danqi did quite a bit of work on that. So here I'll mention a few things from the Stanford Attentive Reader++ — what kinds of things can you do to make the model better? Here's a picture of the improved system, and we'll go through some of the differences and what makes it better. First, there's something I didn't mention before that I should: all the parameters of this model are trained end to end, where your training objective is simply how accurately you predict the start position and how accurately you predict the end position. The attention gives you a probability distribution over start positions and over end positions, so you're just asking what probability estimate the model gives to the true start position and the true end position, and to the extent that those aren't one, you accumulate loss, summed in terms of log probabilities. Okay, so how is this model more complex than what I showed before? Essentially in two main ways. The first one: looking at the question, we still run the BiLSTM as before, but it's a little bit crude just to take the end states of the LSTMs and concatenate them together. It turns out you can do better by making use of all the states of the LSTM, and this is true for most tasks where you want some kind of sentence representation from a sequence model: you can generally gain by using all of the states rather than just the endpoints. This is an interesting general thing to know, and it's also another variant of how you can use attention — a lot of the last two years of neural NLP can be summed up as people finding a lot of clever ways to use attention, and that's been powering just about all the advances. So what we want to do is have attention over the positions in this LSTM. But we're processing the question first, so it seems like we've got nothing to calculate attention with respect to. What we do is just invent something: we invent a vector — it's sometimes called a sentinel, or some word like that — and in our PyTorch we say, here is a vector; we initialize it randomly, we calculate attention with respect to that vector, and we use those attention scores to work out where to pay attention in this BiLSTM; and then we just train that vector so it gets useful values.
And so we end up with a weighted sum of the time steps of that LSTM, which forms the question representation. Second change: the pictures only show a shallow BiLSTM, but it turns out you can do better with a deep BiLSTM — say, a three-layer BiLSTM rather than a single layer. Okay. The other changes are in the passage representations, and this part arguably gets a little more hacky, but there are things you can do that make the numbers go up, I guess. First, for the representation of words, rather than only using the GloVe representation, the input vectors are expanded: a named entity recognizer and a part-of-speech tagger are run, and since those produce small sets of values, their outputs are just one-hot encoded and concatenated onto the word vector — so the input represents whether the word is a location or a person name, and whether it's a noun or a verb. Word frequency also proves to be a bit useful, so you concatenate on a representation of the word frequency, just a float of the unigram probability. And then this part is kind of key to getting further gains: it turns out we can do a better job with some better modeling of the matching between the question and the passage. This feature seems very simple but turns out to give quite a lot of value: for each word in the passage, you're just asking, "Does this word appear in the question?", and if so, you set a one bit in the input. That's done in three different ways — exact match, uncased match, and lemma match, so that something like "drive" and "driving" will match — and it's just an indicator of where in the passage there's a word from the question. In theory, the system should be able to work that out anyway, but indicating it explicitly gives quite a bit of value. And then this last one is a softer version of that, using word embedding similarities to calculate a kind of similarity between question and passage words. That's a slightly complex equation that you can look up, but effectively you take the embeddings of the question and passage words, run each through a single-hidden-layer neural network, dot-product them, and put all of that through a softmax, and that gives you a soft word similarity score — and that helps as well. A sketch of the sentinel trick and the match feature follows below.
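Here's a minimal sketch of two of those pieces — the learned-sentinel attention that summarizes the question from all BiLSTM states, and the exact-match indicator feature. The shapes and names are illustrative assumptions, not the actual Attentive Reader++ code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionSummary(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        # A randomly initialized, trained "sentinel" vector to attend against.
        self.sentinel = nn.Parameter(torch.randn(hidden))

    def forward(self, Q):
        # Q: (batch, q_len, hidden) -- all BiLSTM states of the question
        scores = Q.matmul(self.sentinel)          # (batch, q_len)
        weights = F.softmax(scores, dim=1)
        return (weights.unsqueeze(2) * Q).sum(1)  # weighted sum of all states

def exact_match_feature(passage_tokens, question_tokens):
    # One extra input bit per passage word: does it appear in the question?
    q = set(question_tokens)
    return torch.tensor([[float(tok in q)] for tok in passage_tokens])

summary = QuestionSummary(hidden=256)
q_vec = summary(torch.randn(4, 20, 256))  # (4, 256) question vectors
print(exact_match_feature(["the", "Broncos", "won"], ["which", "team", "won"]))
```

Okay. So here's the overall picture this gives you. If you remember, the classical NLP logistic regression baseline was at around 51. A fairly simple model like the Stanford Attentive Reader gives you an enormous boost in performance over that — close to a 30 percent performance gain. And from there, people have kept pushing neural systems up. So in some sense this simple model gives you three quarters of the value over the traditional NLP system; the much more complex neural systems that come after it are huge in terms of error reduction, but they're more like giving you the remaining 12 percent after that. Why did these systems work such a ton better than traditional systems?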
We actually did some error analysis on this, and it turns out that most of the gain is because these systems can just do better semantic matching of word similarities, or of rephrasings that are semantically related but don't use the same words. To the extent that the question is "Where was Christopher Manning born?" and the sentence says "Christopher Manning was born in Australia", a traditional NLP system would get that right too. But to the extent that getting it right depends on looser semantic matches — understanding that "place of birth" has to match "was born", say — that's where the neural systems work much, much better. Okay. That's not the end of the story on question answering systems, and I want to say a little about more complex systems, to give you some idea of what goes on after that. But before I go further, are there any questions up to now, on the Stanford Attentive Reader? Yeah? "I have a question about attention in general. Every example we've seen has been just a linear mapping with a weight matrix. Has anybody tried to convert that to a deep neural network and see what happens?" So yes, they have — well, at least a shallow neural network. I'll actually show an example of that in just a minute, so maybe I'll save it till then. But yeah, absolutely, people have done that, and it can be a good thing to play with. Anything else? Okay. So, this is a picture of the BiDAF system — the one from AI2 and the University of Washington. The BiDAF system is very well known; it's another classic question answering system that lots of people have used and built off. Some of it isn't completely different from what we saw before: there are word embeddings just like we had before, and there's a biLSTM running just like before, for both the passage and the question. But there are some different things happening as well. One of them is that, rather than just having word embeddings, it also processes the questions and passages at the character level. That's something we're going to talk about later in the class — there's been a lot of work on character-level processing in recent neural NLP — but I don't want to talk about it now. The main technical innovation of the BiDAF model is the attention flow layer — it's in the name: bidirectional attention flow. Their model of attention flow has attention flowing in both directions between the query and the passage; that was their main innovation, and it was quite useful in their model. Beyond that, there's more stuff to this model: after the attention flow layer, there are again multiple layers of bidirectional LSTMs running, and on top of that, their output layer is more complex than the simple attention version I showed previously. So let's look at that in a bit more detail. For the attention flow layer: the motivation was that in the Stanford Attentive Reader, we used attention to map from the representation of the question onto the words of the passage — the question as a whole mapping onto the words of the passage.
Their idea was that, presumably, you could do better by mapping in both directions at the word level: you should be finding passage words you can map onto question words, and question words you can map onto passage words. If you do that in both directions, with attention flowing, and then run another round of sequence models on top of that, you'll be able to do much better matching between the two. The way they do that is this: at the bottom layers, they've run these two LSTMs, so they have an LSTM representation for each question position and each passage position. And at this point I have to put in a slight apology, because I just stole the equations, and so the letters that are used change — sorry. These are the individual question words, and these are the individual passage words. What they want to do is, for each passage word and each question word, work out a similarity score. The way they do that is to build a big concatenated vector: the LSTM representation of the passage word, the LSTM representation of the question word, and then a third thing, a Hadamard product — an element-wise product — of the question word and the context word. For a neural net purist, throwing in these kinds of Hadamard products is a little bit of a cheat, because you'd kind of hope a neural net could just learn that this relation between the passage and the question is useful to look at. But you can find a lot of models that put in these kinds of Hadamard products, because it's a very easy way of building in the knowledge that matching is a good idea — essentially, for each question-passage word pair, you're asking whether the vectors look similar in various dimensions, which you can assess very directly from that Hadamard product. So you take that big concatenated vector and dot-product it with a learned weight vector, and that gives you a similarity score between each position in the question and the context. And then you use those scores to define attentions that go in both directions. The context-to-question attention is completely straightforward: you put the similarity scores through a softmax, so for each position i in the passage you have a softmax giving you a probability distribution over question words, and then you come up with a new representation of the i-th position, which is the attention-weighted average of the question words. So you're mapping an attention-weighted view of the question onto each position in the passage. You then want to do something in the reverse direction, but the reverse direction is done subtly differently. You again start with the same similarity scores, but this time they want to work out which passage positions align most strongly with the question: for each passage position i, they take a max over question words, finding the most aligned question word.
Then they do a softmax over those max scores — one per passage position — and those weights are used to form a summary representation of the passage, by summing the passage states with these attention weights. So you build these things up, and this gives you a new representation at each position: you have the original representation of the passage word, the new representation you've built from this bidirectional attention flow, and Hadamard products of them; that is the output of the BiDAF layer, and it's what gets fed as the input into the next sequence of LSTM layers. Then that's the modeling layer — another two BiLSTM layers — and the way they do span selection is a bit more complex as well: they take the output of the modeling layer, put it through a dense feed-forward neural network layer, and softmax over that, which gets you a distribution over start positions; and then they run yet another LSTM to get a distribution over end positions. So that gives you some idea of a more complex model; a sketch of the attention-flow computation follows below.
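Here's a minimal PyTorch sketch of that attention flow, following the similarity function just described — a learned weight vector over the concatenation of the two states and their Hadamard product. Shapes are illustrative, and a real implementation would mask padding:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFlow(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        # w for S_ij = w^T [c_i; q_j; c_i * q_j]
        self.w = nn.Linear(3 * hidden, 1, bias=False)

    def forward(self, C, Q):
        # C: (batch, n, h) passage states; Q: (batch, m, h) question states
        n, m = C.size(1), Q.size(1)
        C_exp = C.unsqueeze(2).expand(-1, -1, m, -1)   # (b, n, m, h)
        Q_exp = Q.unsqueeze(1).expand(-1, n, -1, -1)   # (b, n, m, h)
        S = self.w(torch.cat([C_exp, Q_exp, C_exp * Q_exp], dim=3)).squeeze(3)

        # Context-to-question: per passage position, attend over question words.
        a = F.softmax(S, dim=2)                        # (b, n, m)
        c2q = torch.bmm(a, Q)                          # (b, n, h)

        # Question-to-context: max similarity over question words, softmax
        # over passage positions, blend the passage states, tile to all positions.
        b = F.softmax(S.max(dim=2).values, dim=1)      # (b, n)
        q2c = torch.bmm(b.unsqueeze(1), C).expand(-1, n, -1)  # (b, n, h)

        # Output per position: original state, c2q, and Hadamard combinations.
        return torch.cat([C, c2q, C * c2q, C * q2c], dim=2)   # (b, n, 4h)

flow = AttentionFlow(hidden=256)
G = flow(torch.randn(2, 100, 256), torch.randn(2, 20, 256))  # (2, 100, 1024)
```

In some sense the summary, if you go further forward from here, is that in the last couple of years most of the work has been producing progressively more complex architectures with lots of variants of attention, and effectively that has been giving good gains. I think I'll skip showing you that one, since time is running out. But let me just mention this FusionNet model, done by people at Microsoft, because it relates to the earlier attention question. People have definitely used different versions of attention. In some of the stuff we've shown, we tend to emphasize this bilinear attention, where you've got two vectors mediated by a matrix — I guess traditionally at Stanford NLP we've liked this version of attention, since it seems to very directly learn a similarity. But other people have used a little neural net — a shallow neural net — to work out attention scores, and there's no reason why you couldn't say, maybe it would be even better if I made that a deep neural net and added another layer. And, to be perfectly honest, some of the results produced by people including Google argue that the MLP version of attention is actually better, so there's something to explore in that direction. But the FusionNet people didn't head in that direction, because they said: look, we want to use tons and tons of attention, so we want an attention computation that's pretty efficient, and it's bad news if you have to evaluate a little dense neural net at every position every time you do attention. So this bilinear form is fairly appealing, but they then did some playing with it: rather than having a full W matrix, you can reduce the rank and complexity of your W matrix by dividing it into the product of two lower-rank matrices, a U and a V matrix. If you make these rectangular matrices kind of skinny, you have a lower-rank factorization, and that seems a good idea. And then they thought: well, maybe you really want your attention distribution to be symmetric.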
So in the middle here you can have the U and the V be, so to speak, the same, with just a diagonal matrix in between, and that might be a useful way to think of it. That all makes sense in linear algebra terms, but then they thought: oh, nonlinearity is really good in deep learning, so why don't we stick the left and right halves through a ReLU and maybe that will help. [LAUGHTER] Which doesn't make so much sense in linear algebra terms, but that's actually what they ended up using as their attention form. There are lots of things you can play with when doing your final project; a sketch of this attention form follows below.
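Here's a small sketch of that low-rank, symmetric, ReLU-wrapped bilinear attention score, as I understand the construction just described; the class name and sizes are my own illustrative choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SymmetricReluAttention(nn.Module):
    # S_ij = ReLU(U h_i)^T D ReLU(U h_j): U is a skinny (rank-k) projection
    # shared on both sides, D is diagonal, with a ReLU nonlinearity thrown in.
    def __init__(self, hidden, k):
        super().__init__()
        self.U = nn.Linear(hidden, k, bias=False)  # low-rank projection
        self.d = nn.Parameter(torch.ones(k))       # diagonal of D

    def forward(self, A, B):
        # A: (batch, n, hidden); B: (batch, m, hidden)
        left = F.relu(self.U(A))                   # (b, n, k)
        right = F.relu(self.U(B))                  # (b, m, k)
        return torch.bmm(left * self.d, right.transpose(1, 2))  # (b, n, m)

attn = SymmetricReluAttention(hidden=256, k=64)
S = attn(torch.randn(2, 100, 256), torch.randn(2, 20, 256))  # (2, 100, 20)
```

Their argument is that doing attention this way is much, much cheaper, so they can use a lot of attention. And so they build this very complex, tons-of-attention model, which I'm not going to try to explain all of now, but I will show you this picture. A point they make is that a lot of the different models people have explored in different years are doing different kinds of attention: you could do attention aligning with the original LSTM, you could run both sides through some stuff and then do attention, you could do self-attention inside your layer — there are a lot of different attentions that different models have explored. And essentially what they're saying is: let's do all of those, let's make it deep, and do it all five times, and the numbers will go up. And to some extent the answer is, yeah, they do — the model ends up scoring very well. Okay, the one last thing I wanted to mention, but not explain, is that in the last year there's been a further revolution in how well people can do these tasks. People have developed algorithms that produce contextual word representations. That means that rather than a traditional word vector, you have a representation for each word in a particular context — here's the word "frog" in this particular context — and the way people build those representations is using something like a language modeling task, like Abby talked about: using probabilities of words in context to learn a context-specific word representation. ELMo was the first well-known such model, and then people from Google came up with BERT, which worked even better. BERT is really, in some sense, a super complex attention architecture trained with a language-modeling-like objective. We're going to talk about these later, not now. But if you look at the current SQuAD 2.0 leaderboard — sorry, I put up the wrong slide; that was the bottom of the leaderboard; oops, slipped at the last minute — if you go back to my slide with the top of the leaderboard, you will notice that every single one of the top systems uses BERT. So that's something you may want to consider, though you may also want to consider how you could use it as a sub-module that you can add other stuff to, as many of these systems do. Okay. Done for today.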
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_1_Introduction_and_Word_Vectors.txt
Okay. Hello everyone. [LAUGHTER] Okay, we should get started. Um, there actually are still quite a few seats left. If you wanna be really bold, there are a couple of seats right in front of me in the front row. If you're less bold, there are a few over there. Um, but also in some of the rows there are quite a few middle seats. So if people wanted to be really civic-minded, some people could sort of squeeze towards the edges and make more accessible, um, some of the seats that still exist in the classroom. Okay. Um, so, um, it's really exciting and great to see so many people here. So a hearty welcome to CS224N, occasionally also known as Ling 284, which is Natural Language Processing with Deep Learning. Um, as just a sort of personal anecdote, it still sort of blows my mind that so many people turn up to this class these days. So, for about the first decade that I taught NLP here, you know, the number of people I got each year was approximately 45. [LAUGHTER] So it's an order of [LAUGHTER] magnitude smaller than it is now, but I guess it says quite a lot about what a revolutionary impact artificial intelligence in general and machine learning, deep learning, NLP are starting to have in modern society. Okay. So this is our plan for today. So, um, um, we're really gonna get straight down to business today. So there'll be a brief, very brief introduction to some of the sort of course logistics, a very brief discussion and talk about human language and word meaning, and then we wanna get right into talking about, um, the first thing that we're doing, which is coming up with word vectors and looking at the word2vec algorithm, and that will then sort of fill up the rest of the class. There are still two seats right in the front row for someone who wants to sit right in front of me, just letting you know. [LAUGHTER] Okay. Okay. So here are the course logistics in brief. So I'm Christopher Manning; the person who bravely became the head TA, Abigail See, is right there. And then we have quite a lot of wonderful TAs. Could the people who are wonderful TAs just sort of stand up for one moment? So, um, [LAUGHTER] we have some sense of our wonderful TAs. [LAUGHTER] Okay, great. Um, okay. So you know when the lecture is because you made it here, and so welcome also to SCPD people. This is also an SCPD class and you can watch it on video. But we'd love for Stanford students to turn up and show their beautiful faces in the classroom. Okay. So, um, the web page has all the info about the syllabus, et cetera, et cetera. Okay. So this class, what do we hope to teach? So, one thing that we wanna teach is, uh, you know, an understanding of effective modern methods for deep learning: starting off by reviewing some of the basics and then particularly talking about the kinds of techniques, including, um, recurrent networks and attention, that are widely used for natural language processing models. A second thing we wanna teach is a big-picture understanding of human languages and some of the difficulties in understanding and producing them. Of course, if you wanna know a lot about human languages, there's a whole linguistics department and you can do a lot of courses on that. Um, but so I wanna give at least some appreciation so you have some clue of what are the challenges and difficulties and varieties of human languages. And then this is also kind of a practical class. Like, we actually wanna teach you how you can build practical systems that work for some of the major parts of NLP.
So if you go and get a job at one of those tech firms and they say, "Hey, could you build us a named entity recognizer?" you can say, "Sure, I can do that." And so for a bunch of problems, obviously we can't do everything, we're gonna do word meaning, dependency parsing, machine translation, and you have an option to do question answering, and actually build systems for those. If you've been talking to friends who did the class in the last couple of years, um, here are the differences for this year, just to get things straight. Um, so we've updated some of the content of the course. So, uh, between me and guest lectures there's new content. Well, that looked bad. Wonder if that will keep happening; we'll find out. There's new content on various topics that are sort of developing areas. One of the problems with this course is that the really big area of deep learning at the moment is still just developing really, really quickly. So it sort of seems like one-year-old content is already kind of dated, and we're trying to update things. A big change that we're making this year is we're having five one-week assignments instead of three two-week assignments at the beginning of the course, and I'll say a bit more about that in a minute. Um, this year we're gonna use PyTorch instead of TensorFlow, and we'll talk about that more later too. Um, we're having the assignments due before class on either Tuesday or Thursday, so you're not distracted and can come to class. So starting off, um, yeah. So we're trying to give an easier, gentler ramp-up, but on the other hand a fast ramp-up. So we've got this first assignment which is sort of easy, uh, but it's available right now and is due next Tuesday. And the final thing is we're not having a midterm this year. Um, okay. So this is what we're doing. So there are five of these assignments that I just mentioned. Um, so six percent for the first one, 12 percent for each of the other ones, um, and, I already said that. We're gonna use Gradescope for grading. It'll really help out the TAs if you could use your SUNet ID as your Gradescope account ID. Um, so then for the second part of the course, people do a final project, and there are two choices for the final project. You can either do our default final project, which is a good option for many people, or you can do a custom final project, and I'll talk about that more in a minute. This is not working right. Um, and so then at the end we have a final poster presentation session at which your attendance is expected, and we're gonna be having that Wednesday in the evening. Probably not quite five hours, but it'll be within that window; we'll work out the details in a bit. Three percent for participation, see the website for details. Six late days, um, collaboration: like always in computer science classes, we want you to do your own work and not borrow stuff from other people's GitHubs, and so we really do emphasize that you should read and pay attention to the collaboration policies. Okay. So here's the high-level plan for the problem sets. So, homework one, available right now, is a hopefully easy on-ramp. That's an IPython notebook, just to help get everyone up to speed. Homework two is pure Python plus numpy, but that will start to kind of teach you more about the sort of underlying, how do we do deep learning. If you're not so good, or a bit rusty, or have never seen, um, Python or numpy, um, we're gonna have an extra section on Friday.
So Friday from 1:30 to 2:50, um, in Skilling Auditorium, we'll have a section that's a Python review. That's our only planned section at the moment, we're not gonna have a regular section. Um, so you're encouraged to go to that, and it will also be recorded for SCPD and available as video as well. Um, then homework three, um, will start us on using PyTorch. And then in homeworks four and five we're then gonna be using PyTorch on GPU, and we're actually gonna be using Microsoft Azure, with big thank-yous to the kind Microsoft Azure people who have sponsored our GPU computing for the last, um, three years. Um, yes. So basically, I mean, all of modern deep learning has moved to the use of one or other of the large deep learning libraries like PyTorch, TensorFlow, Chainer or MXNet, um, et cetera, and then doing the computing on GPU. So of course, since we're in the NVIDIA Auditorium, we should of course be using, um, GPUs [LAUGHTER], but I mean in general the parallelism and scalability of GPUs is what's powered most of modern deep learning. Okay. The final project. So for the final project there are two things that you can do. So we have a default final project, which is essentially our final project in a box. And so this is building a question answering system, and we do it over the SQuAD dataset. So what you build and how you can improve your performance is completely up to you. It is open-ended, but it has an easier start, a clearly defined objective, and we can have a leaderboard for how well things are working. Um, so if you don't have a clear research objective, that can be a good choice for you, or you can propose a custom final project, and assuming it's sensible, we will approve your custom final project, we will give you feedback, um, from someone as a mentor, um, and either way, for only the final project, we allow teams of one, two or three. For the homeworks, you're expected to do them yourself. Of course, you can chat to people in a general way about the problems. Okay. So that is the course. All good, and not even behind schedule yet. Okay. So the next section is human language and word meaning. Um, you know, if I was, um, really going to tell you a lot about human language, that would take a lot of time, um, which I don't really have here. So I'm just going to tell you, um, two anecdotes about human language. And the first is this XKCD cartoon. Um, and I mean this isn't- and I don't know why that's happening. I'm not sure what to make of that. Um, so, I actually really like this XKCD cartoon. It's not one of the classic ones that you see most often around the place, but I actually think it says a lot about language and is worth thinking about. Like, I think a lot of the time for the kind of people who come to this class, who are mainly people like CS people, and EE people, and random others. There are some other people too, I know, since there are linguists and so on around. But for a lot of those people, like, you've sort of spent your life looking at formal languages, and the impression is that human language is sort of somehow a little bit broken formal language, but there's really a lot more to it than that, right? Language is this amazing, um, human-created system that is used for all sorts of purposes and is adaptable to all sorts of purposes. So you can do everything from describing mathematics in human language, um, to sort of nuzzling up to your best friend and getting them to understand you better. So it's actually an amazing thing, human language. Anyway, I'll just read it.
Um, so the first person, the dark-haired person, says, "Anyway, I could care less." And her friend says, "I think you mean you couldn't care less," saying you could care less implies you care at least some amount. And the dark-haired person says, "I don't know, we're these unbelievably complicated brains drifting through a void, trying in vain to connect with one another by blindly flinging words out into the darkness. Every choice of phrasing and spelling, and tone, and timing carries countless signals and contexts and subtexts and more. And every listener interprets those signals in their own way. Language isn't a formal system, language is glorious chaos. You can never know for sure what any words will mean to anyone. All you can do is try to get better at guessing how your words affect people, so you can have a chance of finding the ones that will make them feel something like what you want them to feel. Everything else is pointless. I assume you're giving me tips on how you interpret words because you want me to feel less alone. If so, thank you. That means a lot. But if you're just running my sentences past some mental checklist so you can show off how well you know it, then I could care less." [NOISE] Um, and so I think, um, I think actually this has some nice messages about how language is this uncertain, evolved system of communication, but somehow we have enough agreed meaning that, you know, we can kind of pretty much communicate. But we're doing some kind of, you know, probabilistic inference of guessing what people mean, and we're using language not just for its information functions but for social functions, et cetera, et cetera. Okay. And then here's my one other thought about language. So, essentially, if we want to have artificial intelligence that's intelligent, we need to somehow get to the point of having computers that have the knowledge of human beings, right? Because human beings have knowledge that gives them intelligence. And if you think about how we sort of convey knowledge around the place in our human world, mainly the way we do it is through human language. You know, some kinds of knowledge you can sort of work out for yourself by doing physical stuff, right: I can hold this and drop that and I've learnt something. So I have learnt a bit of knowledge there. But sort of most of the knowledge in your heads, and why you're sitting in this classroom, has come from people communicating in human language to you. Um, so one of the most famous deep learning people, Yann LeCun, he likes to say this line about, oh, you know, really I think that, you know, there's not much difference between the intelligence of a human being and an orangutan. And I actually think he's really wrong on that. Like, the sense in which he means that is, an orangutan has a really good vision system. Orangutans have very good, you know, control of their arms, just like human beings, for picking things up. Orangutans, um, can use tools, um, and orangutans can make plans, so that if you sort of put the food somewhere where they have to sort of move the plank to get to the island with the food, they can do a plan like that. So yeah, in a sense they've got a fair bit of intelligence, but you know, sort of, orangutans just aren't like human beings. And why aren't they like human beings? And I'd like to suggest to you the reason for that is that what human beings have achieved is, we don't just have sort of one computer, like, you know, a dusty old IBM PC in your mother's garage.
What we have is a human computer network. And the way that we've achieved that human computer network is that we use human languages as our networking language. Um, and so, when you think about it, um, on any kind of evolutionary scale, language is super, super, super, super recent, right? Creatures have had vision for, people don't quite know, but you know, maybe it's 75 million years or maybe it's longer, right? A huge length of time. How long have human beings had language? You know, people don't know that either, because it turns out, you know, when you have fossils, you can't knock the skull on the side and say, did you or did you not have language. Um, but you know, most people estimate that language is a very recent invention, from before current human beings moved out of, um, out of Africa. So many people think that we've only had language for something like 100,000 years or something like that. So that's sort of, you know, a blink of an eye on the evolutionary timescale. But you know, it was the development of language that sort of made human beings [NOISE] invincible, right? It wasn't that human beings, um, developed poison fangs or developed the ability to run faster than any other creature or put a big horn on their heads or something like that, right? You know, humans are basically pretty puny, um, but they had this, um, unbeatable advantage that they could communicate with each other and therefore work much more effectively in teams. And that sort of basically made human beings invincible. But you know, even then, humans were kind of limited, right? That kind of got you to about the Stone Age, right, where you could bang on your stones and, with the right kind of stone, make something sharp to cut with. Um, what got humans beyond that was that they invented writing. So writing was then an ability where you could take knowledge not only communicated, um, mouth to mouth to people that you saw; you could put it down on your piece of papyrus or your clay tablet or whatever it was at first, and that knowledge could then be sent places. It could be sent spatially around the world, and it could then be sent temporally through time. And well, how old is writing? I mean, we sort of basically know about how old writing is, right? Writing is about 5,000 years old. It's incredibly, incredibly recent on this scale of evolution, but you know, essentially writing was so powerful as a way of passing on knowledge that then, in those 5,000 years, it enabled human beings to go from a Stone Age sharp piece of flint to, you know, having iPhones and all of these things, all these incredibly sophisticated devices. So, language is a pretty special thing, I'd like to suggest. Um, but you know, if I go back to my analogy, it's allowed humans to construct a networked computer that is way, way more powerful than, um, just having individual creatures that are sort of intelligent like an orangutan. Um, and if you compare it to our computer networks, it's a really funny kind of network, right? You know, these days, um, we have networks that run around where we have sort of large network bandwidth, right? You know, we might be frustrated sometimes with our Netflix downloads, but by and large, you know, we can download hundreds of megabytes really easily and quickly. And we don't think that's fast enough, so we're going to be rolling out 5G networks, so it's an order of magnitude faster again. I mean, by comparison to that, I mean, human language is a pathetically slow network, right?
The amount of information you can convey by human language is very slow. I mean, you know, whatever it is, I sort of speak at about 15 words a second, right? You can start doing, um, your information theory if you know some, right? But, um, you don't actually get much bandwidth at all. And that then leads- so you can think of, how does it work then? So, humans have come up with this incredibly impressive system, which is essentially a form of compression. A very adaptive form of compression, so that when we're talking to people, we assume that they have an enormous amount of knowledge in their heads, which isn't the same as, but is broadly similar to, mine when I'm talking to you, right? That you know what English words mean, and you know a lot about how the world works. And therefore, I can say a short message and communicate only a relatively short bit string, and you can actually understand a lot. All right? So, I can say, sort of, whatever, you know, "imagine a busy shopping mall and that there are two guys standing in front of a makeup counter," and, you know, I've only said, whatever that was, sort of about 200 bits of information, but that's enabled you to construct a whole visual scene that would take megabytes to, um, represent as an image. So, that's why language is good. Um, so from that more general level, I'll now move back to the concrete stuff. What we wanna do in this class is not solve the whole of language, but we want to represent, um, the meaning of words, right? So, a lot of language is bound up in words and their meanings, and words can have really rich meanings, right? As soon as you say the word teacher, that's got quite a lot of rich meaning, or you can have actions that have rich meaning. So, if I say a word like prognosticate, or, um, total or something, you know, these are words that have rich meanings and a lot of nuance on them. And so we wanna represent meaning. And so, the question is, what is meaning? So, of course, dictionaries are meant to tell you about meanings. So, you can look up dictionaries, um, and Webster sort of tries to relate meaning to idea: the idea that is represented by a word or a phrase; the idea that a person wants to express by words, signs, et cetera. I mean, you know, you could think that these definitions are kind of a cop-out, because it seems like they're rewriting meaning in terms of the word idea, and has that really gotten you anywhere? Um, how do linguists think about meaning? I mean, the most common way that linguists have thought about meaning is an idea that's called denotational semantics, which is also used in programming languages. So, the idea of that is, we think of meaning as what things represent. So, if I say the word chair, the denotation of the word chair includes this one here and that one, that one, that one, that one. And so, the word chair is sort of representing all the things that are chairs, and you can then think of something like running as well; you know, there's a sort of set of actions that people can partake in, and that's its denotation. And that's sort of what you most commonly see in philosophy or linguistics as denotation. It's kind of a hard thing to get your hands on, um, computationally. So, um, what people most commonly did, or I guess I should say do now, for working out the meaning of words on the computer is that commonly they turn to something that is a bit like a dictionary.
In particular, the favorite online thing was this online thesaurus called WordNet, which sort of tells you about word meanings and relationships between word meanings. Um, so this is just giving you the very slightest sense of, um, of what's in WordNet. Um, so this is an actual bit of Python code up there which you can, um, type into your computer and run and do this for yourself. Um, so this uses a thing called NLTK. Um, so NLTK is sort of like the "Swiss Army knife of NLP," meaning that it's not terribly good for anything, but it has a lot of basic tools. So, if you wanted to do something like just get some stuff out of WordNet and show it, it's the perfect thing to use. Um, okay. So, um, from NLTK I'm importing WordNet, and so then I can say, "Okay, um, for the word good, tell me about the synonym sets which good participates in." And there's good, goodness as a noun. There is an adjective good. There's one: estimable, good, honorable, respectable. Um, this looks really complex and hard to understand. But the idea is that WordNet makes these very fine-grained distinctions between senses of a word. So, what it's sort of saying for good, um, there are some senses where it's a noun, right? That's where you sort of, "I bought some goods for my trip," right? So, that's sort of, um, one of these noun senses, like this one, I guess. Um, then there are adjective senses, and it's trying to distinguish- there's a basic adjective sense of good being good, and then in certain, um, senses, there are these extended senses of good in different directions. So, I guess this is good in the sense of beneficial, um, and this one is sort of a person who is respectable or something; he's a good man or something like that, right? So, um, but you know, part of what kind of makes this very problematic in practice to use is that it tries to make all these very fine-grained differences between senses that a human being can barely understand the difference between, um, and relate to. Um, so you can then do other things with WordNet. So, with this bit of code you can sort of walk up a kind of hierarchy. So, it's kind of like a traditional, um, database. So, if I start with a panda and say- [NOISE] if I start with a panda, um, and walk up, um, the pandas are [inaudible], maybe you guys do bio, which are carnivores, placentals, mammals, blah, blah, blah. Okay, so, um, that's the kind of stuff you can get out of WordNet. Um, you know, in practice, WordNet has been- everyone sort of used to use it because it gave you some sort of sense of the meaning of a word. But you know, it's also sort of well-known that it never worked that well. Um, so you know, the synonym sets miss a lot of nuance. So, you know, one of the synonym sets for good has proficient in it, and good is sort of like proficient, but doesn't proficient have some more connotations and nuance? I think it does. Um, WordNet, like most hand-built resources, is sort of very incomplete. So, as soon as you're coming to new meanings of words, or new words and slang words, well then, that gives you nothing. Um, it's sort of built with human labor, um, in ways that, you know, make it hard to create and adapt. And in particular, what we want to focus on, which seems like a basic thing you'd like to do with words, is to at least understand similarities and relations between the meanings of words. And it turns out that, you know, WordNet doesn't actually do that that well, because it just has these sort of fixed discrete synonym sets.
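For reference, a sketch along the lines of the code being shown on the slide, assuming NLTK and its WordNet data are installed:

```python
import nltk
nltk.download('wordnet')  # fetch the WordNet data the first time
from nltk.corpus import wordnet as wn

# The synonym sets that the word "good" participates in, with parts of speech
poses = {'n': 'noun', 'v': 'verb', 's': 'adj (s)', 'a': 'adj', 'r': 'adv'}
for synset in wn.synsets("good"):
    print("{}: {}".format(poses[synset.pos()],
                          ", ".join(l.name() for l in synset.lemmas())))

# Walking up the hypernym hierarchy from "panda"
panda = wn.synset("panda.n.01")
hyper = lambda s: s.hypernyms()
print(list(panda.closure(hyper)))  # procyonid, carnivore, placental, mammal, ...
```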
So, if you have words where there's sort of a synonym, but maybe not exactly the same meaning, so they're not in the same synonym set, you kind of can't really measure the partial resemblance of meaning for them. So, if something like good and marvelous aren't in the same synonym set, there's still something that they share in common that you'd like to represent. Okay. So, um, that kind of leads into us wanting to do something different and better for word meaning. And, um, before getting there, I just sort of wanna again sort of build a little from traditional NLP. So, traditional NLP, in the context of this course, sort of means natural language processing up until approximately 2012. There were some earlier antecedents, but it was basically, um, in 2013 that things really began to change, with people starting to use neural-net-style representations for natural language processing. So, up until 2012, um, standardly, you know, we had words. They were just words. So, we had hotel, conference, motel. They were words, and we'd have, you know, lexicons, and put words into our model. Um, and in neural networks land, this is referred to as a localist representation. I'll come back to these terms again next time. But that's sort of meaning that for any concept there's sort of one particular, um, place, which is the word hotel or the word motel. A way of thinking about that is to think about what happens when you build a machine learning model. So, if you have a categorical variable, like you have words with the choice of word, and you want to stick that into some kind of classifier in a machine learning model, somehow you have to code that categorical variable, and the standard way of doing it is that you code it by having different levels of the variable, which means that you have a vector, and you have: this is the word house, this is the word cat, this is the word dog, this is the word chairs, this is the word agreeable, this is the word something else, this is the word, um, hotel, um, and this is another word for something different, right? So you have put a one at the position, and in neural net land we call these one-hot vectors, and so these might be, ah, one-hot vectors for hotel and motel. So, there are a couple of things that are bad here. Um, the one that's sort of, ah, a practical nuisance is, you know, languages have a lot of words. Ah, so, one of those dictionaries that you might have still had in school probably has about 250,000 words in it. But you know, if you start getting into more technical and scientific English, it's easy to get to a million words. I mean, actually, the number of words that you have in a language, um, like English is actually infinite, because we have these processes, which are called derivational morphology, um, where you can make more words by adding endings onto existing words. So, you know, you can start with something like paternal, fatherly, and then you can sort of say, from paternal, you can say paternalist, or paternalistic, paternalism, and I did it paternalistically. Right? There are all of these ways that you can make bigger words by adding more stuff onto them. Um, and so really you end up with an infinite space of words. Um, yeah. So that's a minor problem, right? We have very big vectors if we want to represent a sensible-size vocabulary. Um, but there's a much bigger problem than that, which is, well, precisely what we want to do all the time is we want to, sort of, understand relationships and the meaning of words.
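A tiny numpy illustration of the localist, one-hot encoding just described, using an assumed two-word vocabulary rather than a realistic 500,000-word one:

```python
import numpy as np

# One-hot (localist) vectors: one dimension per vocabulary word.
# In a real system these would be length ~500,000, not length 2.
motel = np.array([0.0, 1.0])
hotel = np.array([1.0, 0.0])

# Any two distinct one-hot vectors are orthogonal: their dot product is zero,
# so the encoding itself carries no similarity signal at all.
print(np.dot(motel, hotel))  # 0.0
```

That zero dot product is exactly the orthogonality problem discussed next.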
So, you know, an obvious example of this is web search. So, if I do a search for Seattle motel, it'd be useful if it also showed me results that had Seattle hotel on the page, and vice versa, because, you know, hotels and motels are pretty much the same thing. Um, but, you know, if we have these one-hot vectors like we had before, they have no similarity relationship between them, right? So, in math terms, these two vectors are orthogonal. No similarity relationship between them. Um, and so you, kind of, get nowhere. Now, you know, there are things that you could do. I just showed you WordNet. WordNet shows you some synonyms and stuff, so that might help a bit. There are other things you could do. You could sort of say, well wait, why don't we just build up a big table where we have a big table of, um, word similarities, and we could work with that. And, you know, people used to try and do that, right? You know, that's sort of what Google did in 2005 or something. You know, it had word similarity tables. The problem with doing that is, you know, we were talking about how maybe we want 500,000 words. And if you want to build up then a word similarity table out of pairs of words from one-hot representations, um, that means that the size of that table, as my math is pretty bad, is it 2.5 trillion? It's some very big number of cells in your similarity, um, matrix. So that's almost impossible to do. So, what we're gonna instead do is explore a method in which, um, we are going to represent words as vectors, in a way I'll show you in just, um, a minute, in such a way that just the representation of a word gives you their similarity with no further work. Okay. And so that's gonna lead into these different ideas. So, I mentioned before denotational semantics. Here's another idea for representing the meaning of words, um, which is called distributional semantics. And so the idea of distributional semantics is, well, how we are going to represent the meaning of a word is by looking at the contexts, um, in which it appears. So, this is a picture of J.R. Firth, who was a British linguist. Um, he's famous for this saying, "You shall know a word by the company it keeps." Um, but another person who's very famous for developing this notion of meaning is, um, the philosopher Ludwig Wittgenstein in his later writings, which he referred to as a use theory of meaning. Well, actually he used some big German word that I don't know, but, um, we'll call it a use theory of meaning. And, you know, essentially the point was, well, you know, if you can explain what contexts it's correct to use a certain word in, versus in what contexts it would be the wrong word to use, and this maybe gives you bad memories of doing English in high school, when people said, ah, that's the wrong word to use there, um, well, then you understand the meaning of the word, right? Um, and so that's the idea of distributional semantics. And it's been one of the most successful ideas in modern statistical NLP, because it gives you a great way to learn about word meaning. And so what we're gonna do is we're going to say, aha, I want to know what the word banking means. So, I'm gonna grab a lot of text, which is easy to do now when we have the World Wide Web, and I'll find lots of sentences where the word banking is used: "Government debt problems turning into banking crises as happened in 2009," and so on. And I'm just going to say all of this stuff is the meaning of the word banking.
Um, those are the contexts in which the word banking is used. And that seems like a very simple, and perhaps even not quite right, idea, but it turns out to be a very usable idea that does a great job at capturing meaning. And so what we're gonna do is say, rather than our old localist representation, we're now gonna represent words in what we call a distributed representation. And so, for the distributed representation, we're still going to [NOISE] represent the meaning of a word as a numeric vector. But now we're going to say that the meaning of each word is a smallish vector, um, but it's going to be a dense vector, whereby all of the numbers are non-zero. So the meaning of banking is going to be distributed over the dimensions of this vector. Um, now, my vector here is of dimension nine because I want to keep the slide, um, nice. Um, life isn't quite that good in practice. When we do this, we use a larger dimensionality: kind of the minimum that people use is 50; um, a typical number that you might use on your laptop is 300; if you want to really max out performance, um, maybe 1,000, 2,000, 4,000. But, you know, nevertheless, [NOISE] that's orders of magnitude smaller compared to a length-500,000 vector. Okay. So we have words with their vector representations. And so, since each word is going to have a vector, um, representation, we then have a vector space in which we can place all of the words. Um, and that's completely unreadable, um, but if you zoom into the vector space, it's still completely unreadable. But if you zoom in a bit further, um, you can find different parts of this space. So here's the part where countries tend to, um, exist: Japanese, German, French, Russian, British, Australian, American, um, France, Britain, Germany, et cetera. And you can shift over to a different part of the space. So here's a part of the space where various verbs are, so has, have, had, been, be. Oops. Um, um, [inaudible] be, always, was, were. You can even see that some morphological forms are grouping together, and things that sort of go together, like say, think, expect, things that take those kinds of complements: he said or thought something. Um, they group together. Now, what am I actually showing you here? Um, you know, really this was built from, ah, 100-dimensional word vectors. And there is this problem that it's really hard to visualize 100-dimensional word vectors. So, what is actually happening here is these, um, 100-dimensional word vectors are being projected down into two dimensions, and you're seeing the two-dimensional view, which I'll get back to later. Um, so, on the one hand, um, whenever you see these pictures, you should hold on to your wallet, because there's a huge amount of detail in the original vector space that got completely killed and went away, um, in the 2D projection, and indeed some of what pushes things together in the 2D, um, projection may really, really, really misrepresent what's in the original space. Um, but even looking at these 2D representations, the overall feeling is, my gosh, this actually sort of works, doesn't it? Um, we can sort of see similarities, um, between words. Okay. So, um, so that was the idea of what we want to do. Um, the next part, um, is then how do we actually go about doing it? I'll pause for breath for half a minute. Has anyone got a question they're dying to ask? [NOISE] Yeah.
For the vectors, does each, um, dimension have a particular role in each context, like, say the first dimension of the vector, the second dimension of the vector, are those standard across the field or do people choose them themselves? Um, they're not standard across NLP, um, and they're not chosen at all. So what we're gonna present is a learning algorithm, where we just sort of shuffle in lots of text and miraculously these word vectors come out. And so the learning algorithm itself decides the dimensions. But, um, that actually reminds me of something I sort of meant to say, which was, yeah, I mean, since this is a vector space, in some sense the dimensions are arbitrary, right? Because you can, you know, just have your basis vectors in any different direction, and you could sort of re-represent, um, the words in the vector space with a different set of basis vectors, and it'd be exactly the same vector space, just sort of rotated around to your new, um, basis vectors. So, you know, you shouldn't read too much into the individual elements. So, it actually turns out that because of the way a lot of deep learning, um, operations work, some things they do, they do element-wise, so the dimensions do actually tend to get some meaning to them, it turns out. But, um, what I think I really wanted to say was, you know, one thing we can just think of is how close things are in the vector space, and that's a notion of meaning similarity that we are going to exploit. But you might hope that you get more than that, and you might actually think that there's meaning in different dimensions and directions in the word vector space. And the answer to that is there is, and I'll come back to that a bit later. Okay. Um, so in some sense, the thing that had the biggest impact, um, in sort of turning the world of NLP in a neural networks direction was that picture. Um, it was this, um, algorithm that, um, Tomas Mikolov came up with in 2013 called the word2vec algorithm. So it wasn't the first work on having distributed representations of words. So there was older work from Yoshua Bengio that went back to about the sort of turn of the millennium that somehow hadn't really hit the world over the head and had a huge impact, and it was really Tomas Mikolov who showed this very simple, very scalable way of learning vector representations of, um, words, and that sort of really opened the floodgates. And so that's the algorithm that I'm going to, um, show now. Okay. So the idea of this algorithm is you start with a big pile of text. Um, so wherever you find, you know, web pages or newspaper articles or something, a lot of continuous text, right? Actual sentences, because we want to learn word meaning in context. Um, NLP people call a large pile of text a corpus. And I mean, that's just the Latin word for body, right? It's a body of text. An important thing to note, if you want to seem really educated, is that in Latin this is a fourth declension noun, so the plural of corpus is corpora, whereas if you say corpi, everyone will know that you didn't study Latin in high school. [LAUGHTER] Um, okay. Um, so right- so we then want to say that every word, um, in a fixed vocabulary, which would just be the vocabulary of the corpus, is, um, represented by a vector. And we just start those vectors off as random vectors. And so then what we're going to do is do this big iterative algorithm where we go through each position in the text. We say, here's a word in the text.
Let's look at the words around it, and what we're going to want to do is say, well, the meaning of a word is its contexts of use. So we want the representation of the word in the middle to be able to predict the words that are around it, and so we're gonna achieve that by moving the position of the word vector. And we just repeat that a billion times, and somehow a miracle occurs, and out comes at the end a word vector space that looks like the picture I showed, where it has a good representation of word meaning. So, slightly more, um, um, slightly more, um, graphically, right? So here's the situation. So we've got part of our corpus: problems turning into banking crises. And so what we want to say is, well, we want to know the meaning of the word into, and so we're going to hope that its representation can be used, in a way that we'll make precise, to predict what words appear in the context of into, because that's the meaning of into. And so we're going to try and make those predictions, see how well we can predict, and then change the vector representations of words in a way that lets us do that prediction better. And then once we've dealt with into, we just go on to the next word and we say, okay, let's take banking as the word. The meaning of banking is predicting the contexts in which banking occurs. Here's one context. Let's try and predict these words that occur around banking and see how we do, and then we'll move on again from there. Okay. Um, sounds easy so far. Um, [NOISE] now we go on and sort of do a bit more stuff. Okay. So overall, we have a big long corpus of capital T words. So if we have a whole lot of documents, we just concatenate them all together and we say, okay, here's a billion words: a big long list of words. And so what we're gonna do is, for the first product, we're going to sort of go through all the words, and then for the second product, we're gonna say: we're gonna choose some fixed-size window, you know, it might be five words on each side or something, and we're going to try and predict the 10 words that are around that center word. And we're going to predict in the sense of trying to predict each word given the center word. That's our probability model. And so if we multiply all those things together, that's our model likelihood: how good a job it does at predicting the words around every word. And that model likelihood is going to depend on the parameters of our model, which we write as theta. And in this particular model, the only parameters in it are actually going to be the vector representations we give the words. The model has absolutely no other parameters to it. So, we're just going to say we're representing a word with a vector in a vector space, and that representation of it is its meaning, and we're then going to be able to use that to predict what other words occur, in a way I'm about to show you. Okay. So, um, that's our likelihood, and so what we do in all of these models is we sort of define an objective function, and then we're going to want to come up with vector representations of words in such a way as to minimize our objective function. Um, so the objective function is basically the same as what's on the top half of the slide, but we change a couple of things. We stick a minus sign in front of it so we can do minimization rather than maximization; completely arbitrary, makes no difference.
Um, we stick a one on T in front of it, so that we're working out the sort of average of the goodness of predicting for each choice of center word. Again, that sort of makes no difference, but it kind of keeps the scale of things, ah, not dependent on the size of the corpus. Um, the bit that's actually important is we stick a log in front of the function that was up there, um, because it turns out that everything always gets nicer when you stick logs in front of products, um, when you're doing things like optimization. So, when we do that, we then get a log of all these products, which will allow us to turn things, you know, into sums of the log of this probability, and we'll go through that again, um, in just a minute. Okay. Um, and so if we can change our vector representations of these words so as to minimize this J of theta, that means we'll be good at predicting words in the context of another word. So then, that all sounded good, but it was all dependent on having this probability function where you wanna predict the probability of a word in the context given the center word, and the question is, how can you possibly do that? Um, well, um, remember what I said: actually, our model is just gonna have vector representations of words, and those are the only parameters of the model. Now, that's, that's almost true. It's not quite true. Um, we actually cheat slightly, since we actually propose two vector representations for each word, and this makes it simpler to do this. Um, you don't have to do this, there are ways to get around it, but this is the simplest way to do it. So we have one vector for a word when it's the center word that's predicting other words, but we have a second vector for each word when it's a context word, so that's one of the words in context. So for each word type, we have these two vectors: as center word, as context word. Um, so then we're gonna work out this probability of a word in the context given the center word purely in terms of these vectors, and the way we do it is with this equation right here, which I'll explain more in just a moment. So we're still in exactly the same situation, right? We're wanting to work out probabilities of words occurring in the context of our center word. So the center word is c, and the context word is represented with o, and this is sort of slide notation, but sort of, we're basically saying there's one kind of vector for center words and a different kind of vector for context words, and we're gonna work out this probabilistic prediction, um, in terms of these word vectors. Okay. So how can we do that? Well, the way we do it is with this, um, formula here, which is the sort of shape that you see over and over again, um, in deep learning with categorical stuff. So for the very center bit of it, the bit in orange, and the same thing occurs in the, um, denominator: what we're doing there is calculating a dot product. So, we're gonna go through the components of our vectors and we're gonna multiply them together, and that means if, um, different words have big components of the same sign, plus or minus, in the same positions, the dot product will be big, and if they have different signs, or one is big and one is small, the dot product will be a lot smaller. So that orange part directly calculates, uh, sort of a similarity between words, where the similarity is the sort of vectors looking the same, right? Um, and so that's the heart of it, right?
So we're gonna have words that have similar vectors, i.e., close together in the vector space, have similar meaning. Um, so for the rest of it- um, so the next thing we do is take that number and put an exp around it. So, um, the exponential has this nice property that no matter what number you stick into it, because the dot product might be positive or negative, it's gonna come out as a positive number, and if we eventually wanna get a probability, um, that's really good: we have positive numbers and not negative numbers, um, so that's good. Um, then the third part, which is the bit in blue, is we wanted to have probabilities, and probabilities are meant to add up to one, and so we do that in the standard, dumbest possible way: we sum up what this quantity is for every different word in our vocabulary, and we divide through by it, and so that normalizes things and turns them into a probability distribution. Yeah, so there are sort of, in practice, two parts. There's the orange part, which is this idea of using the dot product in a vector space as our similarity measure between words, and then the second part is all the rest of it, where we feed it through what we'll refer to, and use all the time, as a softmax distribution. So the two parts, the exp and the normalizing, give you a softmax distribution. Um, and softmax functions will sort of map any numbers into a probability distribution, always, for the two reasons that I gave, and it's referred to as a softmax, um, because it works like a soft max, right? So if you have numbers, you could just say, what's the max of these numbers? And, you know, if you sort of map your original numbers into: the max goes to the max and everything else goes to zero, that's sort of a hard max. Um, this is a softmax because, you know, if we just ignore the problem of negative numbers for a moment and you got rid of the exp, um, then you'd sort of come out with a probability distribution, but by and large it'd be fairly flat and wouldn't particularly pick out the max of the different x_i numbers, whereas when you exponentiate them, that sort of makes big numbers way bigger, and so this, this softmax sort of mainly puts mass where the max or the couple of maxes are. Um, so that's the max part, and the soft part is that this isn't a hard decision; it still spreads a little bit of probability mass everywhere else. Okay, so now we have, uh, a loss function. We have a loss function with a probability model on the inside that we can build, and so what we want to be able to do is then, um, move our vector representations of words around so that they are good at predicting what words occur in the context of other words. Um, and so, at this point, what we're gonna do is optimization. So, we have vector components of different words. We have a very high-dimensional space again, but here I've just got two for the picture, and we're gonna wanna say, how can we minimize this function? And we're going to want to jiggle the numbers that are used in the word representations in such a way that we're walking down the slope of this space, i.e., walking down the gradient, and, um, once we've minimized the function, we've found good representations for words. So doing this for this case, we want to make a very big vector in a very high-dimensional vector space of all the parameters of our model, and the only parameters that this model has is literally the vector space representations of words.
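Written out, the formulas just described are, in the lecture's notation:

```latex
% Likelihood: at each position t, predict the words within a window of size m
L(\theta) = \prod_{t=1}^{T} \prod_{\substack{-m \le j \le m \\ j \neq 0}} P(w_{t+j} \mid w_t ; \theta)

% Objective: average negative log-likelihood (minimizing J maximizes L)
J(\theta) = -\frac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-m \le j \le m \\ j \neq 0}} \log P(w_{t+j} \mid w_t ; \theta)

% Softmax probability of an outside (context) word o given the center word c,
% with u_o the context vector, v_c the center vector, V the vocabulary size
P(o \mid c) = \frac{\exp(u_o^{\top} v_c)}{\sum_{w=1}^{V} \exp(u_w^{\top} v_c)}
```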
So if there are 100-dimensional word representations, there are sort of 100 parameters for aardvark as context, 100 parameters for the word a as context, et cetera, going through; 100 parameters for the word aardvark [NOISE] as a center word, et cetera, et cetera. That gives us a big vector of parameters to optimize, and we're gonna run this optimization and then, um, move them down. Um, [NOISE] yeah, so that's essentially what you do. Um, I sort of wanted to go through, um, the details of this, um, just so we've kind of gone through things concretely, to make sure everyone is on the same page. Um, so I suspect that, you know, if I try and do this concretely, um, there are a lot of people, um, that this will bore, and some people that it will bore very badly, um, so I apologize to you, um, but you know, I'm hoping and thinking that there are probably some people who haven't done as much of this stuff recently, and it might just actually be good to do it concretely and get everyone up to speed right at the beginning. Yeah? [inaudible] how do we calculate [inaudible] specifically? Well, so, we- so the way we calculate the U and V vectors is we're literally going to start with a random vector for each word, and then we're iteratively going to change those vectors a little bit as we learn. And the way we're going to work out how to change them is we're gonna say, "I want to do optimization," and that is going to be implemented as: okay, we have the current vectors for each word. Let me do some calculus to work out how I could change the word vectors so that the word vectors would calculate a higher probability for the words that actually occur in the context of this center word. And we will do that, and we'll do it again and again and again, and then we'll eventually end up with good word vectors. Thank you for that question, 'cause that's a concept that you're meant to have understood: is that how this works, and maybe I didn't explain that high-level recipe well enough, yeah. Okay, so yeah, so let's just go through it. So, we've seen it, right? So, we had this formula that we wanted to maximize, you know, our original function, which was the product over t equals one to T, and then the product over the positions, minus m less than or equal to j, less than or equal to m, j not equal to zero, of, um, the probability of w t plus j given w t, according to the parameters of our model. Okay, and then we'd already seen that we were gonna convert that into the function that we're going to use, where we have J of theta, where we had the minus one on T, of the sum of t equals one to T, of the sum of minus m less than or equal to j, less than or equal to m, j not equal to zero, of the log of the probability of w t plus j given w t. Okay, so we had that, and then we'd had this formula that the probability of the outside word given the center word is this formula we just went through, of exp of u_o transpose v_c over the sum of w equals one to the vocabulary size of exp of u_w transpose v_c. Okay, so that's sort of our model. We want to minimize this, and we want to minimize this by changing these parameters. And these parameters are the contents of these vectors. And so, what we want to do now is do calculus, and we wanna say, let's work out, in terms of these parameters, which are the u and v vectors, um, for the current values of the parameters, which we initialized randomly, like, what's the slope of the space? Where is downhill?
Because if we can work out where downhill is, we just gotta walk downhill and our model gets better. So, we're gonna take derivatives and work out what direction downhill is, and then we wanna walk that way, yeah. So, why do we wanna maximize that probability of, like, going through every word, it's like [inaudible] given the [inaudible]? So, well, so, so, I'm wanting to achieve this: um, what I want to achieve for my distributional notion of meaning is, I have a meaning for a word, a vector. And that vector knows what words occur in the context of, um, a word- of itself. And knowing what words occur in its context means it can accurately give a high probability estimate to those words that occur in the context, and it will give low probability estimates to words that don't typically occur in the context. So, you know, if the word is bank, I'm hoping that words like branch, and open, and withdrawal will be given high probability, 'cause they tend to occur with the word bank. And I'm hoping that some other words, um, like neural network or something, have a lower probability because they don't tend to occur with the word bank. Okay, um, does that make sense? Yeah. Yeah. And the other thing I'd forgotten and meant to comment on was, you know, obviously, we're not gonna be able to do this super well; it's just not gonna be the case that we can say the word in the context is going to be this word with probability 0.97, right? Because we're using this one simple probability distribution to predict all words in our context. So, in particular, we're using it to predict 10 different words, generally, right? So, at best, we can kind of be giving sort of five percent chance to one of them, right? We can't possibly be guessing right every time. Um, and well, you know, there are gonna be different contexts with different words in them. So, you know, it's gonna be a very loose model, but nevertheless, we wanna capture the fact that, you know, withdrawal is much more likely, um, to occur near the word bank than something like football. That's, you know, basically what our goal is. Okay, um, yes, so we want to maximize this by minimizing this, which means we then want to do some calculus to work this out. So, what we're then gonna do is, we're going to say, well, these parameters are our word vectors, and we're gonna sort of want to move these word vectors, um, to, um, work things out as to how to, um, walk downhill. So, the case that I'm going to do now is gonna look at the parameters of this center word v_c and work out how to do things with respect to it. Um, now, that's not the only thing that you wanna do; you also want to work out the slope with respect to the u_o vector. Um, but I'm not gonna do that, because time in class is going to run out. So, it'd be really good if you did that one at home, and then you'd feel much more competent. Right, so then, um, what I'm wanting to do is work out the partial derivative with respect to my v_c vector representation of this quantity that we were just looking at, which is, um, the quantity in here, um, where we're taking the log of that quantity. Right, the log of the exp of u_o transpose v_c over the sum of w equals one to V of the exp of u_w transpose v_c. Okay, so now we have a log of a division, so that's easy to rewrite, um, so that we have the partial derivative of the log of the numerator, minus, and I can distribute the partial derivative, so I can have minus the partial derivative, um, of the denominator, um, which is the log of this thing.
[NOISE] Okay. Um, so this is sort of what was the numerator, and this is what was the denominator. Okay. So, um, the part that was the numerator is really easy. In fact, maybe I can fit it in here. Um, so log and exp are just inverses of each other, so they cancel out. So, we've got the partial derivative of u_o transpose v_c. Okay, so at this point I should, um, just, um, remind people, right, that this v_c here is a vector of- um, it's still a vector, right, because we had a 100-dimensional representation of a word. Um, so this is doing multivariate calculus. Um, so you know, if you at all, um, remember any of this stuff, you can say, "Ha, this is trivial." The answer to that is u_o, and you are done, um, and that's great. But you know, if you're, um, feeling, um, not so good on all of this stuff, um, and you wanna sort of, um, cheat a little on the side and try and work out what it is, um, you can sort of say, "Well, let me, um, work out the partial derivative with respect to one element of this vector, like the first element of this vector." Well, what I've actually got here for this dot product is, I have u_o 1 times v_c 1, plus u_o 2 times v_c 2, plus dot, dot, dot, plus u_o 100 times v_c 100, right, and I'm finding the partial derivative of this with respect to v_c 1, and you hopefully remember that much calculus from high school: none of these other terms involve v_c 1. So, the only thing that's left is this u_o 1, and that's what I've got there for this dimension, so this particular parameter. But I don't only want to do the first component of the v_c vector, I also want to do the second component of the v_c vector, et cetera, which means I'm going to end up with all of them turning up in precisely one of these things. Um, and so the end result is I get the vector u_o. Okay. Um, but you know, if you're sort of getting confused and your brain is falling apart, I think it can be sort of kind of useful to reduce things to sort of, um, single-dimensional calculus and actually sort of play out what's actually happening. Um, anyway, this part was easy. For the numerator, we get, um, u_o. Um, so things aren't quite so nice when we do the denominator. So we now want to have this, um, partial derivative with respect to v_c of the log of the sum of w equals one to V of the exp of u_w transpose v_c. Okay. So, now at this point, it's not quite so pretty. We've got this log-sum-exp combination that you see a lot, and so at this point you have to remember that there was, you know, the chain rule. Okay. So, what we can say is, here's, you know, our function f, and here is the body of the function, and so what we want to do is, um, do it in two stages, um, so that at the end of the day, we've got this v_c at the end. So, we have sort of some function here that's ultimately a function of v_c, and so we're gonna do the chain rule. The chain rule is, we first take the derivative of this outside thing, putting in this body, and then we remember that the derivative of log is one on x. So, we have one over the sum of w equals one to V of the exp of u_w transpose v_c, and then we need to multiply that by the derivative of the inside part, which is, um, what we have here. Okay, times the derivative of the inside part, with the important reminder that you need to do a change of variables, and for the inside part use a different variable that you're summing over. Okay. So, now we're trying to find the derivative of a sum of exps. The first thing that we can do is very easy: we can move the derivative inside the sum.
So, we can rewrite that as the sum over x = 1 to V of the partial derivative, with respect to v_c, of exp(u_x^T v_c). That's a little bit of progress. At that point we have to do the chain rule again: here is our function, exp, and here is the thing inside it, which is again some function of v_c. The derivative of exp is exp, so we're going to have the sum over x = 1 to V of exp(u_x^T v_c), and then we multiply that by the partial derivative, with respect to v_c, of the inside, u_x^T v_c. Well, we saw that one before: the derivative of that is u_x, because we're now summing over a different variable x. So that comes out as u_x, and we have the sum over x = 1 to V of exp(u_x^T v_c) times u_x. Okay, so by doing the chain rule twice, we've got that. Now, if we put it together, the derivative with respect to v_c of the whole thing, the log of the probability of o given c: for the numerator it was just u_o, and then we're subtracting this fraction, where in its numerator we have the sum over x = 1 to V of exp(u_x^T v_c) times u_x, and in its denominator we have the sum over w = 1 to V of exp(u_w^T v_c). Okay, so we get that, and then we can rearrange it a little: we can pull the sum right out front, write it as one big sum over x = 1 to V, and take that u_x out at the end. If we do that, an interesting thing has happened, because look right here: we've rediscovered exactly the same form that we used as our probability distribution for predicting words. This is now simply the probability of x given c according to our model. So we can rewrite this and say that what we're getting is u_o minus the sum over x = 1 to V of the probability of x given c, times u_x. This has a kind of interesting meaning if you think about it. It's giving us our slope in this multi-dimensional space, and the way we get that slope is: we take the observed representation of the context word, and we subtract from that what our model thinks the context should look like. What does the model think the context should look like? This part here is formally an expectation: you're finding the weighted average of the representations of each word, weighted by the probability of that word under the current model. So this is the expected context word according to our current model, and we're taking the difference between the expected context word and the actual context word that showed up. That difference turns out to exactly give us the slope: the direction in which we should walk, changing the word's representation, in order to improve our model's ability to predict.
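As a sanity check on the result just derived (the gradient is the observed outside vector minus the model's expected outside vector), here is a hedged NumPy sketch that compares the analytic gradient against a finite-difference estimate; all names are illustrative:

```python
import numpy as np

def grad_log_prob_vc(U, v_c, o):
    """d/dv_c log p(o | c) = u_o - sum_x p(x | c) u_x  ('observed minus expected')."""
    scores = U @ v_c
    scores -= scores.max()
    p = np.exp(scores)
    p /= p.sum()                  # p(x | c) for every vocabulary word x
    return U[o] - p @ U           # u_o minus the probability-weighted average of u_x

def log_prob(U, v_c, o):
    scores = U @ v_c
    scores -= scores.max()
    e = np.exp(scores)
    return np.log(e[o] / e.sum())

rng = np.random.default_rng(0)
U, v_c, o, eps = rng.normal(size=(5, 3)), rng.normal(size=3), 2, 1e-6
v_hi, v_lo = v_c.copy(), v_c.copy()
v_hi[0] += eps
v_lo[0] -= eps
print(grad_log_prob_vc(U, v_c, o)[0])                             # analytic
print((log_prob(U, v_hi, o) - log_prob(U, v_lo, o)) / (2 * eps))  # numeric
```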
Okay, so, assignment two: it'll be a great exercise for you guys to try and do that for the context word vectors as well (I did the center vector; you do the u_o vectors) and show that you can do the same kind of math and have it work out. Since I've got a few minutes left at the end, what I just wanted to show you, if I can get all of this to work right... okay, let me find my... okay. I just wanted to show you a quick example. For the first assignment, again, it's an IPython notebook, so if you're all set up you can do "jupyter notebook" and open one up. Here's my little notebook I'm going to show you, and the trick will be to make this big enough that people can see it. Is that readable? [LAUGHTER] Okay. So, NumPy is the do-math package in Python; you'll want to know about that if you don't. Matplotlib is one of the most basic graphing packages; if you don't know about it, you're going to want to. This line is an IPython/Jupyter special that lets you have interactive matplotlib inside the notebook, and if you want to get fancy you can play with your graphic styles; there's that. Scikit-learn is a general machine learning package. Gensim isn't a deep learning package; Gensim is a word-similarity package that started off with methods like Latent Dirichlet Allocation for modeling word similarities, and it has since grown into a good package for word vectors as well. It's quite often used for word vectors and word similarities, and it's efficient for doing things at large scale. Now, I haven't yet told you about (we will next time) our own homegrown form of word vectors, the GloVe word vectors. I'm using them not because it really matters for what I'm showing, but because these vectors are conveniently small: it turns out that the vectors Facebook and Google distribute have an extremely large vocabulary and are extremely high-dimensional, so it would take me just too long to load them in the last five minutes of this class, whereas, conveniently, in our Stanford vectors we have 100-dimensional vectors, and 50-dimensional vectors, which are kind of good for doing small things on a laptop, frankly. Gensim doesn't natively support the GloVe file format, but it provides a utility that converts the GloVe file format to the word2vec file format, so I've done that, and then I've loaded a pre-trained model of word vectors. This is what they call a KeyedVectors object, and the KeyedVectors object is nothing fancy: you have words like "potato" and there's a vector that hangs off each one, so it's really just a big dictionary with a vector for each key. This is a trained model, where someone used the kind of algorithm we looked at and fiddled the word vectors over billions of updates. And once we have one, we can ask questions like: what is the most similar word to some other word? So we could ask, what are the most similar words to "obama"? And we get back Barack, Bush, Clinton, McCain, Gore, Hillary, Dole, Martin, Henry. That actually seems kind of interesting. These vectors are from a few years ago, so we don't have post-Obama stuff.
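Here is a hedged sketch of the notebook steps just described, using the gensim 3.x API from the course era; the GloVe file name is an assumption, so substitute whatever file you actually downloaded:

```python
from gensim.scripts.glove2word2vec import glove2word2vec
from gensim.models import KeyedVectors

# Convert GloVe's file format to word2vec's, then load the vectors.
# 'glove.6B.100d.txt' is an assumed file name, not something from the lecture.
glove2word2vec('glove.6B.100d.txt', 'glove.6B.100d.w2v.txt')
model = KeyedVectors.load_word2vec_format('glove.6B.100d.w2v.txt')

print(model['potato'][:10])          # a word is just a key with a vector hanging off it
print(model.most_similar('obama'))   # nearest neighbours by cosine similarity
```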
If you put in another word, something like "banana," we get coconut, mango, bananas, potato, pineapple: kind of tropical food. You can also ask for what is dissimilar to a word, though by itself "dissimilar" isn't very useful. If I ask for most similar with negative equals "banana", well, I'm not sure what your concept of the word most dissimilar to "banana" is, but by itself you don't get anything useful out of this: you just get these weird, really rare words, [LAUGHTER] which definitely weren't the ones you were thinking of. But it turns out you can do something really useful with this "negative" idea, and it was one of the most highly celebrated results when word vectors first started off: the idea that there are actually dimensions of meaning in this space. The most celebrated example was this: we could start with the word "king," subtract from it the meaning of "man," add to it the meaning of "woman," and then ask which word in our vector space is most similar in meaning to the result. That would be a way of doing analogies: man is to king as woman is to what? The way we do that is to say we want to be similar to "king" and "woman," because they're both positives, and far away from "man." We can do that manually; here it is done manually, most similar with positive "woman," "king" and negative "man," and we run it, and lo and behold it produces "queen." To make that a little easier, I defined this analogy predicate so I can run other ones, like: Japan is to Japanese as Austria is to Austrian. And I think it's fair to say that when people first saw that you could run this simple piece of math and learn meanings of words, it actually just sort of blew people's minds how effective it was. There's no smoke and mirrors here, right? It's not that I have a special list in my Python where there's a dictionary I'm looking up for Austria and Austrian and things like that. Somehow these vector representations are such that they actually encode these semantic relationships. You can try different ones; it's not that only this one works: I can put in France and it says French; I can put in Germany, it says German; I can put in Australia, not Austria, and it says Australian. Somehow, with these vector representations of words, for ideas like understanding the relationships between words, you're just doing vector-space manipulation on these 100-dimensional vectors of numbers, and it actually knows about them: not only the similarities of word meanings, but different semantic relationships between words, like country names and their peoples. And that's actually pretty amazing. It's sort of surprising that running such a dumb algorithm on vectors of numbers could capture so well the meaning of words, and that became the foundation of a lot of modern distributed neural representations of words. Okay, I'll stop there. Thanks a lot, guys, and see you on Thursday. [NOISE]
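As a recap of the analogy demo above, a sketch of the kind of analogy predicate described, reusing the model object from the previous sketch (note the GloVe 6B vocabulary is lowercased):

```python
def analogy(x1, x2, y1):
    """x1 is to x2 as y1 is to ? -- e.g. man : king :: woman : queen."""
    return model.most_similar(positive=[y1, x2], negative=[x1])[0][0]

print(analogy('man', 'king', 'woman'))          # queen
print(analogy('japan', 'japanese', 'austria'))  # austrian (france -> french, etc.)
```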
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2020_Low_Resource_Machine_Translation.txt
For today, I'm really delighted to introduce our third guest speaker, Marc'Aurelio Ranzato. He's originally from Italy, then worked at NYU with Yann LeCun, and then did a postdoc with Geoffrey Hinton, so he's a very dyed-in-the-wool deep learning researcher. A lot of his original work was in areas like feature learning and vision, but over the last few years he's really turned his interests to natural language processing, and in particular he's worked a huge amount on machine translation in general, and on machine translation for languages for which fewer resources are available. I saw a talk of his about six months ago on this topic, and through him and his team at Facebook, they've really got a lot of exciting new work on ways to bring neural machine translation up to the next level. So I hope this will be a really great opportunity for everyone to see some of the latest and most exciting techniques in neural machine translation, a level beyond what we talked about and what you all did on assignments 4 and 5 of the class. So, take it away, Marc'Aurelio. Okay, thank you so much, Chris, for inviting me. Let me just put my face up. I'm here. [LAUGHTER] Hi, everybody. I'm going to disable the video now so you can focus on the presentation. Let me share; you should be able to see my presentation now. Okay. So I'm very excited to tell you a little bit about low-resource machine translation, and let's start by revisiting the machine translation problem. Let's say we want to translate between English and French. We start with a big training set, where we have a collection of sentences in English with their corresponding translations in French: this is what we call a parallel dataset. In particular, the sentences in English we call the source sentences, and the corresponding sentences in French are what we call the target sentences. Now, the learning problem is: for a given sentence in English, you want to predict the corresponding sentence in French. The way we do that is by minimizing the cross-entropy loss, which maximizes the log-probability of the reference human translation given the input source sentence. We do this by stochastic gradient descent, using as the architecture a sequence-to-sequence model with attention, which, as far as I know, you studied and had a homework on a few weeks ago. Then, after you train this, at test time you are given a novel English sentence and you want to produce the corresponding translation, and in order to do that we usually employ a heuristic search method like beam search, which tries to find, approximately, the target sentence that maximizes the log-probability given the source sentence. So this is, at a high level, how machine translation works. Now let's think about the assumptions we have been making through this discussion. The first assumption is that we are working with two fairly related languages, like English and French. The second assumption is that we have at our disposal a large dataset of parallel sentences, because here we are essentially doing supervised learning: it is a beautiful example of end-to-end supervised learning that relies on the availability of a large parallel dataset.
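In symbols, a compact restatement of the training and decoding objectives just described, written in the talk's notation:

```latex
% Supervised NMT: minimize cross-entropy over the parallel corpus D,
% i.e. maximize the log-probability of the reference translation y given x.
\mathcal{L}(\theta)
  = -\sum_{(x,y)\in D} \log p_\theta(y \mid x)
  = -\sum_{(x,y)\in D} \sum_{t=1}^{|y|} \log p_\theta(y_t \mid y_{<t},\, x)

% At test time, beam search approximates the highest-scoring translation:
\hat{y} \approx \arg\max_{y} \; \log p_\theta(y \mid x)
```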
But in the world there are more than 6,000 languages, and needless to say, most of them don't belong to the European family on which much of the recent research on machine translation has been focusing. Even English is spoken as a native language by less than 5% of the world population. So if you count how many people speak each language and look at that histogram, it's a very heavy-tailed distribution: even if you take the top ten spoken languages, you find that they account for less than 50% of the people in the world. Now, if you look at the very far right of the tail, those are languages with very few speakers, for which there is essentially no digitized material to train anything on; for those, I think it's almost hopeless, I would say. But in the middle of this tail, we have a lot of languages for which there is some digital data and for which we don't have good ways to translate nowadays. If you think about the major providers, Google, Yandex, Baidu, Facebook, and so on and so forth, they provide translation for the top 100 languages, so we are still only covering the head of this heavy-tailed distribution. If we were able to improve machine translation for the languages in the middle, it would be very impactful, right? So what happens as we walk down this tail? What happens is that the amount of parallel data decreases, and that correlates very strongly with the quality of the automatic machine translation systems we have; in particular, as you can see here, at some point there is actually a drastic drop in the accuracy of your machine translation system. So perhaps the initial picture we had in mind is a little different. Let's take a fairly low-resource language like Nepali, which is the language spoken in Nepal, a lovely country northeast of India, with more than 25 million people, so it's not a handful of people. First of all, the amount of training data is not as much as English-French; it's much, much less than that. And here, let's use a different visual representation: a few rectangles whose color corresponds to the language, so the blue rectangle is English data and the red rectangle is Nepali data. Now, in practice, the parallel dataset is not just such a monolithic thing, because some part originates in English and some part originates in Nepali. Let's represent the Nepali translations of English data with an empty rectangle, where the color corresponds to the language, and whether you fill it or not depends on whether it is a human translation or data originating in that language. So in this case, we take data that originates in English and we translate it into Nepali, and that's the empty red rectangle, and the same for when you go from Nepali to English. Now, in general, the data that originates in English and the data that originates in Nepali may come from different domains, so here on the y-axis you have the domain. In this example, which I totally made up, but which is pretty indicative of what happens in practice, the English sentences may come from, let's say, the Bible, so the Nepali here are translations from the Bible, and the Nepali sentences may come from parliamentary data, okay?
So you may agree with me that translating a novel sentence from the Bible is not a super interesting task, because the Bible is a pretty static dataset, right? So maybe we want to translate news data. But, in practice, we don't have any parallel data in the news domain. What we really want to do, in the end, is translate sentences from this test set, which is English news, into Nepali, but all we have in the news domain is, at most, monolingual data in English and in Nepali. These are English sentences that are not aligned at all with the Nepali sentences over here; it just happens to be data you got from news sources. Okay? So this is a pretty complicated learning setting, because you have a little bit of parallel data that is in a different domain from the test set, and all you have in the domain of interest is monolingual data. In fact, you may also have some other parallel data, but in another language, let's say Hindi, which is in the same family as Nepali, but maybe that parallel data is in a different domain again, let's say books, and perhaps you also have monolingual data in Hindi that is also in the book domain. So in fact, [LAUGHTER] in practice, what you'll find is that you may have a lot of languages here from which you could learn, and a lot of domains, and all you want to do in the end is translate news data in English into Nepali. But you don't have any supervision for that: no labeled data, no parallel data for that task. All you have is a bunch of data in different domains and in different languages, and the question is: how can you leverage all this data in order to perform your original translation task? So this is a Mondrian-like learning setting, which is pretty tricky, and this is going to be the topic of this lecture. Now, there is not a very clear definition of what low-resource machine translation is, but, loosely speaking, a language pair can be considered low-resource when the number of in-domain parallel sentences is less than 10,000, as an order of magnitude. And this is very little, particularly if you think that modern neural machine translation systems easily have hundreds of millions of parameters. So there are several challenges: challenges that pertain to the data, and challenges that pertain to the model design. In terms of the data, it is very hard to figure out where to get training data in a domain similar to the domain you are eventually interested in translating; if that doesn't exist, how to get data in similar languages or other domains; and even how to get data to evaluate your system on. On the modeling side, there is of course the question of how to learn with so little supervision, so little direct supervision at the very least, and how to operate in this framework where you have so many languages and so many domains. So, as Chris mentioned at the very beginning, my background is not really NMT; I've always been interested in learning with less supervision, and I think working on low-resource machine translation is, at least personally, a very unique opportunity.
It's a very rare case in which my research agenda is aligned with an application, [LAUGHTER] because in low-resource machine translation you don't have much labeled data, and you need to make the best use of auxiliary tasks and auxiliary data in order to perform well, and that is a general problem. At the same time, machine translation is a real application, something which, if we improve it, gives us a real chance to improve a lot of applications and the lives of a lot of people. So, that concludes my introduction to low-resource machine translation and the issues we face when working on these languages. Before going on, let me just pause for a second on the outline of this talk, which goes around three pillars that in a way define the cycle of research. The first pillar is data: I'm going to review how we can get data, in particular for evaluation, since data is the prerequisite for doing anything in our lives as machine learning practitioners, right? Then I'm going to move on to modeling, describing some algorithms for learning on low-resource languages. And finally, I will conclude with some work on analyzing what a model does when we train on low-resource languages. In practice, throughout my work, I keep going around this circle: as I figure out the issues we have with the data and with the models, I may come back with a dataset that better fits the kinds of problems I'm interested in, and then go back to the modeling side to improve the models, and so on and so forth, okay? Here I'm giving some references for the works I'm presenting, not all of them. And just to be clear, this is not meant to be a chronological survey, so these are not necessarily the works that introduced a certain idea; they are just, I would say, the most accessible entry points on each topic, and you can then go to the related-work sections to figure out whether there was some seminal paper that led to that line of research. And of course there is quite a bit of presenter bias, because most of these works are co-authored by me, so be mindful of that. That said, do you have any questions so far? Uh, I have a quick question: I see people posting about phrase-based and neural unsupervised MT; I was wondering if you could talk about the different approaches to unsupervised learning, and also whether algorithms like GloVe and word2vec are possible for low-resource languages. Yeah, so, Yuri, in this lecture I'm going to focus more on the neural side; actually, I'm not even going into the details of the architectures, I'm mostly talking about algorithms, and these algorithms are applicable both to neural machine translation systems and to statistical machine translation systems. When I get to that part, I can address your question a little and tell you about the differences between the two. In terms of methods for learning word embeddings and sentence embeddings, I'm going to touch very briefly on that: at the end of the lecture, I'll refer to some recent work on filtering where people use sentence embedding methods. It's not GloVe, but it's something similar, in a way.
In practice, word embeddings are, I would say, kind of a prerequisite for machine translation, because if you can align word embeddings, you learn a dictionary, and that's a primitive way to do machine translation. So oftentimes we look at those as a good sanity check, or as a simplified machine translation task, whenever you have a reference dictionary against which you can check the accuracy of your alignment. So let me get back to you when we talk about that paper, okay? Let's talk about data, then. Let's go back to our English-Nepali translation task. There is a resource called OPUS, which hosts a very nice collection of datasets, all publicly available, in lots of languages. When you go to the OPUS website, you find that for English-Nepali there are actually 1 million parallel sentences. So maybe I lied to you when I told you this is a low-resource language. But if you actually look at what those corpora are, you realize that pretty much half a million of these sentences come from JW300, which is a religious magazine; then you have 60,000 sentences from the Bible; and the rest come from GNOME, KDE, and Ubuntu, so these are computer-related materials, right? So again, unless you're interested in translating novel sentences from the Bible, this is not super useful, I would say. One thing to note is that all this data originates from English; we have nothing that originates from Nepali, first of all. And second of all, if you are interested in, let's say, translating Wikipedia, all you have is monolingual Wikipedia data in English and in Nepali, and the Nepali side is not even very much. Then, of course, you can add some monolingual data in another domain, like Common Crawl, which is just a dump of the Internet. But again, translating between English and Nepali using publicly available data is going to be a challenge, because you don't have any in-domain parallel dataset, okay? All you have is, at most, some in-domain monolingual data. But there is an even bigger problem, which is that there is no test data: we don't have reference translations in Nepali with which to measure the quality of our machine translation system. And this is a big problem, because if you don't have a high-quality test set, or don't have one at all, it's very hard to compare models, very hard to do model selection and compare algorithms, and our field is crippled; we need strong evaluation benchmarks. This motivated a project called FLoRes, which stands for the Facebook Low Resource evaluation benchmark for machine translation, where we took Wikipedia sentences in English and translated them into Nepali and Sinhala, and then took sentences from the Nepali Wikipedia and the Sinhala Wikipedia and translated them into English. Okay. So you may say this is a little bit boring, because what's hard about it? [LAUGHTER] Tell me about tricks for better modeling. But actually, you'd be surprised: this data collection process was harder, and also more interesting, than we thought. It is hard because there are very few fluent professional translators in these languages, and these are not even super-low-resource languages, right?
So, we dealt with a translation agency, and typically there are not enough people for you to do the kind of A/B testing where you test one person's translation against another's. That's number one. Number two: in general, it's very hard to automatically assess the quality of the translations, because we don't have enough parallel data to train a machine translation system, right? So we need to rely on methods other than a well-trained machine translation system to assess quality. We built a pipeline where we would send the sentences to the translators; once the translations came back, we would run several checks, like fluency checks using a language model; we would check for transliteration, to make sure a sentence is not "translated" by simply transliterating it; we would check that the language is the desired one; and so we had a lot of checks like that. And of course there are thresholds here that you need to set somehow. Then, the sentences that failed this step we would send back for re-translation, and after a few iterations of this, we would eventually also do a human evaluation. The sentences in the final evaluation benchmark are those that passed all the automatic and human assessment checks. Now, it turns out there is not even very good literature that tells you how to collect data, and in particular, for low-resource languages, there are a lot of issues related to the quality of the translations. So this was a process we thought would take us a couple of months, but instead it took us more than six months. That said, eventually we got a validation set, a test set, and also a hidden test set, because we used this data for a WMT competition, and for that they needed a test set that was not available to participants, to make sure people were not cross-validating on the test set. Here are some example sentences: this is a sentence from the Sinhala Wikipedia translated into English, a couple of sentences here, and this is from the English Wikipedia translated into Sinhala. I don't know how many people in the audience come from Sri Lanka who could appreciate [LAUGHTER] this set. But one interesting thing you can already see, although this is totally anecdotal, because it's just a couple of sentences for Sinhala and English, is that the topic distribution is different: here you have things that would be a little unlikely in the English Wikipedia, and the same holds for Nepali-English and English-Nepali. So, we have a GitHub repository where we host the data, and also baseline models that we trained on publicly available data and then tested on this FLoRes benchmark. And last week we released another couple of language pairs, English-Pashto and English-Khmer, and we are adding more and more languages in the coming months. So, the point of this section is just to say that data is often more important than designing a model, because without data, in particular without a good evaluation benchmark, it's essentially impossible to do research in this area. And collecting data is not trivial: the process you should use is not well established, and in practice it is hard to do.
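As an illustration of the fluency check in that pipeline, here is a toy, self-contained sketch: a count-based unigram model (a stand-in for the n-gram model the talk mentions shortly, in the Q&A) scoring each sentence by its average per-word log-probability against a hand-set threshold. All the numbers and names here are made up for illustration:

```python
import math
from collections import Counter

def train_unigram_lm(corpus):
    """Toy count-based LM with add-one smoothing; returns log p(word)."""
    counts = Counter(w for sent in corpus for w in sent.split())
    total, vocab = sum(counts.values()), len(counts)
    return lambda w: math.log((counts[w] + 1) / (total + vocab))

def fluency_score(logprob, sentence):
    """Average per-word log-probability; sentences scoring too low get reworked."""
    words = sentence.split()
    return sum(logprob(w) for w in words) / len(words)

logprob = train_unigram_lm(["the cat sat on the mat", "the dog sat on the rug"])
threshold = -2.5  # in the real pipeline this threshold has to be set somehow
for s in ["the cat sat on the rug", "zxq qqv blorp"]:
    print(s, round(fluency_score(logprob, s), 2), fluency_score(logprob, s) >= threshold)
```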
Another thing to consider, sorry, is to look at the data: look at the data when you collect it, and also before you start training your model, because you may realize there are issues with the quality of the translations if you speak the language (oftentimes English is on one side), or you may discover biases, or you may discover interesting things. So always look at the data, as opposed to just applying your method in a black-box way. That concludes my little discussion of the data part. Are there any questions on this? Why don't people talk about building a language model for low-resource languages? Yeah, yeah. So in this case, what we did, and actually I don't remember exactly, is that for Nepali I think we had to concatenate the Wikipedia data and the Common Crawl data, because the Wikipedia data was just too small, and we simply trained a count-based n-gram model. The count-based n-gram model gives you, I don't know if you studied this, the probability of one word given some fixed window of context. Then, for a given sentence, you compute a score for every word, and the score of the sentence is simply the average log-probability across all the words in the sentence; that gives you a score, and we would simply put a threshold on it. So all the sentences that scored too low, that were deemed not fluent enough, would be sent back for rework. But of course, whenever you have named entities and the like, you know, it's not super reliable, and if you go to languages that are even lower-resource than Sinhala, you don't even really have in-domain data (Wikipedia doesn't exist in all languages), and then it becomes even harder. So now that we are scaling this up, we are looking at neural language models that are trained in a multilingual way and then fine-tuned on a small in-domain monolingual dataset, if one is available. But yeah, even this step is not particularly obvious to do. Yeah, sure. So, thank you for this amazing result, but I just want to comment, because I've noticed that Wikipedia actually has different content depending on which language you choose. For example, it will have a very detailed description of some topic, and in other languages, even a really commonly used language like Chinese, it will have completely different or simplified content. I'm pretty sure this also happens with rarely used languages, so I generally think that Wikipedia might not be a very direct reference for the translation. Yeah, it's an excellent point, and this is something I'm going to discuss more in the third part of the lecture. In a way, this is the translation problem: we need to accept the fact that content that originates in a certain language may have a different topic distribution than content that originates in another language, and what you want to translate is really content that originates in the source language, right? So you need to live with it; that's what it is.
[LAUGHTER] So, oftentimes in the public benchmarks in the literature, you find that people assume the corpora are comparable: everything that originates in English and everything that originates in, let's say, Nepali essentially comes from the same kinds of sources, so it's news, and it's all news talking about similar things. But in practice this is not true. It's not true for Wikipedia, as you correctly said, and it's also not true for news, because local news in Nepal and local news over here are quite different, right? So this is a general problem, and it has implications for the methods we are going to use, as we will discuss later. Other questions? I'm not sure if I was clear; it's really hard to [LAUGHTER] speak without seeing anyone, [LAUGHTER] without feedback. [LAUGHTER] Please let me know if anything is not clear. Okay, let's talk about modeling, and this is where we are going to spend most of our time. Remember that we have this funky chart with domains and languages, and it's a pretty complicated learning setting; here, for simplicity, we are going to focus just on the English and Nepali languages. We start with the simplest setting ever, which is supervised learning, assuming that all data is in the same domain: perhaps you have a small parallel training set, and the test set is in the same domain as that training set, okay? We denote by x the source sentence, by y the target sentence, and by D the parallel dataset that collects all these sentence pairs. So this is the typical empirical risk minimization framework, whereby you do supervised learning: in this case, you minimize the cross-entropy loss, and you want to maximize the probability of the target sentence given the source sentence. A way to visualize this is to say that x, my English sentence, goes into my encoder-decoder NMT system, which produces a prediction, and then we have a loss that measures the discrepancy between the prediction and the human reference: you took the sentence x, you asked your translator, and they gave you the human reference, and the cross-entropy loss measures the discrepancy between the model prediction and that reference. Notice that here I'm denoting model components with boxes: [LAUGHTER] in this case, the blue box is the encoder, which processes English sentences, and the red box is the decoder, which operates in Nepali. I just wanted to add one more thing, which is that if you don't have a lot of parallel data, you need to regularize. You can use weight decay, which is pretty standard, where you minimize the L2 norm of the parameters; but there are also other methods that I think you may have seen in a machine learning class, like dropout, where you set hidden units in your encoder-decoder to zero at random, or label smoothing. In label smoothing, in your cross-entropy loss (this loss is over the whole sequence, which you factorize over individual words by the product rule, so at every time step you want to predict the next word), instead of assigning 100% of the probability to the correct next word, you assign, let's say, 90% of the probability, and the remaining 10% you distribute evenly across all the remaining words, so that the model is not too overly confident. The combination of these things is usually a good way to regularize the system, okay?
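A minimal NumPy sketch of that smoothed loss for a single time step, assuming, as in the talk's example, that the leftover mass is spread evenly over the other words:

```python
import numpy as np

def label_smoothed_loss(logits, target, eps=0.1):
    """Cross-entropy against a smoothed target: 1 - eps on the correct next
    word, eps spread uniformly over the remaining V - 1 words."""
    V = logits.shape[-1]
    # Numerically stable log-softmax: logits minus log-sum-exp.
    m = logits.max()
    log_probs = logits - (m + np.log(np.exp(logits - m).sum()))
    smooth = np.full(V, eps / (V - 1))
    smooth[target] = 1.0 - eps
    return -np.sum(smooth * log_probs)

print(label_smoothed_loss(np.array([2.0, 0.5, -1.0, 0.0]), target=0))
```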
So that's the simplest setting. Now let's see what happens when we also have some source-side monolingual data. Here we have an additional dataset that has only sentences in the source language, English: in addition to D, we now have M_s, the monolingual data on the source side, which is a bunch of x's, and typically M is much greater than N, right? A typical way to use this data is to model the marginal distribution of x. There are many ways to do that; one that has proven pretty effective in machine translation is denoising autoencoding. The idea is that you have something similar to what we had before, except that now the input is taken from this monolingual dataset, okay? You add noise to it (I'm going to describe the noise in a second), and then the job of the encoder-decoder is simply to denoise the noisy input, with the cross-entropy loss measuring the discrepancy between the prediction and the actual clean input. But notice that the decoder here is not that red decoder, because this decoder operates in English; the encoder, though, is the same one you saw before. So the loss function here is very similar to before, except that the target is the clean input x and the input is a noisified version of x. In this case, we are not predicting something in Nepali but something in English. That is, if you want, a limitation of this approach, but it's useful anyway, because you are doing some good modeling of the input sentences, and you're going to train the encoder parameters, which are shared with your supervised system. The encoder is shared between the translation task on parallel data and the denoising autoencoder task, so essentially you have one encoder and two decoders: one that operates in Nepali, one that operates in English. In terms of noise, there are essentially two types of noise that we have been using in our work; others are possible, but in the simplest case, you can drop words or swap words. Suppose the input sentence is "The cat sat on the mat." If you swap words, you may provide as input "The cat the on sat mat," and here the encoder-decoder needs to understand a little of the syntax, the grammatical rules, in order to reorder. If you drop words, say you drop the last word, "The cat sat on the," then the model needs to understand a little of the semantics, because it needs to assign a higher probability to "mat" now, right? So you can see that there are a couple of hyperparameters here. Actually, first: there are several ways to use denoising autoencoding. You can use it as a way to pre-train the encoder, or you can use it as an auxiliary loss when you do supervised learning, so you can have the supervised term plus lambda times this term, okay? Both ways are fine.
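For concreteness, a sketch of the two noise types just described, word dropout plus local swaps, with the noise levels as the hyperparameters; the parameter names are illustrative:

```python
import random

def add_noise(words, p_drop=0.1, k_swap=3):
    """Drop each word with probability p_drop, then shuffle locally so that no
    word moves more than roughly k_swap positions from where it started."""
    kept = [w for w in words if random.random() > p_drop]
    keys = [i + random.uniform(0, k_swap) for i in range(len(kept))]
    return [w for _, w in sorted(zip(keys, kept))]

random.seed(0)
print(add_noise("the cat sat on the mat".split()))
```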
There is a very critical hyperparameter here, which is the level of noise. If you don't have any noise, or the noise level is too low, then this task is trivial: because of the attention, you can simply copy the input, and the encoder and the decoder don't need to learn anything. If the noise level is too high, then you destroy the input, so the encoder is not useful and you just do language modeling using the decoder. But remember that this decoder is then not used for translation, because what you use in the machine translation system is the encoder module, right? Okay, so there are other ways to use source-side monolingual data. In addition to denoising autoencoding, you could also do self-training, which is a method that comes from the '90s, if not earlier, and the idea is very simple. Again, you take a sentence from your source-side monolingual dataset and you add noise to it, and then you have an encoder-decoder that this time tries to translate from this noisy input, okay? And now, what's the reference? The reference is given by a stale version of your machine translation system, where the reference is produced by, let's say, beam search. So the cross-entropy loss is then going to measure the discrepancy between your prediction and the prediction from a stale version of your system. The reason this works is that when you do beam search, you typically produce better-quality outputs, so when you train this encoder-decoder by cross-entropy, you are, in effect, learning the decoding process, and that's something good for you. In addition, when you train, you inject noise, and the noise is regularizing: it's smoothing out your prediction space, so if you're predicting one sentence correctly, then nearby sentences (by nearby I mean sentences with a good phrase overlap with the current one) are also more likely to be predicted correctly. We have a paper where we analyze these aspects a little. The algorithm is very simple: first, you train your machine translation system on the parallel data, and then you repeat the following process. You decode your monolingual dataset using your current machine translation system; you make a new parallel dataset out of the sentences from your monolingual dataset paired with the translations from your current system; and then you retrain the model, this p(y|x), on the union of your original parallel data and this auxiliary dataset. Here you have two hyperparameters: one is the noise level, and the other is the weight on this auxiliary dataset. So this is the self-training loss, okay? And that concludes how we can use source-side monolingual data.
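Putting the self-training recipe into pseudocode: here train() and translate() are hypothetical placeholders for your training and beam-search routines, not a real API, and add_noise is the sketch from above:

```python
def self_training(parallel, mono_src, rounds=2, noise=add_noise):
    """Sketch of the self-training loop described above."""
    model = train(parallel)
    for _ in range(rounds):
        # Decode source-side monolingual data with the current (stale) model:
        # clean inputs (plus injected noise), machine-generated targets.
        synthetic = [(noise(x), translate(model, x)) for x in mono_src]
        # Retrain on the union; in practice the synthetic part gets its own weight.
        model = train(parallel + synthetic)
    return model
```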
Let me now say a word about how we can use target-side monolingual data. You could use it to train a language model and then train the machine translation system in the residual space of that language model, but it turns out there is a much more effective way to leverage this data, and that's called back-translation. At a high level, it works as follows: you take a sentence from your target-side monolingual dataset, y_t here, and on the parallel dataset you also train a backward machine translation system that goes from Nepali to English, okay? So now you have a red encoder that takes Nepali and the blue decoder that operates in the English space, and you map this Nepali sentence into English. Now, this may not be a correct translation, but it's a noisy input that you feed to the encoder-decoder you actually want to train, right? So the input is noisy, but the target here is clean, because it comes from the original target-side monolingual dataset. This is a very powerful algorithm because, unlike self-training, here the targets are clean and the input is a little noisy, and that's usually much better than having clean inputs but noisy targets, because the targets affect essentially all the signal you backpropagate through the NMT system. You can also see back-translation as a way to do data augmentation, because you produce noisy versions of inputs for a given target, a little like in vision, where (I guess this is not the right audience for the analogy, but it should work) you would do scaling, rotation, and different croppings, and that's a little similar to what we're doing here. So the algorithm, again: you train backward and forward machine translation systems on the parallel data; then you use your backward model to decode the target-side monolingual dataset and produce an auxiliary parallel dataset; and then you concatenate the two datasets, the original parallel dataset and the auxiliary one, to train the new forward model, okay? Of course, you can combine self-training and back-translation. If you have both source-side and target-side monolingual datasets, you can do the following. You use the parallel data to train the forward and the backward machine translation systems; then, at step two, you use the forward model to translate the source-side monolingual dataset, and you use the backward machine translation system to translate the target-side monolingual dataset, okay? Then you treat these synthetic parallel sentences as real data and concatenate them with the parallel dataset, and now you retrain both the forward and the backward machine translation systems. As long as these two keep improving, you can go and do another iteration, whereby you again re-decode, re-translate, the source-side and target-side monolingual datasets, and then you go and retrain. And this is, as far as I know, the most effective way to leverage monolingual data in low-resource languages nowadays.
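And the combined recipe, back-translation plus self-training, as a sketch with the same hypothetical train() / translate() placeholders:

```python
def combined(parallel, mono_src, mono_tgt, rounds=2):
    """Sketch of combined back-translation and self-training, as in the talk."""
    flipped = [(y, x) for (x, y) in parallel]
    fwd, bwd = train(parallel), train(flipped)       # source->target, target->source
    for _ in range(rounds):
        bt = [(translate(bwd, y), y) for y in mono_tgt]  # noisy source, clean target
        st = [(x, translate(fwd, x)) for x in mono_src]  # clean source, noisy target
        fwd = train(parallel + bt + st)                  # treat synthetic pairs as real
        bwd = train(flipped + [(y, x) for (x, y) in bt + st])
        # Iterate again as long as both systems keep improving.
    return fwd, bwd
```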
Let me talk a little about how we can do multilingual training. In this case, we have parallel datasets for different language pairs: you have a parallel dataset for English-Nepali, one for English-Hindi, one for Hindi-English, or Nepali-Hindi, or any subset of these. And this is super simple. The way it works is that you have a single encoder and a single decoder, okay? You train by supervised learning, and the only change that needs to be made is that, at the input of the encoder, you concatenate a token that specifies the language into which you want to translate. So the encoder will learn to process multiple languages, and the decoder will learn to produce multiple languages as one, picking the language based on the token specified at the encoder input. Training is just minimizing the cross-entropy loss over all the parallel datasets you have, where you simply add this extra token to the source sentence specifying the target language you want to translate into (a sketch of this token trick follows below). The only other thing I wanted to add is that it often helps to pre-process the data with, I'm not sure if you've learned about byte pair encoding or SentencePiece, essentially ways to segment words into syllables or frequent character n-grams. If you concatenate the data across languages in order to learn these segmentations, then it's quite possible that for many languages a good fraction of the dictionary is shared, and this also helps the model do a good job of translating multiple languages at once. So my conclusion so far is that, even without domain effects, there are a lot of training paradigms, depending on the available data that you have. A priori, it's very hard to tell which method works best nowadays, because it really depends on how much data you have, how different the domains are, and which language pair you are working with: for instance, the domains may be very different, but if you have a lot of data, maybe it doesn't matter much. In practice, denoising autoencoding, back-translation, and multilingual training have proven to apply pretty widely, and nowadays the field is at the stage where we are trying to figure out the best way to combine them; right now there is a lot of what I would call craftsmanship in figuring out how to combine them best. So the open challenges here are: dealing with a diversity of domains; dealing with datasets of wildly different translation quality, some very noisy, some very clean; dealing with datasets of different sizes; and dealing with very different language pairs. And I would say that, in general, it may be counter-intuitive, but working on low-resource machine translation doesn't mean training small models on small data; it actually means training even bigger models on even more data, because you need to compensate for the lack of direct supervision. Very good. Before I go on, are there any questions? Yeah, I just had a quick question: in a few of the previous algorithms you described, is it necessary to retrain the model entirely, or is there some way to augment the model or fine-tune it on the generated data? So, what usually happens is that as you iterate, you can make the model bigger. When you train on the parallel dataset alone, that's usually not much data, so you need to train something small, otherwise you overfit too much; but once you add the monolingual data, this auxiliary dataset, the model can be much bigger than the original model. Now, it's not super obvious how to initialize a bigger model from a smaller model, [LAUGHTER] and that's why people usually initialize from random. At the next iterations, you can initialize from the model at the previous iteration, but what we usually find is that initializing at random works just as well. Gotcha, thank you. Thank you. Okay.
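As a recap of the language-token trick from the multilingual setup above, a one-liner sketch; the '<2xx>' naming is just an illustrative convention, not the talk's notation:

```python
def add_target_lang_token(source_tokens, target_lang):
    """Prepend a token telling the shared decoder which language to produce."""
    return [f"<2{target_lang}>"] + source_tokens

print(add_target_lang_token("the cat sat".split(), "ne"))  # ['<2ne>', 'the', 'cat', 'sat']
```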
Any other questions? When you say you usually make the model larger, do you mean that you add more layers with more parameters as the model keeps training? Usually you just make it bigger, yeah: more layers, more parameters. As for whether it is wider or deeper, I'm not entirely sure there is a definitive answer on that. Usually, making the encoder deeper is a good thing; [LAUGHTER] making the decoder deeper doesn't buy you much. So usually we play with the encoder, I would say, but there is not so much difference in practice. You can imagine just doubling the size of your hidden state; that would work. Okay? Okay. So let's see how these algorithms can be put together in some interesting case studies. I didn't speak about models, but about algorithms; you can turn these algorithms into models a little bit and talk about joint and marginal distributions, but in my view it's just simpler to think in terms of algorithms, because that's also the way we implement them. Actually, I realize I'm going slowly, so let's look at the case where you only have monolingual data and no parallel data: this is what we call unsupervised machine translation. Say you have an English and a French dataset; this is not a typical use case for unsupervised machine translation, but this is where it works really well, so let's focus on it for now. You take a sentence from the target monolingual dataset, you go through your encoder-decoder, and you produce an English translation. Obviously, you don't have the reference here. So what you could do is feed this output to a machine translation system that goes from English to French, so that you kind of reconstruct the original French sentence, and now you have another signal to backpropagate through your machine translation systems. And you can do the same going from English to French to English. This is very much what people have done in vision; they call it cycle consistency. You can see this as an autoencoder where the intermediate representation is a sentence in English. The problem is that, as it is, the model is not constrained to produce something that is a fluent English sentence. In the vision domain, people use adversarial training, but in NLP it's kind of tricky, because this is a discrete sequence. So, in order to make sure that this decoder produces fluent English sentences, you could imagine doing denoising autoencoding, all right? You could take an English sentence, noisify it, and pass it through your denoising autoencoder. Now this decoder, which is the same block you have here, is going to be forced to learn the statistics and regularities of the English language. The problem is that, if you look at this decoder, in the denoising autoencoding game it is operating on the output of an encoder that takes English as input, while here the encoder takes French as input. It could very well be the case that the representations produced by these two encoders are different, so this decoder may work very well in this setting, but not in this setting. In other words: how can we make sure that these red and blue blocks are interchangeable? How can we make sure that there is good modularity?
One way to do this is to use the trick we used for multilingual training, whereby we have a single encoder and a single decoder: the decoder is shared across French and English, the encoder is shared across English and French, and we specify the target language with an extra token at the input. In particular, if you learn common BPEs and you share parameters, then this process really works well, and you get a decoder that operates well whenever it is fed a hidden state coming from an encoder operating on English or French. So again, the key ingredients are iterative back-translation, denoising autoencoding, and multilingual training. For unsupervised machine translation, we do back-translation in an online manner, whereby for a given mini-batch we back-translate with the current model; we don't do it with a stale version of the model, and while you could do that, it works less well. When you do this on English-French, you find you can actually get pretty good performance: a BLEU of 30 usually gives you pretty fluent translations that are also adequate. And if you compare that to what you get with a supervised baseline trained on a parallel dataset, you find that training on 10 million monolingual sentences in English and 10 million in French gives you the same translation accuracy as training a supervised baseline on 100,000 parallel sentences; this blue curve is the neural version, and this red curve the phrase-based version. In other words, each parallel sentence pair is equivalent to about 100 monolingual sentences, equivalent in the sense that they give you a machine translation system of similar accuracy. Now, the more the domains differ, and the more the languages differ from each other, the worse it gets, and that's why, when you do low-resource machine translation, of which unsupervised machine translation is the extreme case, you need to learn from lots of data in order to compensate for the lack of direct supervision. Let me give you an example on FLoRes, where, as we have seen, there was no in-domain parallel data; there was some in-domain monolingual data, but not very much, and there was quite a bit of out-of-domain parallel data (you remember, the sentences from the Bible and Ubuntu), plus quite a bit of out-of-domain monolingual data. So this is the supervised baseline. Unsupervised machine translation here didn't work at all, because, very much as was mentioned earlier, the Wikipedia domains are not well aligned. But if you do iterative back-translation, you do quite a bit better than the supervised baseline, which is quite good. And now, if you add the English-Hindi parallel data, you do quite a bit better still, and now unsupervised machine translation works as well: it's unsupervised for English-Nepali, but you do have supervision for English-Hindi. So the combination of back-translation and multilingual training is, here, the winning combination, and this is something we see consistently in general. Okay, so I'm going to skip the results on English-Burmese. I actually had a nice demo, but I'll show it to you later if there is time.
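A sketch of one training step of the unsupervised setup described above: shared encoder/decoder, denoising autoencoding in each language, plus online back-translation within the mini-batch. Here model.loss, model.translate, and add_noise are hypothetical placeholders, not a real API:

```python
def unsupervised_step(model, batch_en, batch_fr):
    """Combined loss for one mini-batch of monolingual English and French."""
    total = 0.0
    # 1) Denoising autoencoding keeps the shared decoder fluent in each language.
    for lang, batch in (("en", batch_en), ("fr", batch_fr)):
        total += model.loss(src=[add_noise(s) for s in batch], tgt=batch, tgt_lang=lang)
    # 2) Online back-translation with the current model (no stale copy):
    #    reconstruct each sentence from its machine translation into the other language.
    fr_hat = [model.translate(s, tgt_lang="fr") for s in batch_en]
    en_hat = [model.translate(s, tgt_lang="en") for s in batch_fr]
    total += model.loss(src=fr_hat, tgt=batch_en, tgt_lang="en")
    total += model.loss(src=en_hat, tgt=batch_fr, tgt_lang="fr")
    return total  # backpropagate through this combined objective
```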
Um, and so, as I said, we have quite a few good components which we can combine pretty easily. Right now the research is about how to best combine them, how to best weigh datasets, how to best weigh the examples, in order to automate the current cross-validation-based process, I would say. And the other message here is that low-resource machine translation is a big data problem. It- it requires big compute; it's a pretty big engineering feat, uh, in order to, uh, compensate for the lack of parallel data. Are there any questions on this? I- I was just wondering, when you mentioned the parallel to, um, vision in cycle consistency, um, you mentioned that we can't do adversarial training- Yeah. -and I was just wondering if you could flesh that out and why we couldn't just use, say, like an LSTM that performs adversarial training better. Yeah. Yes. So there are actually a bunch of papers trying to do, um, adversarial training or, uh, GAN-style training for, uh, text generation. I must say that it's a pretty active research area. I haven't seen a very compelling demonstration that these methods work very well, although we've tried. And it's a little difficult to backpropagate. So when this produces a sentence, you need to produce a, you know, a sentence, and that's discrete. And so you could backpropagate using REINFORCE-kind of methods. You could do a lot of these things, but essentially- it's just a little hard to make it work and it's very finicky. So it may work on simple datasets, but at scale, it's very hard to make work. So another thing- another consideration is that anything that you do has to work at scale. Because again, the value- the amount of information that you get from a monolingual sentence is not very much. And now if it requires a lot of compute, or if your gradients have a lot of noise, uh, like when you train with REINFORCE, then, uh, it's not going to work. But it's possible that people may come up with ways to make it work. I don't think this is true at present, but it could be in the future. So let me spend five minutes on the analysis, um, and then you will have the slides so you can, uh, go over the remaining details. So here, uh, we- so the- the starting point is to say, well, what if I want to simulate low-resource machine translation with a high-resource language pair like French to English? Let's say you take EuroParl data - you have, let's say, 20,000 parallel sentences and 100,000 monolingual target sentences - and you apply backtranslation: you get a very nice improvement. Now, if you come here to Facebook, [LAUGHTER] and- and you try this on Facebook data, you find that the improvement is actually very, very minimal. And that relates to the discussion that we had at the very beginning, that what people talk about in different parts of the world is very different. And so now you- it's like you need to align two, uh, point clouds, but the distributions in these two point clouds are very, very different from each other, and so it's very difficult to align them. And so here I was making the example that even for English-speaking countries, if you look at topics on- on sports, you have that in America people may talk more about football and baseball, while in the UK, more about cricket and soccer, right? And so for the same topic, you have a different distribution of words, but you also have a different distribution of topics. And so this is what we call the source-target domain mismatch. So you may have several kinds of domain mismatch.
Typically, you have a mismatch between the training distribution and the test distribution. Here I'm talking also about the mismatch between the source domain - the source language, the source domain - and the target domain. Okay? And so there is a hypothesis that this may make backtranslation less effective, because even if you were to perfectly translate target-side monolingual data, once you translate it, it's going to be out of domain, uh, with respect to the data that you really want to translate, which originates in the source domain. And so we had a very, uh, nice controlled setting to study this problem. Um, where, uh, we create a synthetic dataset where the source domain comes from EuroParl data, and the target domain comes from OpenSubtitles, which are movie captions. And now, by creating the target domain as a mixture of the two, you can precisely control the amount of, uh, in-domainness between the source and the target domain. And by varying Alpha, you can vary that. And so the major result is this figure, where Alpha measures how much the target domain is, uh, similar to the source domain. So if Alpha is equal to 1, they are all, uh, in the same domain. If Alpha is equal to 0, they are very different: one is EuroParl and the other is OpenSubtitles. And so it turns out that in this extreme regime, actually self-training, which is this red line, works better than backtranslation. But as you make the domains more and more similar, backtranslation is much better than self-training. And both of them are much better than, um, just using the parallel data. Um, so I'm going to skip all of this. You can look at the paper, uh, and the slides. I want to conclude that there are other things that I didn't talk about, like filtering. This is one of the most exciting things nowadays, and the idea is to, uh, essentially learn a joint embedding space for sentences by simply training a multilingual system on lots of publicly available data, and then you use this in order to do nearest-neighbor retrieval of a sentence for what the corresponding translation would be in other languages. And they found that- they collected a large data set and they were able to beat the performance of state-of-the-art machine translation systems on high-resource languages like English-German, English-Russian. And the idea is that by using much more data, although noisy, you can do better than using a curated, high-quality dataset, and this is something that we see over and over. And again, the idea here is that we need to figure out how to best combine backtranslation, this filtering, multilingual training, and pretraining in order to, uh, uh, get, uh, the best combination ever for solving, or for improving, low-resource machine translation. And so I just want to- maybe I should conclude here, uh, by thanking my collaborators and by, uh, telling you that, um, uh, if you have any questions about this lecture, you can always e-mail me, drop me a line, I'd be happy to follow up. And also, in my lab, we have a lot of opportunities, from internships to full-time positions as a research scientist or research engineer. So if you're interested or are curious, just also, uh, drop me an e-mail. Okay. Thank you. Thanks a lot, Marc'Aurelio. Um, so- so maybe there's still a few people that might have questions. And we are happy to stay a few more minutes for questions. Happy to answer questions. Yes. Uh, I'd love to learn more about the models that you used.
Uh, actually, should we- should we go back to the model that you first talked about- spoke about, right, before back-translation? Uh, in order to understand, uh, you have a pipeline from English to English, right? In this one, uh, you want something like, uh, the data augmentation techniques like in vision, such as dropping a word or switching, uh, switching words, to be able to make an augmented dataset, is that right? That's right. So the analogy that I made is, for back-translation - yes, all these methods - essentially, you don't have x and y- golden x and y pairs, and so for self-training, what you do is you, uh, fantasize the target. For back-translation, you fantasize the input. And so you can see all these methods as a way- in particular, back-translation is very similar to the data augmentation that people do in vision, in the sense that here the transformation is not, uh, rule-based, it's produced by a backward machine translation system, but it serves the same objective of regularizing by adding a lot of noisy, uh, uh, labeled data. So if you go back to the previous slide, when you say you fantasize the- the target. So in this case, you have, uh, one where you- where you predict the gold target and one where you- where you change the input and then predict the target, is that how it is? Yeah. So in- for self-training, the way that it works is that you take the clean input, you pass it through your machine translation system at the previous iteration. And, uh, you decode with beam or with other methods, and you get a prediction for what the label should be. And that- that's now your reference. But the way that you train your machine translation system is by noisifying the input. So you add noise to your input - the noise is you drop words, you swap words - and then you try to predict the, uh, target that you fantasized. And the idea is that the- that the two targets should be the same? Yeah. So the prediction and- and these targets, when you train with cross-entropy loss, you- you try to tie them together as much as possible. Okay. Thanks so much. I have a followup question, later. Yeah. This is a very- this is one of the first semi-supervised learning methods that, uh, you find in the machine learning community. There are a lot of variants of this where they have, perhaps, er, uh, uh, a committee of experts that produces the, uh- the label. Um, there are a lot of variants of this and, um, it's something that makes a lot of sense, particularly for asymmetric tasks. Like if you do, uh, image classification- if you do text classification, if you do summarization, then back-translation is not really applicable, because, uh, you know, if you go from a label category- from a categorical, uh, uh, input to a whole sentence, that's a very difficult task, right? So back-translation works really well for symmetric tasks like, uh, machine translation. But for, uh, things that are, uh- for many-to-one mappings, self-training definitely- definitely works better. Uh, self-training works well also in machine translation when there is a lot of domain mismatch between the source and the target, as we were seeing. Yeah. So unfortunately, with these algorithms- it's hard to say in general what works best, because it really depends on the application, it really depends on the kind of data that you have. Does anyone else have a question they'd like to ask? Well, it seems like we're maybe not getting another immediate question. And I guess we have gone through the end of the time that we're, uh, meant to do.
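[A minimal sketch of the self-training recipe just described - decode a pseudo-target with the current model, then train on (noisified input, pseudo-target) with cross-entropy. `decode_beam`, `model.cross_entropy_loss`, and `noisify` (word drop/swap, as sketched earlier) are hypothetical placeholders, not a real API.]

```python
def self_training_step(model, src_batch, optimizer):
    # Fantasize the target: decode the *clean* input with the current model.
    pseudo_tgt = decode_beam(model, src_batch)   # now treated as the reference
    # Train on (noisified input, fantasized target) with cross-entropy,
    # tying the prediction and the pseudo-target together.
    noisy_src = [noisify(s) for s in src_batch]
    loss = model.cross_entropy_loss(noisy_src, pseudo_tgt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```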
So maybe we should call it and bring it to a close, but thank you so much, Marc'Aurelio. I mean, I hope everyone really enjoyed that. And, you know, speaking as someone who did work in machine translation for a decade, though I haven't so much for the last few years, I mean, you know, it actually still seems to me just amazing how successfully you can build things, um, building with monolingual data and using ideas like back-translation. I mean, it's just actually incredible that that's providing such competitive, um, machine translation systems now. And, you know, obviously, this is something that isn't just of academic interest, as you might have realized if you've thought about it, right? If you're at a company like Facebook, right? Being able- actually being able to translate well on domains that are very far from news data or the Bible [LAUGHTER] and in languages of smaller communities of speakers, it's just actually super-duper important, um, to people being happy users of, and members of, communities. Yeah. And- and I just want to add, uh, that these methods are pretty general, so we apply them to summarization, Q&A, uh, style transfer. So, you know, it's really beautiful that- it's a set of tools that you can use in many places, and it's all about, you know, in a way, um, aligning domains with little, uh, supervision or correspondences, right? So, yeah. Okay. Thank you very much. Thank you. Thank you. Bye. Bye-bye. Bye.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_3_Neural_Networks.txt
Okay. Hi everyone. Okay. Let's get started. Um- great to see you all here. Welcome back for um- week two of CS224N. Um- so- so this is a little preview of what's coming up in the class for this week and next week. Um- you know, this week is perhaps the worst week of this class. [LAUGHTER]. Um- so in week two of the class our hope is to actually kind of go through some of the nitty gritty of neural networks and how they're trained, and how we can learn good neural networks by backpropagation, which means in particular we're gonna be sort of talking about the training algorithms and doing calculus to work out gradients from first principles. Um, so we are looking a bi- a little bit at, um- um, word window classification and named entity recognition. So there's a teeny bit of natural language processing in there, but basically, sort of, week two is sort of, um- the math of deep learning and neural network models and sort of really neural network fundamentals. Um, but the hope is that that will give you kind of a good understanding of how these things really work, and will give you all the information you need to do, um- the upcoming homework, and so then, in week three it kind of flips. So, then week three is going to be mainly about natural language processing, so we're then gonna talk about how to put syntactic structures over sentences, um- for building dependency parses of sentences, which is then actually what's used in homework three. So we're chugging along rapidly. And then we'll talk about this idea of the probability of a sentence, which leads into neural language models. Um- so on the homeworks. Homework one was due approximately two minutes ago, um- so I hope everyone has submitted their homework one. I mean, as, um- one just sort of admonition, um- in general: so you know, homework one we hope you found was a good warm-up and not too, too hard, and so it's really best to get homework one in quickly rather than to burn lots of your late days doing homework one. Um, and now, right now, out on the website, um, there's homework two. Um so, we are chugging along. So homework two kind of corresponds to this week's lectures. So in the first part of that we are expecting you to grind through some math problems of working out gradient derivations. Um- and then the second part of that is then implementing your own version of word2vec making use of NumPy. And so this time you're sort of writing a Python program - it's no longer an IPython notebook. Um, I encourage you to get in early, um- and look at the materials, um- on the web. I mean, in particular, corresponding to today's lecture there are, um- some quite good tutorial materials that are available on the website, and so I also encourage you to look at those. [NOISE]. Um- more generally, just to make a couple more comments on things. I mean, I guess this is true of a lot of classes at Stanford, but, you know, when we get the course reviews for this class we always get the full spectrum, from people who say the class is terrible and it's way too much work, um- to the people who say it's a really great class, one of their favorite classes at Stanford, obviously the instructors care, et cetera. And I mean, partly this reflects that we get this very, um- wide range of people coming to take this class: on the one hand, on the right-hand margin perhaps we have the physics PhDs, and on the left-hand margin we have some frosh who think this will be fun to do anyway. Um, we welcome e- we welcome everybody, um- but in principle this is, uh, a graduate-level class.
You know, that doesn't mean we want to fail people out, we'd like everyone to succeed, but it's also like a graduate-level class. Um- we'd like you to- you know, take some initiative in your success. Meaning, if there are things that you need to know to do the assignments and you don't know them, um- then you should be taking some initiative to find some tutorials, come to office hours and talk to people, and get any help you need and sort of fill in any holes in your knowledge. Okay. So here's the plan for today. Um- so that was the course information update. So you know, this is sort of, in some sense, you know, a machine learning and neural nets intro - just to try and make sure everyone is up to speed on all of this stuff. So I'll talk a little bit about classification, um, introduce neural networks, um, take a little detour into named entity recognition, then sort of show a model of doing, um, window- word window classification, and then in the end part, we sort of then dive deeper into what kind of tools we need to learn neural networks, and so today, um, we're gonna go through, um, somewhere between a review and a primer of matrix calculus, and then that will lead into next time's lecture, which talks more about backpropagation and computation graphs. So, yeah. So this material, especially the part at the end - you know, for some people it'll seem really babyish if it's the kind of stuff you do every week; um, for other people it, um- might seem impossibly difficult; but hopefully for a large percentage of you in the middle this will be kind of a useful review of doing this kind of matrix calculus and the kind of things that we hope that you can do on homework two. Um, okay. So, um, yeah. So sorry if I'm boring some people. If you sat through 229 last quarter you saw, um, what a classifier was like, and hopefully this will seem familiar, but I'm just sort of hoping to try and have everyone in week two sort of up to speed and on roughly the same page. So here's our classification setup. So we assume we have a training data set where we have these, um, vectors xi, um, our data points, and then for each one of them we have a class. So the input might be words or sentences, documents or something - they're d-dimensional vectors - um, and the yi are the labels or classes that we want to classify to, and we've got a set of C classes that we're trying to predict. And so those might be something like the topic of the document, the sentiment - positive or negative, um - of a document, or, later, we'll look a bit more at named entities. Okay. So if we have that, um- for this, sort of, the intuition is we've got this vector space - which we again have as a 2D picture - and we have points in that vector space which correspond to our x items, and what we'd want to do is look at the ones in our training sample and see which ones are green and red for our two classes here, and then we want to sort of learn a line that could divide between the green and the red ones as well as possible, and that learned line is our classifier. So in traditional machine learning or statistics, we have the sort of xi vectors, our data items, which are fixed, but we're going to then multiply those xi by some estimated weight vector, and that estimated weight vector will then go into a classification decision. And the classifier that I'm showing here is a softmax classifier, which is almost identical - but not quite - to a logistic regression classifier, which you should've seen in CS 109 or a stats class or something like that, which is giving a probability of different classes.
Okay. And in particular, if you've got a softmax classifier or a logistic- logistic regression classifier, these are what are called linear classifiers. So the decision boundary between two classes here is a line in some suitably high-dimensional space. So it's a plane or a hyperplane once you've got a bigger vector. Okay. So here's our softmax classifier. Um, and there are sort of two parts to that. So in the- in the weight matrix W we have a row corresponding to each class, and then for that row we're sort of dot-producting it with our data point vector xi, and that's giving us a kind of score for how likely it is that the example belongs to that class, and then we're running that through a softmax function, and just as we saw in week one, the softmax takes a bunch of numbers and turns them into a probability distribution. Does that make sense to people? People remember that from last week? Good so far? Okay. Um, I'm not gonna go through this in detail, but I mean, ah- essentially this is what logistic regression does as well. Um, the difference is that here in this setup we have a weight vector, um, for each class, whereas what the statisticians doing logistic regression say is, "wait, that gives us one more weight vector than we really need - for C classes, we can get away with C minus one weight vectors." So in particular, if you're doing binary logistic regression you only need one weight vector, whereas in this softmax regression formulation you've actually got two weight vectors, one for each class. Um, so there's that sort of little difference there, which we could get into, but it's basically the same. Whether we're doing softmax or logistic regression - it doesn't matter. Um, so when we're training, what we want to do is we want to be able to predict, um, the correct class. And so the way we're gonna do that is we're gonna want to train our model so it gives as high a probability as possible to the correct class, and therefore gives as low a probability as possible, um, to, um, the wrong classes. And so our criterion for doing that is we're going to create this negative log probability, um, of our assignments, and then we're gonna want to minimize the negative log probability, which corresponds to maximizing the log probability, which corresponds to maximizing, um, the probability. Um. And, but, um, sort of, pretty soon now, we're gonna start doing more stuff with deep learning frameworks, in particular PyTorch, and you can discover in that, that there's actually a thing called NLL loss, which stands for negative log-likelihood loss. Basically, no one uses it, because the more convenient thing to use is what's called the cross-entropy loss, and so you'll hear everywhere that we're training with cross-entropy loss. So, I just wanted to briefly mention that and explain what's going on there. Um, so the concept of cross entropy comes from baby information theory - which is about the amount of information theory I know. Um, so, we're assuming that there's some true probability distribution p, and our model - we've built some probability distribution q. That's what we've built with our softmax regression, and we want to have a measure of whether our estimated probability distribution is a good one. And the way we do it in cross entropy is, we go through the classes and we say, "what's the probability of the class according to the true model?"
Using that weighting, we then work out the log of, um, the probability according to our estimated model, and we sum those up and negate it, and that is our cross-entropy measure. Okay. Um, but- so this in general gives you a measure of sort of information, um, between distributions. But in our particular case, remember that for each example, we're sort of assuming that this is a piece of labeled training data, so we are saying, for that example, the right answer is class seven. So therefore, our true distribution, our p, is - for this example - class seven with probability one, and it's class, um, anything else with probability zero. So if you think about then what happens with this formula: you've got this summation over all the classes; the p of c is gonna be either one or zero, and it's gonna be one only for the true class here, and so what you're left with is, this is going to equal minus the log of q of c, um, for the true class, which is sort of what we were computing in the previous slide. Okay. So that's- um, yeah. So that's basically where you get with cross-entropy loss. Um, but one other concept to mention. So when you have a full data-set of a whole bunch of examples, the cross-entropy loss is then taking the per-example average. So, I guess it's what information theory people sometimes call the cross-entropy rate. So what's additionally factored in there, if you are training it on n examples, is that one-over-n factor that's coming in there. Okay. Um, okay. Um, so that's cross-entropy loss. Is that okay? Yeah. [NOISE] What if there's some- there's some mixture of the actual labels in the ground truth? Sure. Good question. Right. So, the simplest case is that your gold data - someone has hand-labeled it and, um, they've labeled one class as one and the rest as zero. Um, they are- you can think of cases where that isn't the case. I mean, one case is you could believe that human beings sometimes don't know the right answer, so if human beings said, "I'm not sure whether this should be class three or four," you could imagine that we could make training data where we put probability half on both of them, um, and that wouldn't be a crazy thing to do, and so then you'd have a true cross-entropy loss using more of a distribution. Um, the case where it's much more commonly used in actual practice is, there are many circumstances in which people wanna do semi-supervised learning. So, I guess this is a topic that both my group and Chris Re's group have worked on quite a lot, where we don't actually have fully labeled data, but we've got some means of guessing what the labels of the data are, and if we try and guess labels of data, well, then quite often we'll say, "Here's this data item: it's two-thirds chance it's this label, but it could be these other four labels," and we'd use a probability distribution, and yeah, then it's the more general cross-entropy loss. Okay? Um, right. So, um, that's cross-entropy loss - pretty good. Um, this bottom bit is a little bit different, um, which is to say, "Well, now, this is for the full data-set." The other thing to notice, um, when we have a full data- we can have a full data-set of x's, um, and then we have a full set of weights. Um, whereas here we're working with a row vector for the weights for one class, but we're gonna work it out for all classes. So, we can sort of simplify what we're writing here and we can start using matrix notation and just work directly in terms of the matrix W. Okay.
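[As a concrete companion to the last couple of slides, here is a minimal NumPy sketch - not course-provided code - of the softmax classifier loss for one example, computing minus the log of q for the true class. The shapes and names are illustrative.]

```python
import numpy as np

def softmax_cross_entropy(W, x, y):
    """Loss for one example. W: (C, d) weights, one row per class;
    x: (d,) input vector; y: index of the true class."""
    scores = W @ x                       # one score per class (dot products)
    scores = scores - scores.max()       # shift for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax
    return -np.log(probs[y])             # cross entropy = -log q(true class)

W = np.random.randn(3, 5)
x = np.random.randn(5)
print(softmax_cross_entropy(W, x, y=1))
```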
So for traditional ML optimization, our parameters are these sets of weights, um, for the different classes. So for each of the classes, we have a d-dimensional, um, row vector of weights, because we're gonna sort of dot-product it with our d-dimensional input vector. So we have C times d items in our W matrix, and those are the parameters of our model. So if we want to learn that model using the ideas of gradient descent, stochastic gradient descent, we're gonna do sort of what we started to talk about last time. We have this set of parameters. We work out, um, the gradient - the partial derivatives of the loss with respect to all of these parameters - and we use that to get a gradient update on our loss function, and we move around the W's, and moving around the W's corresponds to sort of moving this line that separates between the classes, and we fiddle that around so as to minimize our loss, which corresponds to choosing a line that best separates between the items of the classes in some sense. Okay. So, that's a basic classifier. So the first question is, well, how are things gonna be different with a neural network classifier? Um, so the central observation is that sort of most of the classic classifiers that people used a lot of the time - so that includes things like Naive Bayes models, um, basic support vector machines, softmax or logistic regressions - they're sort of fairly simple classifiers. In particular, those are all linear classifiers, which are going to classify by drawing a line, or in a higher-dimensional space by drawing some kind of plane, that separates examples. Having a simple classifier like that can be useful in certain circumstances. I mean, that gives you what in machine learning is a high-bias classifier - there's lots of talk of this in CS229 - but if you have a data-set, um, that's like this, you can't do a very good job at classifying all the points correctly if you have a high-bias classifier, because you're only gonna draw a line. So you'd like to have a more powerful classifier. Essentially, what's been powering a lot of the use of deep learning is that in a lot of cases when you have natural signals - so those are things like, um, speech, language, images, and things like that - you have a ton of data, so you could learn a quite sophisticated classifier. Um, but representing the classes in terms of the input data is sort of very complex. You could never do it by just drawing a line between the two classes. So, you'd like to use some more complicated kind of classifier. So neural networks - the multi-layer neural networks that we're gonna be starting to get into now - precisely what they do is provide you a way to learn very complex, you know, almost limitlessly complex classifiers. So that if you look at the decisions that they're making in terms of the original space, they can be learning cases like this. Um, I put this- I put the, um, pointer on a couple of the slides here. Um, this- this is a visualization that was done by Andrej Karpathy. He was a PhD student here until a couple of years ago. So this is a little JavaScript, um, app that you can find off his website, and it's actually a lot of fun to play with to see what kind of, um, decision boundaries you can get a neural net to come up with. Okay.
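[To make the update rule concrete, here is a hedged NumPy sketch of one stochastic gradient descent step for the softmax classifier above. It uses the standard fact that the gradient of the cross-entropy loss with respect to the scores is the probability vector minus the one-hot true label; the function name and learning rate are illustrative.]

```python
import numpy as np

def sgd_step(W, x, y, lr=0.1):
    """One stochastic gradient descent step on the softmax weights W."""
    scores = W @ x
    scores = scores - scores.max()
    probs = np.exp(scores) / np.exp(scores).sum()
    probs[y] -= 1.0                # d(loss)/d(scores) = probs - one_hot(y)
    grad_W = np.outer(probs, x)    # (C, d) gradient of the loss w.r.t. W
    return W - lr * grad_W         # step a little against the gradient
```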
Um, so for getting- for getting more advanced classification out of, um, a neural net used for natural language, there are sort of two things that you can do, that I want to talk about, which are in some sense the same thing when it comes down to it. But I'll sort of mention them separately at the beginning: one of them is that we have these word vectors, and then the second one is that we're gonna build deeper multi-layer networks. Okay. So, the first crucial difference, um, which we already started to see, um, with what we were doing last week, is rather than sort of having a word being "this is the word house", we instead say house is a vector of real numbers, and what we can do is change the vector that corresponds to house in such a way that we can build better classifiers, which means that we are gonna be sort of moving house's representation around the space to capture things that we're interested in, like word similarity, analogies, and things like that. So this is actually, you know, kind of a weird idea compared to conventional stats or ML. So rather than saying we just have the parameters W, we also say that all of these word representations are also parameters of our model. So, we're actually going to change the representations of words to allow our classifiers to do better. So, we're simultaneously changing the weights and we're changing the representation of words, and we're optimizing both of them at once to try and make our model as, um, good as possible. So, this is the sense in which people often talk about the deep learning models - that we're doing representation learning. I sort of said there are two ways - I was going to mention two things. One is this sort of, um, word vector representation learning, and then the second one is that we're going to start looking at deeper multi-layer neural networks. Um, sort of hidden over here on the slide is the observation that really you can think of word vector embedding as just having a model with one more neural network layer. So, if you imagine that each word was a one-hot vector, um, over the different word types in your model - so you had a, uh, you know, 150,000-dimensional vector with the one-hot encoding of different words - um, then you could say you have a, um, matrix L, which is sort of your lexicon matrix, and you will pass your one-hot vector for a word through a layer of neural net which multiplies the one-hot vector: L times the one-hot vector. And since this was a one-hot vector, what that will have the effect of doing is taking out a column of L. So, really, we've got an extra matrix layer, um, in our neural net, and we're learning the parameters of that matrix in the same way as we're learning, um, a deep neural network for other purposes. So, mathematically that completely makes sense, and that's sort of a sensible way to think about, um, what you're doing, um, with word embeddings in neural networks. Um, implementation-wise, this makes no sense at all and no one does this, because it just doesn't make sense to do a matrix multiply when the result of the matrix multiply will be - okay, this is word ID 17, um - sort of, then constructing a one-hot vector of length 150,000 with a one in position 17 and then doing a matrix multiply makes no sense. You just take out, um, column - or the row, as we've discussed - 17 of your matrix, and that's what everyone actually does. Okay. Here's my one obligatory picture of neurons, um, for the class.
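[A one-liner check of the point just made: multiplying the embedding matrix by a one-hot vector is exactly a row lookup, which is why everyone just indexes. The vocabulary size matches the lecture's example; the tiny embedding dimension is an illustrative assumption.]

```python
import numpy as np

V, d = 150000, 4                 # vocabulary size, (tiny) embedding dimension
L = np.random.randn(V, d)        # lexicon matrix: one row per word type
one_hot = np.zeros(V)
one_hot[17] = 1.0                # "this is word ID 17"
print(np.allclose(one_hot @ L, L[17]))   # True: the multiply just selects row 17
```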
So, don't miss it - I'm not going to show it again all class. Okay. So, the origins [LAUGHTER] of neural networks, um, were in some sense to try and construct an artificial neuron that seemed to in some sense kind of capture the kind of computations, um, that go on in human brains. It's a very loose analogy for what was produced, but, you know, our model here is - this is a teeny bit of our human brains. So, here are neurons; this is a neuron cell here. And so, what does a neuron consist of? Um, so, up the back, it's got these dendrites - lots of dendrites. Then it's got a cell body, and if there's stuff coming in on the dendrites, um, the cell body will become active and then it'll start sending spikes down this long thing, which is called the axon. So, then these axons lead to the dendrites of a different cell or lots of different cells, right - this one, um, I'm not sure it's shown, but some of these are kind of going to different cells. Um, and so, you then have these sort of, um, terminal buttons on the axon which are kind of close to the dendrites but have a little gap between them, and some min-, miracles of biochemistry happen there. So, that's the synapse, of course, across which you'll then have sort of activation flowing, which goes into the next neuron. So, that was the starting-off, um, model that people wanted to try and simulate in computation. So, people came up with this model of an artificial neuron. So, we have things coming in from other neurons at some level of activation. So, that's a number: x0, x1, x2. Um, then synapses vary, depending on how excitable they are, as to how easily they'll let signal cross the synapse. So, that's being modeled by multiplying them by a weight: w0, w1, w2. Then the cell body, sort of crudely, is sort of summing the amount of excitation it's getting from the different dendrites, um, and then it can have its own bias as to how likely it is to fire - that's the b. Um, so, we get that, and then it has some overall kind of threshold or propensity for firing. So, we sort of stick it through an activation function, um, which will sort of determine a firing rate, and that will be, um, the signal that's going out on the output axon. So, that was sort of the starting point of that, but, you know, really, um, for what we've ended up computing, we just have a little bit of baby math here, which actually, um, looks very familiar to the kind of baby math you see in linear algebra and statistics, and so it's really no different. So, in particular, um, a neuron can very easily be a binary logistic regression unit. Um, so that, this is sort of - for logistic regression, you're taking your input x, you multiply it by a weight vector, you're adding, um, your, um, bias term, and then you're putting it through, um, a non-linearity, like the logistic function. Um, and then, so you're calculating a logistic regression, um, inside this sort of neuron model. Um, and so this is the, this is the difference between the softmax and logistic regression that I was saying: the softmax for two classes has two sets of parameters; this sort of just has one set of parameters, and you're modeling the two classes by giving the probability of one class from 0 to 1, depending on whether the input to the logistic regression is highly negative or highly positive. Okay. So, really, we can just say these artificial neurons are sort of like binary logistic regression units, or we can make variants of binary logistic regression units by using some different f function.
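[A tiny sketch of the artificial neuron just described - weighted inputs, a bias at the "cell body", and a logistic activation as the "firing rate". Names and numbers are illustrative.]

```python
import numpy as np

def neuron(x, w, b):
    """An artificial neuron = a binary logistic regression unit:
    weighted sum of the inputs, plus a bias, through a logistic curve."""
    z = np.dot(w, x) + b               # total excitation at the cell body
    return 1.0 / (1.0 + np.exp(-z))    # "firing rate" squashed into (0, 1)

print(neuron(np.array([1.0, 2.0]), np.array([0.5, -0.3]), b=0.1))
```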
And we'll come back to that again pretty soon. Okay. Um, well, so that gives us one neuron. So, one neuron is a logistic regression unit for current purposes. So, crucially, what we're wanting to do with neural networks is say, well, why only run one logistic regression? Why don't we, um, run a whole bunch of logistic regressions at the same time? So, you know, here are our inputs and here's our little logistic regression unit, um, but we could run three logistic regressions at the same time, or we can run any number of them. Um, well, that's good, but sort of for conventional training of a statistical model, we'd have to determine, for those orange outputs of the logistic regressions, you know, what we're training each of them to try and capture. We'd have to have data to predict what they're going to try and capture. And so, the secret of sort of then building bigger neural networks is to say, we don't actually want to decide ahead of time what those little orange logistic regressions are trying to capture. We want the neural network to self-organize, so that those orange logistic regression, um, units learn something useful. And well, what is something useful? Well, our idea is to say, we do actually have some task that we want to do. So maybe we want to sort of decide whether a movie review is positive or negative - something like sentiment analysis or something like that. There is something we want to do at the end of the day. Um, and we're gonna have, uh, a logistic regression classifier there telling us positive or negative. Um, but the inputs to that aren't going to directly be something like words in the document. They're going to be this intermediate layer of logistic regression units, and we're gonna train this whole thing to minimize our cross-entropy loss. Essentially, what we're going to want to have happen - and the backpropagation algorithm will do this for us - is to say: you things in the middle, it's your job to find some useful way to calculate values from the underlying data such that it'll help our final classifier make a good decision. I mean, in particular, you know, back to this picture, you know, the final classifier - it's just a linear classifier, a softmax or logistic regression. It's gonna have a line like this. But the intermediate classifiers, they are like a word embedding; they can kind of sort of re-represent the space and shift things around. So, they can learn to shift things around in such a way that you're learning a highly non-linear function of the original input space. Okay. Um, and so at that point, it's simply a matter of saying, well, why stop there? Maybe it gets even better if we put in more layers. And this sort of gets us into the area of deep learning, and sort of precisely, um - there've sort of been three comings of neural networks. So there was the first work in the 50s, which is essentially when people had a model of a single neuron like this and only gradually worked out how it related to more conventional statistics. Then there was, um, the second version of neural networks, which we saw in the 80s and early 90s, um, where people, um, built neural networks like this that had this one hidden layer where a representation could be learned in the middle. But at that time, overall, people weren't able to build deeper networks and get them to do anything useful.
So you sort of had these neural networks with one hidden layer, and so precisely with the research that started in- into deep learning, the motivating question is: um, we believe we'll be able to do even more sophisticated, um, classification for more complex tasks - things like speech recognition and image recognition - if we could have a deeper network, which will be able to more effectively learn more sophisticated functions of the input, which will allow us to do things like recognize the sounds of a language. How could we possibly train such a, um, network so it'll work effectively? And that's the kind of thing, um, we'll go on to, um, starting in this lecture and more so in the next lecture. But before we get there, um, just to underline it again. So once we have something like this as our, um, layer of a neural network: we have a vector of inputs, we have a vector of outputs, and everything is connected, so that we've got these sort of weights along every one of these black lines. And so we can say a1 is - you're taking weights times each component of x and adding a bias term, um, which gives you z, which is sort of this part, and then you're running it through our non-linearity, and that will give us an output. And we're gonna do that for each of a1, a2, and a3. Um, so again, we can kind of regard a as a vector, and we can kind of collapse it into this matrix notation for working out the effects of layers. The fully connected layers are effectively matrices of weights, um, and commonly we write them like this, where we have a bias term as a vector of bias terms. There's sort of a choice there. You can either have an always-on input, and then the bias terms become part of the weights of a slightly bigger matrix with one extra, uh, one extra either column or row - one extra, a- row, right? Or you can just sort of have them separately as those b's. Okay. Um, and then the final note here - right? So once we've calculated this part, we always put things through a non-linearity, which is referred to as the activation function, and so something like the logistic transform I showed earlier is an activation function. And this is written as a sort of vector-input, um, activation function giving a vector output, and what this always means is that we apply this function element-wise. So we're applying the logistic function, which is naturally a one-input, one-output function, like the little graph I showed before. So when we apply that to a vector, we apply it to each element of the vector, element-wise. Okay. We will come back very soon to sort of saying more about non-linearities and what non-linearities people actually use. Um, but, you know, something you might be wondering is, well, why do we always have these non-linearities and say there has to be an f function there? Why don't we just, um, calculate z equals Wx plus b in one layer and then go on to another layer that also does z2 equals W2 z1 plus b2, and keep on going with layers like that? And there's a very precise reason for that, which is: if you want to have a neural network learn anything interesting, you have to stick in some function f which is a non-linear function, such as the logistic curve I showed before. And the reason for that is that if you're sort of doing linear transforms like Wx plus b, and then W2 z1 plus b2, W3 z2 plus b3, you're doing a sequence of linear transforms. Well, multiple linear transforms just compose to become a linear transform, right?
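[A short sketch of both points just made: a fully connected layer h = f(Wx + b) with an element-wise non-linearity, and a numeric check that, without f, stacking layers collapses to a single linear map. Shapes are illustrative; biases are omitted from the collapse check for simplicity.]

```python
import numpy as np

def layer(x, W, b, f=lambda z: 1.0 / (1.0 + np.exp(-z))):
    """A fully connected layer: h = f(Wx + b), with f applied element-wise."""
    return f(W @ x + b)

x = np.random.randn(6)
W1, b1 = np.random.randn(4, 6), np.zeros(4)
W2, b2 = np.random.randn(3, 4), np.zeros(3)

# Without a non-linearity, two layers collapse into one linear transform:
two_linear = W2 @ (W1 @ x)
one_linear = (W2 @ W1) @ x
print(np.allclose(two_linear, one_linear))   # True: no extra power gained

# With the non-linearity, the composition is genuinely more expressive:
h = layer(layer(x, W1, b1), W2, b2)
```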
So one linear transform is rotating and stretching the space somehow, and you can rotate and stretch the space again, but the result of that is just one bigger rotate-and-stretch of the space. So you don't get any extra power for a classifier by simply having multiple linear transforms. But as soon as you stick in almost any kind of non-linearity, then you get additional power. And so, you know, in general, what we're doing when we're doing deep networks, um - in the middle of them we're not thinking, "Ah, it's really important to have a non-linearity for thinking about probabilities" or something like that. Our general picture is: well, we want to be able to do effective function approximation or curve fitting. We'd like to learn a space like this, and we can only do that if we're sort of putting in some non-linearities which allow us to learn these kinds of curvy decision, um, patterns. And so- so f is used effectively for doing accurate [NOISE] fu- function approximation, or sort of pattern matching, as you go along. Okay. You are behind already. Um, okay. So that was the intro to baby neural networks. All good? Any questions? Yes? Yeah, like, er, if feature one and feature four, if- if you multiply them together, it's highly indicative of, like, the label y - can you get that product relationship with just, say, [NOISE] a couple of layers that are linear? Um, yes. Good question. So, in conventional stats, you have your basic input features, and when people are building something like a logistic regression model by hand, people often say, well, something that's really important for classification is looking at the pair of feature four and feature seven - that, you know, if both of those are true at the same time, something i-important happens - and so that's referred to normally in stats as an interaction term, and you can by hand a-add interaction terms to your model. So, essentially a large part of the secret here is having these intermediate layers: they can learn, build interaction terms by themselves. Yeah, so it's sort of, um, automating the search for higher-order terms that you wanna put into your model. Okay. I'll go on - other questions? Okay. Um, so, um, yeah. So here's a brief little interlude on a teeny bit more of NLP, which is sort of the kind of problem we're gonna look at for a moment. So this is the task of named entity recognition that I very briefly mentioned last time. So, um, if we have some text - wait, it isn't appearing here. Okay. Uh, okay. If we have some text, something that in all sorts of places people want to do is: I'd like to find the names of things that are mentioned. Um, and then normally, as well as finding the names of things, you'd actually like to classify them - say, to say some of them are organizations, some of them are people, um, some of them are places. And so, you know, this has lots of uses - you know, people like to track mentions of companies and people in newspapers and things like that. Um, when people do question-answering, a lot of the time the answers to questions are what we call named entities: the names of people, locations, organizations, pop songs, movie names - all of those kinds of things are named entities. Um, and if you want to sort of start building up a knowledge base automatically from a lot of text, well, what you normally wanna do is get out the named entities and get out relations between them. So this is a common task. So, how can we go about doing that?
And a common way of doing that is to say: well, we're going to go through the words one at a time, and they're gonna be words that are in a context, just like they were for word2vec, and what we're gonna do is run a classifier and we're going to assign them a class. So we're gonna say the first word is organization, second word is organization, third word isn't a named entity, fourth word is a person, fifth word is a person, and continue down. So we're running a classification of a word within a position in the text, so it's got surrounding words around it. Um, and so to say what the entities are - many entities are multi-word terms, and so the simplest thing you can imagine doing is just to say we'll take the sequence that are all classified the same and call that the e-entity, Shen Guofang or something like that. There's a reason why that's slightly defective, and so what people often use is the BIO encoding, um, that I show on the right, but I'm just gonna run ahead and not do that now. Um, so, it might seem at first that named entity recognition is trivial, because, you know, you have company names - Google and Facebook are company names - and whenever you see Google or Facebook you just say company, and how could you be wrong? But in practice, there's a lot of subtlety and it's easy to be wrong in named entity recognition. So this is sort of just some of the hard cases. So it's often hard to work out the boundaries of an entity. So in this sentence, "First National Bank donates two vans to Future School of Fort Smith" - so, there's presumably the name of a bank there, but is it "National Bank", and the "First" is just the first word of the sentence, which is cap-capitalized, like "First she ordered some food" or something? So it's kind of unclear what it is. Sometimes it's hard to know whether something's an entity at all. So at the end of this sentence, is "Future School" the name of some exciting kind of 21st-century school, or is it just meaning it's a future school that's gonna be built in this town, right? Is it an entity or not at all? Working out the class of an entity is often difficult. So, "to find out more about Zig Ziglar and read features by..." - what class is Zig Ziglar? Kinda hard to tell if you don't know. Um, it's actually a person's name, um, and there are various entities that are ambiguous, right? So Charles Schwab in text is 90% of the time an organization name, because there's the Charles Schwab brokerage. Um, but in this particular sentence here, "in Woodside where Larry Ellison and Charles Schwab can live discreetly among wooded estates", that is then a reference to Charles Schwab the person. So there's sort of a fair bit of understanding, variously, that's needed to get it right. Okay. Um, so what are we gonna do with that? And so this suggests, um, what we wanna do is build classifiers for language that work inside a context. Um, so, you know, in general, it's not very interesting classifying a word outside a context - we don't actually do that much in NLP. Um, but once you're in a context, um, then it's interesting to do, and named entity recognition is one case; there are lots of other places it comes up. I mean, here's a slightly cool one: there are some words that can mean themselves and their opposite at the same time, right? So to sanction something can either mean to allow something, or it can mean to punish people who do it; and to seed something can either mean to plant seeds, that you're seeding the soil, or it can mean to take the seeds out of something, like a watermelon, right?
You just need to know the context as to which it is. Okay. So, that suggests the task that we can classify a word in its context of neighboring words, and NER is an example of that. And the question is, how might we do that? And a very simple way to do it might be to say, "Well, we have a bunch of words in a row which each have a word vector from something like word2vec. Um, maybe we could just average those word vectors and then classify the resulting vector." The problem is that doesn't work very well, because you lose position information. You don't actually know anymore which of those word vectors is the one that you're meant to be classifying. So, a simple way to do better than that is to say, "Well, why don't we make a big vector of a word window?" So, here are words and they each have a word vector, and so to classify the middle word in the context of, here, plus or minus two words, we're simply going to concatenate these five vectors together and say now we have a bigger vector, and let's build a classifier over that vector. So, we're classifying this x window, which is then a vector in, ah, 5d dimensions if we're using d-dimensional word vectors. We can do that, um, in the kind of way that we did previously, which is, um, that we could say, "Okay, for that big vector we're going to learn W weights and we're gonna put it through a softmax classifier, and then we're going to make the decisions." Um, that's a perfectly good way to do things, and, um, for the purpose of what I want to get to in the last part of this - which is to start looking at, um, matrix calculus - you know, we could use this model and do a classifier and learn the weights of it, and indeed, um, the handout on the website that we suggest you look at does do it with a softmax classifier of precisely this kind. Um, but for the example I do in class, I try to make it a bit simpler. Um, and I want to do this, I think, very quickly, because I'm fast running out of time. So, one of the famous early papers of neural NLP, um, was this paper by Collobert and Weston, which was first an ICML paper in 2008, which actually just a couple of weeks ago, um, won the ICML 2018 test-of-time award. Um, and then there's a more recent journal version of it from 2011. And, um, they used this idea of window classification to assign classes like named entities, ti- to words in context, um, but they did it in a slightly different way. So, what they said is, "Well, we've got these windows" - and this is one with a, um, location named entity in the middle, and this is one without a location entity in the middle - "so, what we want to do is have a system that returns a score, and it should return a high score, just as a real number, in this case, and it should return a low score if there isn't, ah, a location name in the middle of the window, in this case." So, explicitly, the model just returns the score. So, if you had the top level of your neural network, a, and you just then dot-producted it with a vector u, you then kind of, with that final dot product, you just return a real number. They used that as the basis of their classifier. So in full glory, what you had is: you had this window of words, you looked up a word vector for each word, you then, um - well, you concatenated the word vectors for the window.
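[A small sketch of the window-vector construction just described: concatenating, not averaging, the word vectors so position is preserved. The lookup table here is a hypothetical stand-in with random vectors, and the example window is illustrative.]

```python
import numpy as np

d = 4   # pretend 4-dimensional word vectors, as in the slide example
# A hypothetical word-vector lookup (random stand-ins here):
vectors = {w: np.random.randn(d) for w in
           ["museums", "in", "Paris", "are", "amazing"]}
window = ["museums", "in", "Paris", "are", "amazing"]
x_window = np.concatenate([vectors[w] for w in window])
print(x_window.shape)   # (20,): a 5d-dimensional vector, position preserved
```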
You multiplied them by a matrix and added a bias to get a second hidden layer, which is a, and then you multiplied that by a final vector, and that gave you a score for the window, and you wanted the score to be large if it was a location and small if it wasn't a location. So, in this sort of pretend example where we have four-dimensional word vectors, um, that's meaning, you know, for the window, this is a 20 x 1 vector. Um, for calculating the next hidden layer we've got an 8 by 20 matrix plus the bias vector. Then, we've got this sort of 8-dimensional second hidden layer, and then we are computing a final real number. Okay. Um, and so crucially this is an example of what the question was about. Um, we've put in this extra layer here, right? We could have just said, here's a word vector - a big word vector of, of context - let's just stick a softmax or logistic classification on top to say yes or no for location. But by putting in that extra hidden layer, precisely, this extra hidden layer can calculate non-linear interactions between the input word vectors. So, it can calculate things like: if the first word is a word like "museum" and the second word is a preposition like "in" or "around", then that's a very good signal that this should be, ah, a location in the middle position of the window. So, extra layers of a neural network let us calculate these kinds of interaction terms between our basic features. Okay. Um, so there are a few more slides here that sort of go through the details of their model, but I'm gonna just skip those for now, because I'm a little bit behind. And at the end of it we've just got this score. So this is our model, which is the one that I just outlined, where we're calculating the score and we're wanting a big score, um, for location. And so, what we're gonna want to do is consider, um, how we can use this model, um, to learn, um, our parameters in a neural network. Um, so in particular, remember, it's the same story we've had before. We have a loss function J, and we're wanting to work out, um, the gradient, with respect to our current theta parameters, of the loss function. Then, we want to sort of subtract a little multiple of that, um, given by the learning rate, from our current parameters to get updated parameters, and if we repeatedly do that in stochastic gradient descent, we'll have better and better parameters, which give higher probability to the things that we're actually observing in our training data. So, the thing we want to know is, well, in general, how can we do this, um, differentiation and work out the gradient of our loss function? And so, I sort of wanted to spend the remaining time in this lecture, um, going through how we can do that by hand, um, using math, and then that'll lead into sort of discussing more generally the backpropagation algorithm, um, in the next one. Okay. So, if we're doing, um, gradients by hand, well, we're doing multi-variable calculus, multi-variable derivatives. But in particular, normally the most useful way to think about this is as doing matrix calculus, which means we're directly working with vectors and matrices to work out our gradients, and that's normally sort of much faster and more convenient for summarizing our neural network layers than trying to do it in a non-vectorized way. But that doesn't mean that's the only way to do it. If you're sort of confused about what's going on, sometimes thinking it through in the non-vectorized way can be a better way to understand what's going on and, um, make more progress.
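[A sketch of the window scorer with exactly the dimensions just described - a 20-dimensional window vector, an 8 x 20 matrix and bias, and a final dot product with u giving a single real-valued score. Random weights here are purely illustrative; in the actual model they would be learned.]

```python
import numpy as np

def window_score(x_window, W, b, u):
    """Collobert & Weston-style scorer: a real-valued score for
    'is the middle word of this window a location?'"""
    z = W @ x_window + b                 # (8,) pre-activations
    a = 1.0 / (1.0 + np.exp(-z))         # hidden layer: interaction features
    return u @ a                         # dot with u -> a single real number

s = window_score(np.random.randn(20),    # the 20 x 1 window vector
                 np.random.randn(8, 20), # the 8 by 20 matrix
                 np.zeros(8),            # the bias vector
                 np.random.randn(8))     # the final vector u
```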
So, like when last time I did the word2vec derivatives, when I was writing too small on that board, sorry, that was doing it in a non-vectorized way, working out the weights and talking about them individually. But here we're going to do it with vectors and matrices. And again, look to the lecture notes, which cover this material in more detail. In particular, so that no one misses it, let me just clarify what I mean by lecture notes. If you look at the course syllabus, in the left-hand column there are the slides that you can download, and straight under the slides it says lecture notes. That's what I mean by the lecture notes. The middle column then has some readings, and there are actually some additional things there that cover similar material, so they might be helpful as well. But the thing that's closest to what I'm about to present is the lecture notes that appear immediately under the slides link. Okay. So my hope here is the following: if you can't remember how to do single-variable calculus, sorry, you're basically sunk and might as well leave now. [LAUGHTER] I'm assuming you know how to do single-variable calculus, and I'm assuming you know what a vector and a matrix is. But I sort of hope that even if you never did multi-variable calculus, or you can't remember any of it, for what we have to do here it's not that hard and you can do it. So, here's what you do. All right. If we have a simple function f of x equals x cubed, its gradient is just its derivative. And the gradient is the slope, right? It says how steep or shallow the slope of something is, and also the direction of the slope once we go into multiple dimensions. So its derivative is 3x squared, and if you're at the point x equals 3, you have a slope of 27, which is very steep. Okay. So what if we have a function with one output but now many inputs? So we're doing the sort of function that was like the dot products, where we compute u transpose v or w transpose x to calculate a value. Well, then what we calculate is a gradient, which is a vector of partial derivatives with respect to each input. You take the slope of the function as you change x1, the slope of the function as you change x2, through the slope of the function as you change xn, and each of these you can just calculate as if you were doing single-variable calculus. You put them all in a vector, and that's then giving you the gradient; and the gradient in multi-dimensional spaces gives you the direction and slope of a surface tangent to your multi-dimensional function f. Okay. So that's getting a bit scarier, but it gets a little bit scarier than that, because if we have a neural network layer, we then have a function which has n inputs, which are the input neurons, and m outputs. If that's the case, you then have a matrix of partial derivatives, which is referred to as the Jacobian. In the Jacobian, you're taking these partial derivatives with respect to each output along the rows and with respect to each input down the columns. And so you're getting these m by n partial derivatives, considering every combination of an output and an input.
But again, you can fill in every cell of this matrix just by doing single-variable calculus, provided you don't get yourself confused. Okay. Then, as we already saw when we were doing word2vec, a central tool we have to use to work out the derivatives of something like a neural network model, where we run a sequence of functions one after another, is the chain rule: we use the chain rule to work out derivatives when we compose functions. So if we have one-variable functions, say z equals 3y and y equals x squared, and we want to work out the derivative of z with respect to x, we say: aha, that's a composition of two functions, so I use the chain rule. And so what I do is multiply the derivatives. So I take dz/dy, so that's 2x, um, wait, [NOISE] sorry, I said that wrong, right? Is my example wrong? Oh yeah, it's right: dz/dy is just three. That's the derivative of the top line, and then dy/dx is 2x. I multiply those together and I get the answer: the derivative of z with respect to x is 6x. Okay. This bit then gets a little bit freakier, but it's true: if you have lots of variables at once, you simply multiply the Jacobians and you get the right answer. So now imagine our neural net. This is our typical neural net layer, right? We have our weight matrix multiplied by the input vector, plus the bias, and then we put it through a non-linearity. And if we want to know the partials of h with respect to x, we just say: huh, it's a function composition, so this is easy to do. We work out our first Jacobian, which is the partials of h with respect to z, and then we just multiply it by the partials of z with respect to x, and we get the right answer. Easy. So, here's an example Jacobian which is a special case that comes up a lot, so it's just good to recognize this one, which we'll see with our neural net. One of the things we have are these element-wise activation functions. So we have h equals f of z. What is the partial derivative of h with respect to z? Well, remember that we apply this element-wise, so we're actually saying h_i equals f of z_i. So, formally, this function has n inputs and n outputs, and its partial derivatives are going to be an n by n Jacobian. But think about what's happening when we work out the terms of this: how does f of z_i change as you change z_j? Well, if j is not equal to i, it's gonna make no difference at all, right? If my f function is something like putting it through the logistic function, or anything else, like absolute-valuing a number, it's gonna make no difference to the calculation of f of z_i if I change z_j, because z_j is just not in the equation. And so, therefore, the only terms that are actually going to be non-zero are the terms where i equals j. So for working out these partial derivatives: if i does not equal j, it's zero; if i does equal j, then we have to work out, with single-variable calculus, the derivative of the activation function. And so this is what a Jacobian looks like for an activation function: it's a diagonal matrix.
Everything else is zero: we take the activation function, work out its derivative, and evaluate that for the different z_i values. Okay. So that's the Jacobian for an activation function. What are the other main cases we need for a neural network? These I'll go through a little more quickly here, and the lecture notes go through them more slowly, but they're kind of similar to what we saw in the very first class. If we want to work out the partial derivatives of Wx plus b with respect to x, what we get is W. And if we want to work out the partial derivative of Wx plus b with respect to b, we get an identity matrix, because b is effectively multiplied by an always-on vector of ones, so you're just getting the ones coming out to preserve the b. And this was the case we saw when we were doing the word vectors: if you have a vector dot product of u and h and you ask what the partial derivative of that is with respect to u, you get out h transpose. If you haven't seen those before, look at the lecture notes handouts, see if you can compute them at home and convince yourself they make sense; but for the moment we're gonna believe those and use them to see how we can work out derivatives inside the neural network. Okay. So here's the same neural network we saw before. We have a window of words, we're looking at word vectors, we're putting them through a hidden layer, and then we're just doing a vector dot product to get this final score. And what we [NOISE] want to do, to be able to train our neural network, is find out how s changes depending on all the parameters of the model: the x, the W, the b, the u. So we want to work out the partial derivatives of s with respect to each of those, because then we can say: okay, if you move b up, the score gets better, which is good if there actually is a location in the middle, and therefore we'll want to nudge up elements of b appropriately. Okay, and so I'm just doing the gradient with respect to the score here, and I skipped over those couple of slides. So if you're just staring at this picture and asking, well, how do I work out the partial derivative of s with respect to b? Probably it doesn't look obvious. The first thing you want to do is break up the equations into simple pieces that compose together. So you have the input x, and that goes into z equals Wx plus b, and then you compose that with the next thing, h equals f of z, our activation function, and then this h goes into the next thing, s equals u transpose h. So we've got this sequence of functions. And pretty much you want to break things up as much as you can. I could have broken this up even further: I could have said z1 equals Wx, z equals z1 plus b. But it turns out that if you've just got things added and subtracted, you can do that in one step, because addition passes through cleanly when doing the derivatives; anything else that composes together, though, you want to pull apart into pieces. Okay. So now our neural net is doing a sequence of function compositions, and we know how to do that: the chain rule. So if you wanna work out the partials of s with respect to b, it's just going to be the product of the derivatives of each step along the way.
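As an aside, if you'd rather check those asserted Jacobians than believe them, here is a small finite-difference sketch (my own verification code, not from the lecture; the sizes are arbitrary) comparing each claimed Jacobian against numerical derivatives:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
W = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = rng.standard_normal(n)
u = rng.standard_normal(m)
eps = 1e-6

def num_jacobian(f, v):
    # Numerical Jacobian of f at v: one perturbed input per column.
    out = f(v)
    J = np.zeros((out.size, v.size))
    for j in range(v.size):
        dv = np.zeros_like(v); dv[j] = eps
        J[:, j] = (f(v + dv) - f(v - dv)) / (2 * eps)
    return J

# Claim 1: d(f(z))/dz = diag(f'(z)) for an element-wise activation f.
z = W @ x + b
assert np.allclose(num_jacobian(np.tanh, z), np.diag(1 - np.tanh(z) ** 2), atol=1e-4)

# Claim 2: d(Wx + b)/dx = W and d(Wx + b)/db = I.
assert np.allclose(num_jacobian(lambda v: W @ v + b, x), W, atol=1e-4)
assert np.allclose(num_jacobian(lambda v: W @ x + v, b), np.eye(m), atol=1e-4)

# Claim 3: d(u . h)/du = h transpose (a row vector).
h = np.tanh(z)
assert np.allclose(num_jacobian(lambda v: np.atleast_1d(v @ h), u), h[None, :], atol=1e-4)
```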
So it's gonna be the partial of s with respect to h, times the partial of h with respect to z, times the partial of z with respect to b, and that will give us the right answer. So then all we have to do is actually compute that. This slide just shows that we're taking the partials of each step of that composition. Okay. So now we want to compute it, and this is where I'm going to use the Jacobians that I asserted without much proof on the preceding slide. First of all, we have ds/dh. Well, s is just the dot product of two vectors, so the Jacobian for that is just h transpose. Okay, that's a start. Then we have h equals f of z. That's the activation function, so the Jacobian of that is the diagonal matrix made of the element-wise derivative of the function f. And then we have the partial of z with respect to b, and that's the bit that comes out as the identity matrix. And so that's giving us our calculation of the partial of s with respect to b: the identity matrix goes away, so we end up with this composition of h transpose times f prime of z. Okay, suppose we then want to go on and compute the partial of s with respect to W. Well, our starting point is exactly the same chain rule, where we work out each of the stages: first you work out the z from the Wx part, then you put it through the non-linearity, then you do the dot product of the vectors. So that part is the same. And what you should notice is that if you compare the partial of s with respect to W versus s with respect to b, most of the factors are the same, and it's only the part at the end that's different. And that makes sense in terms of our neural net, right? In our neural net, the W and the b were coming in at the same place, and once you've done some stuff with them, you put things through the same activation function and do the same dot product to create a score. You're doing the same calculations that you then compose with, so it makes sense that you get the same partial derivatives occurring at that point. Oops. And so effectively, these partial derivatives correspond to the computations in the neural network that are above where W and b are. Those are commonly referred to as delta, written delta, which is different from the partial derivative d. Delta is referred to as the error signal in neural network talk. It's what you calculate as the partial derivatives above the parameters that you are working out the partial derivatives with respect to. A lot of the secret, as we'll see next time, a lot of the secret of what happens with backpropagation, is that we want to do efficient computation, in the sort of way that computer science people like to do efficient computation. And precisely what we want to notice is that there is one error signal that comes from above, and we want to compute it once, and then reuse it when calculating the partial derivatives with respect to both W and b. Okay. So there are two things still to do. One is that it'd be kind of useful to know what the partial derivative of s with respect to W actually looks like: is it a number, a vector, a matrix, a three-dimensional tensor? And then we actually want to work out its values, and to work out its values we're going to still have to work out the partial derivative of z with respect to W.
But first of all, let's just try to work out its shape: what kind of shape does it have? This is actually a bit tricky, and is sort of the dirty underbelly of doing this kind of matrix calculus. Since our weight matrix W is an n by m matrix, for the partial of s with respect to W we have a function with n times m inputs (all of the elements of W) and simply one output, which is our score. So, according to what I said before, that makes it sound like we should have a 1 by n-times-m Jacobian. But it turns out that's not really what we want, because what we want is to use what we calculate inside the stochastic gradient descent update algorithm: we'd like to take the old weight matrix and subtract a bit from it to get a new weight matrix. So it'd be kind of nice if the shape of our Jacobian was the same shape as W. And in general, what you always want to do with neural nets is follow what we call the shape convention, which is that we represent the Jacobian so it's in the same shape as the input. This whole thing is kind of the bad part of doing matrix calculus: there's a lot of inconsistency in how people represent matrix calculus. If you go to different fields, like economics and physics, some people use a numerator convention, some people use a denominator convention. We're using neither of those. We're going to use this shape convention, so we match the shape of the input, which makes it easy to do our weight updates. Okay. So that's what we want the answer to look like. Then the final thing we need to do to work out the partial of s with respect to W: we have the error signal delta, which is gonna be part of the answer, and then we want to work out the partial of z with respect to W. Well, what's that going to be? It turns out, and I'm about to be saved by the bell here since I'm down to two minutes left, it turns out that what we end up with is the outer product of delta and x. So effectively we've got the local error signal above W, and then we have the input x, and we're working out an outer product of them. And the way to think about this is: for the W's, we've got the elements of the W matrix, these different connections between our neurons, and each one of these is connecting one output to one input. So we're making this matrix of partial derivatives, the same shape as W, where each entry is the product of the error signal for the appropriate output multiplied by the appropriate input, and those give us the partial derivatives. I'm skipping ahead quickly in my last one minute. Okay. So this is what I said: I've used the shape convention. I'm going to skip that. Okay. So, I ran out of time a teeny bit at the end, but I think hopefully that's conveyed most of the idea of how you can use the chain rule and work out the derivatives, and work them out in terms of these vector and matrix derivatives. [NOISE] And essentially what we wanna do for backpropagation is ask how we can get a computer to do this automatically for us, and to do it efficiently. That's what deep learning frameworks like TensorFlow and PyTorch do, and how you can do that we'll look at more next time.
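Since the lecture ran out of time right here, here is a minimal NumPy sketch (my own reconstruction of the by-hand derivation above, with illustrative names and sizes) showing the whole thing end to end: the error signal delta is computed once and reused, the gradient with respect to W comes out as the outer product of delta and x, and because everything follows the shape convention, an update is a plain subtraction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 8                         # window input size and hidden size from the example
x = rng.standard_normal(n)
W = rng.standard_normal((m, n))
b = rng.standard_normal(m)
u = rng.standard_normal(m)

# Forward pass: z = Wx + b, h = f(z), s = u . h
z = W @ x + b
h = np.tanh(z)
s = u @ h

# The error signal delta, computed once and then reused:
# delta = (ds/dh) elementwise-times f'(z)
delta = u * (1 - np.tanh(z) ** 2)    # shape (m,)

# Shape-convention gradients: each has the same shape as its parameter.
grad_u = h                           # ds/du, shape (m,)
grad_b = delta                       # ds/db, shape (m,): the identity Jacobian drops out
grad_W = np.outer(delta, x)          # ds/dW, shape (m, n): outer product of delta and x
grad_x = W.T @ delta                 # ds/dx, shape (n,): flows on down to the word vectors

# Because shapes match, a gradient-style update is a plain subtraction
# (shown for shape only; with a real loss you'd use its gradient here):
lr = 0.01
W_new = W - lr * grad_W              # same shape as W

# Sanity check one coordinate of grad_b against a finite difference:
eps = 1e-6
b2 = b.copy(); b2[0] += eps
assert abs((u @ np.tanh(W @ x + b2) - s) / eps - grad_b[0]) < 1e-4
```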
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_13_Contextual_Word_Embeddings.txt
Okay hi everyone. Let's get started again. Okay. So, first of all, a couple of announcements. First of all, thanks to everyone who filled in our mid-quarter survey; we've actually gotten great participation in that. Here are my two little Pac-Man figures. The Pac-Man figures mean that almost everyone thinks the lectures are at the right pace, and those that don't are pretty much evenly divided. If we go to how challenging Assignment three was, slightly more people thought it was too easy than too hard, so I guess we're setting about rectifying that with assignments four and five. [NOISE] There were a whole bunch of other questions, and we've been trying to absorb all the feedback. One of the questions was what people wanted most from the remaining lectures. I guess the good news here is that we're very good at predicting what people wanted; that, or else everybody just looked ahead in the syllabus and wrote down what it said was ahead. But the most popular four answers for topics people wanted in the remaining lectures were Transformers and BERT, both of which are gonna be covered this week; question-answering, which we talked about last week; and then text generation and summarization, and you guys get Abby back next week to talk about that. A lot of people also answered this question in a different way, in terms of what style of material they wanted: some people emphasized new research and the latest updates from the field (I guess we'll get some of that today as well), some people are more interested in successful applications in industry (we're trying to do a bit of that), and cool new neural architectures. The bottom answer wasn't the most popular one, I'll admit, but at least a few people wished that we were teaching more linguistic stuff. And that is something that I actually feel a bit awkward about, with the way things were merged in CS224N with deep learning. The truth of the matter is that in the early part of the course there's so much to cover with neural networks, backpropagation, different neural net architectures and so on, that the reality is we teach rather less linguistic material than we used to in this class. For the last four weeks of the class, though, we really do try and cover some more linguistic topics, so look forward to that. Okay, announcements. We've made a couple of deadline changes. Firstly, a number of people have mentioned that they think assignment five is a bit tough, and so we're giving people one extra day to do assignment five. I realize that in one sense one extra day is not a ton, but there's a complex balance here, because on the other hand we don't really want to undermine the time that people have available for final projects. And if you're one of the people who hasn't yet started assignment five, we do really encourage you to get underway on it. In the reverse direction, we decided that the project milestone was really too late if we're going to be able to give you feedback on it that you can usefully make use of, so we're moving the project milestone date two days earlier. And we've also gotten everyone's project proposals, and our plan and hope is to get them back to everybody on Friday. Yes, so, a lot of things moving.
And finally, on other announcements: this Thursday is our first invited speaker, and so if you're an in-person student you're meant to be here, and if you're not able to be here, you should know about our reaction paragraph policy. I actually stuck up on the Piazza pinned posts something about reaction pieces and attendance, with an example of a reaction piece from a past class, to make it a little more concrete what's expected there. The idea is that we're hoping for something that isn't a ton of work; you can just write 100, 150 words, a few sentences. But we want you to pick out a specific thing that was interesting and write a couple of sentences about what it was and what your thoughts are about it. Not just some very generic statement like: this was a lecture about transformers, he talked about transformers, and it was interesting. That is not what we want for the reaction piece. Okay. So, here's the plan for today. What I want to talk about today is the exciting recent work on contextual word representations. When I was thinking about what I was gonna say, I wanted to say: oh, this is the most exciting thing in deep learning for NLP in the last five years. But then something's just completely wrong with that, because really this is the most exciting thing in deep learning that happened in 2018. Things move very quickly in deep learning at the moment, and it's not really fair to say it's got five years of life. But it's a very exciting thing that happened last year, and we'll talk about that. Okay. So, we'll talk about the early stuff, ELMo and ULMfit, then transformer architectures briefly, and then go on to talk about the BERT model, which has been quite prominent lately. So, let's just recap. Let's go backwards a bit first to think about where we've been, where we are now, and why we might want something more. Up until now, we've just had one representation for words, which is what we learned at the beginning of class: there was a word, you trained a word vector for it, and that's what you used in your model. And you could do that with algorithms like word2vec, GloVe, or fastText, which I mentioned last week. So, on this progression of ideas in deep learning: when deep learning for NLP, or just the general resurgence of neural networks for NLP, came about at the beginning of this decade, these pre-trained word vectors, pre-trained unsupervised over a large amount of text, were completely seen as the secret sauce. They were the thing that transformed neural networks for NLP from something that didn't really work to something that worked great. This is actually an old slide of mine, a slide I first made for a 2012 ACL tutorial and then used in lectures around 2013, 2014. And so this was the picture in those years. This was looking at two tasks, part-of-speech tagging and named entity recognition, which I'll use quite a bit today. The top line shows the state of the art, which was a traditional categorical feature-based classifier of the kind that dominated NLP in the 2000s decade. And what the next line showed is that if you took the same dataset, trained a supervised neural network on it, and asked how good its performance was, the story was: it wasn't great.
Part-of-speech tagging always has very high numbers, for various reasons, so perhaps the more indicative numbers to look at are the named entity recognition ones. This was sort of the "neural nets sucked" era, right? The reason why last decade everybody used categorical feature-based classifiers, CRF and SVM kinds of classifiers, was that, if you look, they worked eight percent better than a neural network. Why wouldn't you use them? But then what happened was that people came up with this idea that we could do unsupervised pre-training of word representations, to come up with word vectors for words. And in those days, this was very hard to do, both because of the kind of algorithms and the kind of machines that were available. So Collobert and Weston, in 2011, spent seven weeks training their unsupervised word representations, and at the end of the day, they were only 100-dimensional word representations. But this was the miracle breakthrough: you put in these unsupervised word representations, and now the neural net is getting to 88.87, so it's almost as good as the feature-based classifier. And then, like any good engineers, they did some hacking with some extra features, because they had some stuff like that, and they got a system that was then slightly better than the feature-based system. Okay. So that was the picture: having these word representations, pre-trained in an unsupervised manner, was the big breakthrough and the secret sauce that gave all the oomph that made neural networks competitive. But then a funny thing happened. After people had had some of these initial breakthroughs, which were all about unsupervised methods for pre-training (it was the same in vision; this was the era in vision where you were building restricted Boltzmann machines and doing complicated unsupervised pre-training techniques on them as well), somehow, after people had discovered that and started to get good results with it, people started to discover: well, actually we have some new technologies for non-linearities, regularization, and things like that, and if we use those technologies, we can just go back to good old supervised learning, and shockingly, it now works way better inside neural networks. So if you go ahead to what I'll call the 2014 to 2018 picture, the picture is actually very different. The results I'm gonna show you here are from the Chen and Manning neural dependency parser that we talked about weeks ago. The picture there was that despite the fact that this dependency parser is trained on a pretty small corpus, a million words of supervised data, you can just initialize it with random word vectors and train a dependency parser, and to a first approximation, it just works fine: you get around 90 percent accuracy for an English dependency parser. Now, it is the case that you could instead use pre-trained word embeddings, and you do a bit better, about one percent better. And so this was the new world order: these pre-trained unsupervised word embeddings are useful, because you can train them on a lot more data and they can know about a much larger vocabulary.
They help with rare words and things like that, and they give you a percent, but they're definitely no longer the night-and-day thing needed to make neural networks work that we used to believe in. I'm just gonna deviate here from the main narrative to give one more tip for dealing with unknown words with word vectors, in case it's useful for people building question answering systems. So, for word vectors on unknown words: the commonest thing historically is that you've got your supervised training data, you define a vocab, which might be the words that occur five times or more in your supervised training data, and you treat everything else as UNK, and you also train one vector for UNK. But that has some problems: you have no way to distinguish different UNK words, either for identity or for meaning, and that tends to be problematic for question answering systems. One way to fix that is what we talked about last week: you just say, oh, words are made out of characters, I can use character representations to learn word vectors for other words. And you can certainly do that; you might wanna try that, though it adds some complexity. But especially for things like question answering systems, there are a couple of other things you can do that work considerably better, and they've been explored in this paper by Dhingra et al. from 2017. The first one is to say: well, when you encounter new words at test time, your pre-trained word embeddings probably have a much bigger vocabulary than your actual system does. So any time you come across a word that isn't in your vocab but is in the pre-trained word embeddings, just grab the word vector of that word and start using it; that'll be a much more useful thing to use. And then there's a second possible tip: if you see something that's still an unknown word, rather than treating it as UNK, you assign it, on the spot, a random word vector. This has the effect that each word gets a unique identity, which means that if you see the same word in the question and in a potential answer, they will match together beautifully, in an accurate way that you're not getting with just UNK matching. Those can be kind of useful ideas to try. Okay, end digression. So, up until now, we just had this one representation of words: we ran word2vec and got a word vector for each word. That was useful; it's worked pretty well. But it had some big problems. So what were the big problems of having one word vector per word? Yes? A lot of words have like one spelling, but a whole bunch of meanings. Right; typically, you have one string of letters which has a whole bunch of meanings. Words have a ton of senses. And yeah, that's the biggest and most obvious problem: we're collapsing together all the meanings of a word. We talked a bit about how one solution to that is that you could distinguish word senses and have different word vectors for them. And I then said something about how you could also think of this word vector as a sort of mixture of the senses, and maybe your model could separate it. But it seems like we might want to take that more seriously.
And one way that we could take that more seriously is to say: well, really, traditional lists of word senses are themselves a crude approximation. What we actually want to know is the sense of the word inside a particular context of use. What I mean by that is this: we distinguish different senses of a word, right? Say, for the word star, there's the astronomical sense and there's the Hollywood sense, and they're clearly different. But if we then go to what I'm calling the Hollywood sense, I could say: well, wait a minute. There are movie stars, and there are rock stars, and there are R&B stars, and there are country stars. Those are all different senses, and in certain contexts one or another of them would be evoked. So it's very hard, if you're trying to actually enumerate the senses of a word, to say which ones count as different or the same. Really, you want to know what a word means in a context. There's a second limitation of these word vectors, which we haven't really talked about and which is less obvious, but it's also something we might want to fix, and at least one of the models we discuss today takes some aim at it. That is: we just have one vector for a word, but there are different dimensions to a word. Words can have different meanings, some sort of real semantics, or words can have different syntactic behavior, like different parts of speech or grammatical behavior. In some sense, arrive and arrival have almost the same semantics, but they're different parts of speech: one is a verb and one is a noun, so they can appear in quite different places, and you'd want to do different things with them in a dependency parser. And there are even other dimensions: words also have register and connotation differences. You can probably think of lots of different words for a bathroom, and a lot of those words mean semantically the same thing but have rather different registers and connotations as to when they're appropriate to use. And so we might want to distinguish words on that basis as well. These are the kinds of things we want to solve with our new contextual word embeddings. So I've said that up until now we just had these word vectors; words just had one vector. But if you actually think about it, maybe that's wrong. Maybe we never had a problem, or at any rate, we solved it six classes ago. Because if you remember back [NOISE] to when we started talking about neural language models: what did a neural language model do? At the bottom, you fed into it the word vectors, but then you ran across them one or more recurrent layers, something like an LSTM layer, and it was calculating these representations that sit above each word. The role of those hidden states is a bit ambivalent: they're used for prediction, and they're used for the next hidden state and the output states and so on. But in many ways you can think: huh, these representations are actually representations of a word in context. And if you think about what happened with the question answering systems, that's exactly how they were used, right? We ran LSTMs backwards and forwards over the question and the passage, and then we said: okay, those are a good representation of a word's meaning in context, let's start matching them with attention functions, et cetera.
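To make that observation concrete, here's a tiny PyTorch sketch (my illustration; the sizes and names are arbitrary, not the lecture's actual model): run a bidirectional LSTM over a sentence, and the hidden state at each position is exactly a context-specific representation of that word:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, emb_dim, hidden_dim = 1000, 50, 100
seq_len = 6                                # e.g. "New York is located ..."

embed = nn.Embedding(vocab_size, emb_dim)
bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

token_ids = torch.randint(0, vocab_size, (1, seq_len))  # a stand-in sentence
word_vecs = embed(token_ids)               # (1, seq_len, emb_dim): context-independent
contextual, _ = bilstm(word_vecs)          # (1, seq_len, 2 * hidden_dim)

# contextual[0, t] now depends on the whole sentence, not just word t:
# exactly the kind of representation TagLM and ELMo feed into downstream tasks.
print(word_vecs.shape, contextual.shape)
```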
So it sort of seemed like we'd already invented a way to have context-specific representations of words. And effectively, the rest of the content of this lecture is basically no more complex than that. It took a while, but people woke up and started to notice: huh, really, when you're running any language model, you generate a context-specific representation of each word. Maybe if we just took those context-specific representations of words, they'd be useful for doing other things. There are a few more details, but that's really the summary of this entire lecture. So one of the first things to do that was a paper that Matt Peters wrote in 2017, the year before last, and this was a predecessor to the modern versions of these context-sensitive word embeddings. Together with co-authors, he came up with a paper called TagLM, and it essentially already had all the main ideas. What was wanted was this: we want to do better at tasks such as named entity recognition, and what we'd like is to know about the meaning of a word in context. But standardly, if we're doing named entity recognition, we just train it on half a million words of supervised data, and that's not much of a source of information for learning about the meaning of words in context. So why don't we adopt a semi-supervised approach? And that's what they do. We start off with a ton of unlabeled data, and from that unlabeled data we can train a conventional word embedding model like word2vec. But we can also, at the same time, train a neural language model, something like a bi-LSTM language model. Then, when we want to learn our supervised tagger at the top, what we're gonna do is say: for the input words New York is located, we can not only use the word embedding, which is context-independent, but we can take our trained recurrent language model, run it over this input too, generate hidden states in our bi-LSTM language model, and feed those in as features into our sequence tagging model, and those features will let it work better. Here's a second picture that runs through this in much greater detail. We're assuming that we have trained a bi-LSTM language model on a lot of unsupervised data. Then what we wanna do is named entity recognition for New York is located. The first thing we do is say: let's just run New York is located through our separately trained neural language model. We run it through a forward language model, we run it through a backward language model, we get from that a hidden state representation for each word, we concatenate the forward and backward ones, and that gives a concatenated language model embedding, which we'll use as features in our named entity recognizer. Then, for the named entity recognizer itself, which we're gonna train supervised: we have the same sentence, so we can both look up a word2vec-style token embedding for each word, and use what we learned about character-level CNNs and RNNs to build a character-level representation for it, which we also concatenate, to have two representations. And we feed these representations into a bi-LSTM layer.
But then, when we get the output of this bi-LSTM layer, as well as this normal output, we concatenate with each output what we get from our neural language model. So each of these things becomes a pair of states: one that's spat out from the first bi-LSTM layer, concatenated with something from the neural language model. And that concatenated representation is then fed into a second layer of bi-LSTM. Then, from the output of that, we do the usual kind of softmax classification, where we're giving tags like beginning-of-location and end-of-location, to say that New York is a location, and then is will get another tag to say it's not a location. Does that make sense? So the central thing is: having seen that these representations we get from bi-LSTMs are useful, we're just going to feed them into supervised models as we train them, and the idea is that this will give us better features for words, some kind of representation of their meaning in context, which will allow us to learn better named entity recognizers, or whatever it is. Maybe I should have put this slide earlier, but it was meant to remind you what a named entity recognizer is. I hope you remember that: something where we're going to find and label entities, for things like person, location, date, organization. Anyway, doing this worked. Here's a little bit of history. The most famous named entity recognition dataset is this CoNLL 2003 dataset, which actually exists in multiple languages, but whenever people say CoNLL 2003 and don't mention a language, they mean the English version of it. That's the way the world works. Okay, so it's been around for roughly 15 years now. It was originally a competition, right? 2003 was the original bake-off. My group actually took part in it; I think we got third or fourth place or something, and our F1 score was 86. The people who won were from IBM Research Labs, and they got 88, almost 89. But a difference between these two things is that our system was a single clean categorical machine-learning model, whereas the IBM one was not only an ensemble of four different machine-learning models plus gazetteers, it also fed in the output of two other old NER systems that IBM people had trained years ago on different data. So, I guess it worked for them, but it was a fairly complex system. Here's another system from Stanford. This was our classic Stanford NER system, which is widely used. This was using a conditional random field model, which generally dominated the second half of the 2000s and the first half of the 2010s for doing NER, and it was a bit, but not hugely, better than the 2003 systems. This system here was sort of the best categorical CRF system ever built, but rather than only using the training data to build the model, as our system did, it threw in Wikipedia and other stuff to make it work better, and that got you to about 90.8 F1. Essentially, once bi-LSTM-style models started to be known and used in NLP, that was when people were able to build systems, trained just on the training data, that worked a lot better; because essentially you're going from the same data, from this system to that system.
So you're getting about a 4 percent gain from it, even though it wasn't making use of Wikipedia and things like that; and this Ma and Hovy system is pretty well known, getting about 91.21. Okay, but now go to this TagLM system. Matt Peters and co. have a base system that is similar to the Ma and Hovy system, a little bit worse on its own. But the point is that using the neural language model is a useful oomph-giver which takes the results up. Not night and day, but slightly over a percent, and that gave them the best NER system then available. So that proved that these contextual word representations really had some power and started to be useful. And there's white space at the top of the chart because we'll get back to more of this later. There are some details on their language model. Some of their findings are that it's useful to have a bidirectional language model, not a unidirectional one; that you need a big language model to get much in the way of gains; and that you need to train this language model over much more data: it doesn't work if you're just training it over your supervised training data. Another model that was around was CoVe, but I think I'll skip that. Okay. So then, the next year, Matt Peters and a different set of colleagues came up with an improved system called ELMo, and effectively this was the breakthrough system. This was the system that everybody noticed, saying: wow, these contextual word vectors are great, everyone should be using them, not traditional word vectors. Yes? I have a simple question: imagine re-training a system, what exactly, what measure [inaudible] It's pre-trained because this piece over here, a big neural language model, is trained first. And there's an important thing I forgot to say, so thank you for the question. In one sense it's pre-trained because it was trained first, but the main reason people think of this as pre-training is that after you've trained it, it is frozen. It's just something you can run, with fixed parameters, which gives you a vector that is your contextual word representation at each position, and then that's just used in this system. So when you're training this system, there's no gradient flowing back into the neural language model, changing and updating it; it's just fixed. That's the sense in which people talk about pre-training: it's normally a model that you trained somewhere else, which you're using to give features, but which isn't part of the model that you are now training. Yeah? [inaudible] Well, I wouldn't quite call it reconstruction. It's unsupervised in the sense that this is a language model: you're training it to predict the next word. Here are words one to k; what is the k-plus-oneth word? You use a cross-entropy loss, and repeat for each position. [NOISE] Yes, so, having gone through TagLM in some detail: in some sense, the difference between TagLM and ELMo is kind of small; it's in the details. To a first approximation, they're doing exactly the same thing again, but a little bit better. So, what are the things that are different?
They do the bidirectional language model a bit differently, and actually one of their concerns was to come up with a compact language model that would be easy for people to use in other tasks, even if they don't have the beefiest computer hardware in the world. So they decided to dispense with word representations altogether and just use character CNNs to build word representations, because that lessens the number of parameters you have to store, the big matrices you have to use. They expanded the hidden dimension to 4,096, but then they project it down to 512 dimensions with a feed-forward projection layer; that's a fairly common technique to reduce the parameterization of the model, so that you have a lot of parameters going in the recurrent direction but need much smaller matrices for including the input at the next level. Between the layers they use a residual connection, and they do a bit of parameter tying. So it's all in the little details there. But there's another interesting thing that they did, which was an important innovation of ELMo, so we should get this bit. In TagLM, what was fed from the pre-trained LM into the main model was just the top level of the neural language model stack, and that was completely standard, de rigueur, in those days: you might have three layers of neural language model, and you regard the top level as the one that has really captured the meaning of the sentence, and the lower layers as the processing that led up to it. They had the idea that maybe it would be useful to actually use all the layers of the biLSTM neural language model; maybe not just the top layer but all layers would be useful. So there are these kind of complex equations, but essentially the point is this: for a particular position, say word seven in the language model, we take the hidden state at each level of our neural language model stack, we learn a weight for each level, and we sum them. So this is a weighted average of the hidden layers at each position, and that will be used as our basic representation. And they found that that gave quite a bit of extra usefulness, and different tasks could prefer different layers. There's one other bit here, which is that they learn a global scaling factor gamma for a particular task. This allows them to control for the fact that for some tasks the contextual word embeddings might be really useful and for other tasks they might not be so useful; you're learning a specific usefulness for the entire task. Okay. So that's the new version of the language model, and it allows for this idea that maybe there are more syntactic meanings of a word and more semantic meanings of a word, that possibly those could be represented at different layers of your neural language model, and that for different tasks you can weight them differentially. So that's the basic model: you run your biLSTM as before to get representations of each word. And then the generic ELMo recipe was: with that frozen language model, you feed its representations into some supervised model, depending on what the task is. They say in the paper that how you do this maybe depends on the task. You might want to concatenate it to the intermediate layer, just as TagLM did; that might be fine.
But it might also be useful to make use of these ELMo representations when producing outputs. So if you're doing something like a generation system, you might feed in the ELMo representation again before you do the softmax to find the output. They left it flexible as to how it was used, but the general picture was kind of like what we saw before; indeed, I'm reusing the same picture. You've calculated an ELMo representation for each position as a weighted average, and then you're concatenating that to the hidden state of your supervised system and generating your output. And anyway, one way or another, they were able to do this, and with the little improvements, that gave them about an extra 0.3 percent in named entity recognition. Now, that sounds like not very much, and you might conclude from this: why the excitement? [LAUGHTER] And in some sense that's right, to the extent that the interesting idea here had really come up in the TagLM paper, which gave a much better gain. But the reason everyone got really excited was that in the ELMo paper, they then showed this isn't something you can do one-off to improve a named entity recognizer: you can take these ELMo representations and use them for pretty much any NLP task, and they can be very useful and give good gains. Essentially, people got excited because of the data in this table. Here we're taking a whole bunch of very different tasks: there's SQuAD question answering, there's natural language inference, there's semantic role labeling, there's coreference, there's named entity recognition, there's sentiment analysis. So, a wide range of different NLP tasks, and for each there's a previous state-of-the-art system. They produced their own baseline, which is commonly similar to the previous state of the art, but usually actually a bit worse than the current state of the art, because it's whatever simpler, cleaner system they came up with. But then they could say in each case: oh, just take this system and add ELMo vectors into the hidden representations in the middle, and have those help you predict. And in general, in all cases, that's giving you about a three percent or so absolute gain, which was producing this huge performance increase, which in all cases moved the performance well above the previous state-of-the-art system. So this then made it seem like magic pixie dust, because in the stakes of NLP conference land, a lot of people used to try to come up with a paper for the next year that's one percent better on one task, write it up, and that's their big breakthrough for the year to get their new paper out. And the idea that there's just this one way of creating context-sensitive word representations, and you use them in any task and they'll give you around three percent and take you past the state of the art: this seemed like really great stuff. And so people got very excited about this, and it won the Best Paper Award at the NAACL 2018 conference.
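Here's a minimal sketch of that layer-weighting recipe (my own illustrative code, assuming the frozen biLM's layer activations are already computed; the shapes are made up). The only trained ELMo mixing parameters are the softmax-normalized per-layer weights s_j and the task-specific scale gamma, giving ELMo_t = gamma * sum_j s_j h_{t,j}:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_layers, seq_len, dim = 3, 6, 1024      # e.g. token layer plus 2 biLSTM layers

# Frozen, precomputed biLM activations: one vector per (layer, position).
lm_layers = torch.randn(num_layers, seq_len, dim)

# The trained mixing parameters: per-layer weights and a task-specific scale.
layer_logits = nn.Parameter(torch.zeros(num_layers))
gamma = nn.Parameter(torch.ones(1))

s = torch.softmax(layer_logits, dim=0)     # normalized layer weights s_j
# Weighted average over layers, then scaled: shape (seq_len, dim).
elmo = gamma * (s[:, None, None] * lm_layers).sum(dim=0)

print(elmo.shape)                          # torch.Size([6, 1024])
```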
And then, as I vaguely mentioned, the model they actually used wasn't a deep stack; there were actually only two layers of biLSTMs. But they do show this interesting result: the lower level better captures low-level syntax and word properties, and is most useful for things like part-of-speech tagging, syntactic dependencies, and NER, whereas the top layer of their language model is better for higher-level semantics, and is more useful for things like sentiment, semantic role labeling, and question answering. That seemed interesting, though it would be interesting to see how it panned out if you had more layers to play with. Okay. ELMo, done. So, moving right ahead: here's something else I thought I should mention a little, another piece of work that came out around the same time in 2018, which is this work on Universal Language Model Fine-tuning for text classification, or ULMfit, by Howard and Ruder. Essentially this had the same general idea: what we want to do is transfer learning, where we learn a big language model, and then for our target task (which might be named entity recognition, but here is text classification) we can transfer this language model information and use it to do better at the task. And so they proposed an architecture to do that. Their architecture was this: you have a big unsupervised corpus, from which you train a neural language model; they used a deeper neural language model with three hidden layers. You then fine-tune your neural language model on the actual domain that you're interested in working in; that was an extra stage that they did. And then finally, you introduce your classification objective. What they're going to be doing is making text classifiers, so we now want to take this model and turn it from a language model into a text classifier. But there's something they did differently, which in some sense foreshadows the later work with transformers. Rather than just feeding features from this into a completely different network, they keep using the same network, but they introduce a different objective at the top. One thing you could do with this network is use it to predict the next word, as a language model. At this point, they freeze the parameters of that softmax at the top (that's why it's shown in black), but instead they stick on a different prediction unit, which predicts things for a particular task. It might be predicting positive or negative sentiment in a text classification task, or something like that. So in their model, they're reusing the same network but sticking a different layer on top of it to do the new classification task; there's a minimal sketch of this idea below. They were also interested in keeping things small, the sort of one-GPU model of research, and the paper has a lot of detail on the tricks and the care and feeding of your neural models to maximize performance; if you're interested in that, you can look up some of the details. But what they were able to show, again, was that making use of this language model pre-training was a very effective way to improve performance, this time for text classification.
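A minimal sketch of that head-swapping idea (my illustration, simplified, not the actual ULMfit code, which has many more tricks such as concat pooling and gradual unfreezing): the same network body serves first as a language model and then, with the LM softmax frozen and a new layer bolted on top, as a text classifier:

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden_dim, num_classes = 1000, 50, 100, 2

# Shared body: trained first as a language model, then reused.
embed = nn.Embedding(vocab_size, emb_dim)
lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=3, batch_first=True)
lm_head = nn.Linear(hidden_dim, vocab_size)    # next-word prediction head

# Final stage: freeze the LM head, add a classification head on the same body.
for p in lm_head.parameters():
    p.requires_grad = False
clf_head = nn.Linear(hidden_dim, num_classes)  # e.g. positive / negative

tokens = torch.randint(0, vocab_size, (1, 12))
hidden, _ = lstm(embed(tokens))                # (1, 12, hidden_dim)
logits = clf_head(hidden[:, -1])               # classify from the final state
print(logits.shape)                            # torch.Size([1, 2])
```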
So, these are text classification datasets: IMDb is for sentiment, TREC is for topical text classification. Again, there are preceding systems that other people developed, and they show that by making use of this language model pre-training, they're able to significantly improve on the state of the art for these error rates (so low is good). They also showed another interesting result, which is what you would expect or hope from doing this kind of transfer learning: if you can train the neural language model on a big amount of data, you will then be able to do well on your supervised task even when it's trained on pretty little data. Here this is error rate, so low is good, and this is the number of training examples, on a log scale. The blue line is if you're just training a text classifier from scratch on supervised data: you need a lot of data to start to do pretty well. But if you're making use of this transfer learning from a pre-trained language model, you're doing pretty well with way fewer training examples; essentially, an order of magnitude fewer training examples will give you the same amount of performance. And the difference between the other two lines corresponds to the extra phase they had in the middle, namely whether you do this extra fine-tuning on your target domain as part of your process, and they found that to be pretty helpful. Okay. So that is another precursor. And one big part of what has happened since then is that effectively people said: this is a good idea; maybe it'll become a really, really good idea if we just make things way bigger. ULMfit was something you could train in one GPU day (sounds appealing for CS224N final projects; remember that). But then the people at OpenAI decided: well, we could build a pre-trained language model and train it on a much larger amount of data, with a much larger amount of compute, using about 242 GPU days, and it will get a lot better. And it did. And then the people at Google said: well, we could train a model for 256 TPU days, which means maybe about double the amount of computation (it's hard to figure out exactly), and that might be able to do exciting things. That was the BERT model, and it did. And then, if you're following along with these things, just last week the OpenAI people said: well, we can go much bigger again, and we can train a model for approximately 2,000 TPU-version-3 days, and it will be much better again. And so this is the GPT-2 language model, which OpenAI released last week. And they show actually very impressive results: if you build a really, really huge language model over a very large amount of data, and then say, language model, go off and generate some text on this particular topic, it can actually do a great job of producing text. The way this was done was that a human writes a couple of sentences: in a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains.
And so then, using our neural language model and chugging through that, so that gives us context, we say: generate more text. And it starts to generate: the scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. It produces remarkably good text, or at least in the hand-picked examples [LAUGHTER] that they showed in the tech news, it produces extremely good text. Yeah, so I think one should be a little bit cautious about that, and some of its random outputs actually aren't nearly as good, but nevertheless, you know, I think it is actually dramatic how good language models are becoming once you're training them on long contexts, as we can do with modern models, on vast amounts of data. So then the OpenAI people decided this language model was so good that they weren't gonna release it to the world, which then got transformed into headlines of: Elon Musk's OpenAI builds artificial intelligence so powerful, it must be kept locked up for the good of humanity. [LAUGHTER] With the suitable pictures that always turn up at these moments down the bottom of the screen. And, yeah, I guess that was leading even Elon Musk to want to clarify and say that it's not actually the case that he's directing what's happening at OpenAI anymore. Anyway, moving right along. So, part of the story here is just a scaling thing, that these things have been getting bigger and bigger, but the other part of the story is that all three of these are systems that use the transformer architecture. And transformer architectures have not only been very powerful, but technically have allowed scaling to much bigger sizes. So to understand some of the rest of these, we should learn more about transformers. And I'm gonna do that, but, in a mix of orders, our invited speaker coming Thursday is one of the authors of the transformer paper, and he's gonna talk about transformers. So I think what I'm gonna do is say a little bit about transformers quickly, but not really dwell on all the details, and hope that it's a bit of an introduction; you can find out more on Thursday about the details, and then I'll talk some more about the BERT model before finishing. So the motivation for transformers is essentially that we want things to go faster so we can build bigger models, and the problem, as we mentioned, for these LSTMs, or in general any of the recurrent models, is the fact that they're recurrent: you have to generate one state at a time, chugging through the sequence, and that means you just can't do the same kind of parallel computation that GPUs love, which you can do in things like convolutional neural networks. But, you know, on the other hand, we discovered that even though these gated recurrent units like LSTMs and GRUs are great, to get really great performance out of these recurrent models we had a problem with these long sequence lengths, and we could improve things by adding attention mechanisms. And so that led to the idea of: well, since attention works so great, maybe we can just use attention, and we can actually get rid of the recurrent part of the model [NOISE] altogether.
And so that then leads to the idea of these transformer architectures, and the original paper on this is actually called Attention Is All You Need, which reflects this idea of: we're gonna keep the attention part, we're going to get rid of the recurrent part, and we'll be able to build a great model. So in the initial work, what they're doing is machine translation, kind of like the neural machine translation with attention we described, but what they're wanting to do is build a complex encoder and a complex decoder that work non-recurrently and, nevertheless, are able to translate sentences well by making use of lots of attention distributions. And so I wanted to say a little bit about that quickly, and hopefully we'll get more of this on Thursday. First, as a recommended resource, if you want to go home and learn more about the transformer architecture, there's this really great bit of work by Sasha Rush called The Annotated Transformer, which goes through the entire transformer paper accompanied by PyTorch code in a Jupyter notebook, and so that can actually be a really useful thing. But I'll go through a little bit of the basics now of how we do things. So the basic idea is that they're going to use attention everywhere to calculate things. We talked before about the different kinds of attention: the multiplicative bilinear attention and the little feed-forward-network additive attention. They go for the simplest kind of attention, where the attention is just dot products between two things. But, for various purposes, they do the more complicated version of dot products between two things, where the things that they're looking up are assumed to be key-value pairs, keys and values, and so you calculate the similarity as a dot product between a query and the key, and then, based on that, you're going to be using the vector for the corresponding value. So our equation here for what we're calculating is: you take a softmax over query-key similarities and use that to give the weightings, as an attention-based weighting over the corresponding values. So that's the basic attention model. Saying it that way adds a little bit of complexity, but for the simplest part, for their encoder, actually all of the queries, keys, and values are exactly the same: they are the words that they're using as their source-language inputs. So it adds some complexity that isn't really there. Okay, I'll skip that. There are a couple of other things that they do. One thing that they note is that the values you get from the query-key dot products grow in variance as the dimension gets large, so they do some normalization by the size of the hidden state dimension, but I'll leave those details out as well. Right, so in the encoder, everything is just our word vectors; they are the queries, the keys, and the values, and we're gonna use attention everywhere in the system.
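In code, that basic equation is tiny (a sketch of mine, not the paper's implementation; shapes are illustrative):

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
import torch
import torch.nn.functional as F

def dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_kv, d_k), V: (n_kv, d_v)."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # similarity of each query to each key
    weights = F.softmax(scores, dim=-1)            # attention distribution per query
    return weights @ V                             # weighted average of the values

# In the encoder's self-attention, queries, keys, and values all come
# from the same word representations:
x = torch.randn(5, 64)                 # 5 words, 64-dimensional vectors
out = dot_product_attention(x, x, x)   # (5, 64)
```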
Okay. So the second new idea is: well, attention is great, but maybe it's bad if you only have one attention distribution, because you're only gonna attend to things one way. Maybe for various purposes it would be great if you could attend from one position to various things. So, think about syntax and what we did with dependency parsers: if you're a word, you might want to attend to your headword, but you might also want to attend to your dependent words. And if you happen to be a pronoun, you might want to attend to what the pronoun refers to. You might want to have lots of attention. So they introduced this idea of multi-head attention. What you're doing with multi-head attention is: you have your hidden states in your system, and you map them via projection layers, which are just multiplications by different W matrices, as linear projections into different lower-dimensional spaces, and then you use each of those to calculate dot-product attention, so you can attend to different things at the same time. And this multi-head attention was one of the very successful ideas of transformers that made them a more powerful architecture. Okay. So then, for our complete transformer block, we're starting to build complex architectures like we started seeing the other week. Starting from our word vectors, we're going to do attention to multiple different things, and we simultaneously have a residual connection that short-circuits around the attention. We then sum the two of these, and then do a normalization at that point. I talked previously about batch normalization; they don't do batch normalization, they do another variant, which is layer normalization, a different way of doing normalization, but I'll skip that for now. And then, for one transformer block, after the multi-head attention you put things through a feed-forward layer, which also has a residual connection; you sum the output of those, and you then again do another layer normalization. So this is the basic transformer block that they're gonna use everywhere. And to make their complete architectures, they then start stacking these transformer blocks to produce a very deep network. In some sense, what has been found is that transformers perform very well. But, you know, there's no free lunch: you're now no longer getting recurrent information actually being carried along a sequence. You've got a word at some position which can be casting attention on other words. So if you'd like to have information carried along in a chain, you've first of all gotta walk the first step of the chain, and then you need another layer vertically which can walk the next step of the chain, and then another layer vertically that walks the next step of the chain. So you're getting rid of the recurrence along the sequence, but you're substituting some depth to allow things to walk along multiple hops. Nevertheless, that's highly advantageous on GPU architectures, because it allows you to use parallelization to calculate everything at each depth at the same time. Maybe I'll go light on explaining this as well: they use byte-pair encodings. But if you do nothing else, you just have words fed in as word vectors, and you have no idea whether you're at the beginning of the sentence or at the end of the sentence, so they have a method of doing positional encoding, which gives you some idea of the position your word has in the sentence. Okay. So that's the encoder system.
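Putting those pieces together, one block of that stack might look like this (a rough sketch, assuming the paper's base sizes of 512 dimensions, 8 heads, and a 2048-wide feed-forward layer; all names here are mine):

```python
# One transformer block: multi-head attention via per-head linear projections,
# residual connection plus layer norm, then feed-forward plus residual and norm.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformerBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.W_q = nn.Linear(d_model, d_model)  # projections for all heads at once
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))

    def forward(self, x):                       # x: (seq, d_model)
        n = x.size(0)
        # Split into heads: (n_heads, seq, d_head), one lower-dim space per head.
        q = self.W_q(x).view(n, self.n_heads, self.d_head).transpose(0, 1)
        k = self.W_k(x).view(n, self.n_heads, self.d_head).transpose(0, 1)
        v = self.W_v(x).view(n, self.n_heads, self.d_head).transpose(0, 1)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        heads = F.softmax(scores, dim=-1) @ v   # each head attends differently
        heads = heads.transpose(0, 1).reshape(n, -1)
        x = self.norm1(x + self.W_o(heads))     # residual + layer norm
        return self.norm2(x + self.ff(x))       # feed-forward, residual, norm

y = TransformerBlock()(torch.randn(10, 512))    # 10 word vectors in, 10 out
```

Stacking this module n times gives the deep encoder described next.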
So from the words, they have an initial word embedding, you add in the positional encoding, you go into one of these transformer blocks, and you then repeat that n times, so you'll have a stack of these transformer blocks. You're multiple times doing multi-head attention to other parts of the sentence, calculating values, feeding forward a value, putting it through a fully connected layer, and then you just repeat: do attention to different places in the sentence, get all your information, put it through a fully connected layer, and proceed up deeply. And that sounds a little mysterious, but it turns out to work just great. The way to think about it, I think, is that at each stage you can look with your multi-headed attention at various other places in the sentence, accumulate information, and push it up to the next layer. And if you do that half a dozen times, you can progressively push information along the sequence in either direction to calculate values that are of interest. The interesting thing is that these models turn out to work really well at learning to attend to interesting things in linguistic structure. So these are just suggestive diagrams, but this is looking at layer five of the transformer stack and seeing what words are being attended to by different attention heads, where the different colors correspond to different attention heads. And the sentence is: "It is in this spirit that a majority of American governments have passed new laws since 2009 making the registration or voting process more difficult." What we see is that most of the attention heads are looking from "making" to "more difficult", and that seems to be useful. One of the attention heads seems to be looking at the word itself, which might be okay. Then the other ones are looking a bit at "laws" and at "2009". So it's picking out the arguments and modifiers of "making" in a syntax-like way. Interestingly, for pronouns, attention heads appear to learn to be able to look back to the referent. So in "The law will never be perfect, but its application should be just", one attention head, for "its", is looking at what "its" is modifying, the application. But in another attention head, the "its" is looking strongly at what "its" refers back to, the law. So that seems kind of cool. Okay. And so then, for the rest of the model, there's some more complexity in how to use the transformer decoder to give you a full neural machine translation system. But I think maybe I will skip that and go on and say a bit about BERT in my remaining minutes. Okay. So, the latest and greatest contextual word representations to help you with all your tasks have been these BERT vectors, where BERT is Bidirectional Encoder Representations from Transformers. Essentially, it's using the encoder from a transformer network, this deep multi-headed attention stack, to calculate a representation of a sentence, and saying: that's a great all-purpose representation of a sentence that you can use for tasks, be it named entity recognition or SQuAD question answering. And there's actually an interesting new idea that these people had. Their idea was: well, standard language models are unidirectional, and that's useful because it gives you a proper probability distribution, a true language model.
But it's bad because you'd like to be able to do prediction from both sides to understand word meaning in context. There's a second choice, which is that you can kind of do bidirectional models, where you incorporate information in both directions. But that has problems as well, because then you get crosstalk: if you run a biLSTM, merge the representations by concatenation, and feed them into the next layer, then when you're running the next layer, the forward LSTM will have already gotten information about the future from the first layer. So it ends up with words that have, in effect, already seen themselves, and you have this sort of complex, non-generative model. So they wanted to do things a bit differently, so that they can have bidirectional context without words being able to see themselves. And the idea that they came up with is: we're gonna train things with a transformer encoder, but what we're gonna do is mask out some of the words in the sentence; for example, maybe we'll mask "store" and "gallon" here. And then our language-modeling-like objective will no longer be a true language model that's generating a probability of a sentence, which is standardly done by working from left to right, but will instead be a Mad Libs-style fill-in-the-blank objective. So you'll see this context, which will be literally: "The man went to the [MASK] to buy a [MASK] of milk." And your training objective is to try and predict what this word is, which you can do with a cross-entropy loss to the extent that you don't guess "store". And then it will be trying to guess what this other word is, and you want it to guess "gallon". So you're training a model to fill in these blanks. The rate at which they blank out words is essentially one word in seven, and they discuss how this is a trade-off: if you blank too few words, it gets very expensive to train, and if you blank too many words, well, you've blanked out most of the context of a word, which means it's not very useful for training, and they found about one in seven seemed to work pretty well for them. But what they want to argue is that, for OpenAI's GPT, which is also a transformer model, it's a classic language model working from left to right, and so you only get left context. For the ELMo language model that's shown at the top, well, they're running a left-to-right language model and they're running a right-to-left language model, so in some sense they have context from both sides. But these two language models are trained completely independently, and then you're just concatenating their representations together. So there's no sense in which you're actually having a model that's jointly using context from both sides at the time that the pre-trained contextual word representations are built. So their hope is that, inside a transformer model, this trick of blanking out words and predicting them using the entire context will allow them to use two-sided context and be much more effective. And that's what they seem to show.
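As a toy version of that training objective (illustrative only; the encoder below is a tiny stand-in for BERT's transformer, and the mask id and vocabulary are made up):

```python
# Masked-language-model objective: blank out roughly one word in seven and
# train the model to predict the blanked words from context on both sides.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MASK_ID = 1000, 0                 # hypothetical vocabulary and [MASK] id
# Stand-in for BERT's encoder: any model producing per-position word logits.
encoder = nn.Sequential(nn.Embedding(VOCAB, 64), nn.Linear(64, VOCAB))

tokens = torch.randint(1, VOCAB, (8,))   # "the man went to the store ..." as ids
mask = torch.rand(8) < 0.15              # choose ~15% of positions to blank
inputs = torch.where(mask, torch.full_like(tokens, MASK_ID), tokens)

logits = encoder(inputs)                 # (seq, vocab)
if mask.any():                           # cross-entropy loss only on the blanks
    loss = F.cross_entropy(logits[mask], tokens[mask])
```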
There's one other complication, which I'll show now; it's a bit useful, but it's not really essential to their main idea. One of the goals in their head was clearly to have this be useful for things like question answering tasks or natural language inference tasks, and the relationships between two sentences. So their idea was: well, one good objective is this fill-in-the-blank word objective, which is sort of like a language modeling objective, but they thought it would be useful to have a second objective where you're predicting relationships between sentences. So they secondly have a loss function where you take two sentences, which might be two successive sentences in the text, or a sentence followed by a random sentence from somewhere else, and we want to train the system to predict when it has seen a correct next sentence versus a random sentence. So you're also training a loss based on this next-sentence-prediction task. It'll be something like: "The man went to the store. He bought a gallon of milk." You're meant to predict true, that is the next sentence. "The man went to the store. Penguins are flightless." You're meant to say false, this isn't the next sentence. And so they're simultaneously also training with this objective. So what they end up with looks like this. For the input, they'll have a pair of sentences: "my dog is cute", separator, "he likes playing". The words are represented as word pieces, like we talked about last week, so there's a token embedding for each word piece. Then there's a positional embedding for each word piece, which is going to be summed with the token embedding. And then finally, there's a segment embedding for each word piece, which is simply whether it comes from the first sentence or the second sentence, before or after the separator. So you're summing those three things together to get the token representations, and then you're going to use those in a transformer model, where you will have losses to the extent that you can't predict the masked words, plus your binary prediction as to whether there's a correct next sentence or not. That's the training architecture.
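In code, that three-way sum of embeddings is just this (a sketch; the sizes match BERT-Base's 768 dimensions, but the token ids here are invented for illustration):

```python
# BERT's input representation: token embedding + position embedding
# + segment embedding, summed element-wise.
import torch
import torch.nn as nn

vocab, max_len, d = 1000, 128, 768
tok_emb = nn.Embedding(vocab, d)
pos_emb = nn.Embedding(max_len, d)
seg_emb = nn.Embedding(2, d)             # 0 = first sentence, 1 = second

# "my dog is cute [SEP] he likes playing" as hypothetical word-piece ids:
tokens = torch.tensor([5, 6, 7, 8, 1, 9, 10, 11])
segments = torch.tensor([0, 0, 0, 0, 0, 1, 1, 1])  # before/after the separator
positions = torch.arange(tokens.size(0))

x = tok_emb(tokens) + pos_emb(positions) + seg_emb(segments)  # (8, 768)
```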
Okay. So, it's a transformer as before, and it's trained on Wikipedia plus the BookCorpus. And they built two models. The BERT-Base model was a twelve-layer transformer, and this corresponded to what the previous transformer paper had used, right? Those two-layer transformer blocks repeated six times gave you 12 layers, with 768-dimensional hidden states and 12 heads for the multi-head attention. And then they went bigger and trained BERT-Large, which is double the number of layers, with bigger hidden states and even more attention heads, and they trained these on pods of TPUs. So, first of all, you're training on this basis of masked words and next-sentence-or-not. Then what they wanted to say was: we could take this pre-trained model, evaluated on these losses, the masked language model and next sentence prediction, and it would be incredibly useful for various different tasks. We could use it for named entity recognition, question answering, natural language inference, et cetera. And the way we're going to do it is kind of the same thing as the ULMfit model did. We're not just going to say, here's a contextual word representation, like ELMo did. Instead, what we're gonna say is: just keep on using this transformer network that we trained as a sort of language model, but fine-tune it for a particular task. So you're now going to run this transformer, calculating representations for a particular task, and what we're going to change is: we remove the very top-level prediction, the bits that predict the masked language model and next sentence prediction, and we substitute on top a final prediction layer that's appropriate for the task. So if our task is SQuAD question answering, our final prediction layer will be predicting start-of-span and end-of-span, kind of like when we saw DrQA a couple of weeks ago. If what we're doing is the NER task, our final prediction layer will be predicting the named entity recognition class of each token, just like a standard NER system.
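A minimal download-and-fine-tune sketch of that recipe (the Hugging Face transformers package is my choice for getting a pre-trained BERT, not something named in the lecture, and the head here is an assumption for an NER-style task):

```python
# Keep the pre-trained encoder, drop the pre-training heads, and add a small
# task head; fine-tuning then trains everything end to end.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")  # pre-trained 12-layer encoder
ner_head = torch.nn.Linear(768, 9)                     # new layer, e.g. 9 NER tags

ids = tokenizer.encode("Jim bought shares in Acme Corp.", return_tensors="pt")
hidden = bert(ids)[0]                                  # (1, seq, 768) per-token states
logits = ner_head(hidden)                              # per-token NER class scores
```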
Okay, and so they built this system and tested it on a whole bunch of datasets. One of the main things they tested on was this GLUE dataset, which has a whole bunch of tasks. A lot of the tasks are natural language inference tasks, and I've kept saying that phrase all of this lecture, but I haven't really defined it. With natural language inference, you're given two sentences, a premise like: "Hills and mountains are especially sanctified in Jainism," and then a hypothesis: "Jainism hates nature." And what you're meant to say is whether the hypothesis follows from the premise, contradicts the premise, or has no relation to the premise. So that's a three-way classification, and here it contradicts the premise. There are various other tasks, such as this linguistic acceptability task. But if we look at these GLUE tasks, these are showing the pre-OpenAI state of the art, how well ELMo works, how well OpenAI GPT works, and then how well the small and large BERT models work. And effectively, what you're finding is that the OpenAI GPT was, you know, pretty good. It actually showed good advances on most of these tasks; for many, but not all, of them it broke the previous state of the art, showing the power of these contextual language models. But the bidirectional form of BERT's prediction just seemed much better again. So, going from this line to this line, you're getting, depending on the task, about two percent better performance. And the BERT people actually did their experiments carefully, so these models are pretty comparable in terms of size, but the bidirectional context seems to really help. Then what they found was, well, by going to just a bigger model again, you could get another big lift in performance, so you're getting, for many of the tasks, about another two percent lift in performance going to the bigger model. So this really produced super-strong results, and in general, people have found that BERT continues to give super-strong results. If I return back to my CoNLL NER task, we had ELMo giving you 92.2, and you continue to get gains: BERT-Base gets you to 92.4, and BERT-Large takes you to 92.8. Though, in truth, there is now a system that beats BERT-Large on NER, which is actually a character-level language model from Flair. But, you know, this continued over to a lot of other things. So, on SQuAD 1.1, BERT immediately just outperformed everything else that people had been working on for SQuAD for ages. In particular, what was especially dramatic was that a single BERT model beat everything else that had been done previously on SQuAD version 1.1, even though they could also show that an ensemble of BERT models could give further good performance gains. And as I've mentioned before, essentially if you look at the SQuAD 2.0 leaderboard, all of the top-ranked systems are using BERT one place or another. So that led into this sort of new world order: it seems like the state of NLP now is that if you want to have the best performance, you want to be using these deep pre-trained transformer stacks. And this is making NLP more like vision, because vision for five years has had these deep pre-trained neural network stacks, like ResNets, where for most vision tasks what you do is take a pre-trained ResNet and then fine-tune a layer at the top to do some classification task you're interested in. And this is now starting to be what's happening in NLP as well: you can do the same thing by downloading your pre-trained BERT and fine-tuning it to do some particular task. Okay, that's it for today, and more on transformers on Thursday. [NOISE]
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_7_Vanishing_Gradients_Fancy_RNNs.txt
Hi, everyone. I'm Abby. If you weren't here last week, I'm the head TA of this course, and this is the second [NOISE] of three lectures that I'm going to be giving on RNNs and related topics. Okay. So, welcome to week four. Today, we're going to be learning about vanishing gradients and some more complex types of RNNs. Before we get started, I've got a few announcements. The first announcement is that assignment four is released today; it's due Thursday of next week, not Tuesday, so that means you have two more days to do it than you did for all the other homeworks. And the reason for that is that assignment four is probably more work than the other homeworks so far, so don't be surprised by that. Assignment four is all about neural machine translation. We're gonna learn about NMT in Thursday's lecture this week. And this is really exciting, because actually CS224N has never had an NMT assignment before, so this is all new this year, and you're gonna be the first year of students who are going to be doing an NMT assignment. Something else that's different about assignment four is that you're going to be using Azure, which is a cloud computing service, in order to train your NMT systems on a virtual machine with a GPU, and this is necessary in order to be able to do it in a reasonable amount of time. So, I have a warning, which is: if you're a person who perhaps doesn't have a lot of experience working on remote machines, for example if you're not very familiar with SSH, or tmux, or remote text editing, then I advise you to budget some extra time for assignment four, because that's probably gonna take you a little while to set up and get used to. So, again, I'm going to emphasize: do get started early on assignment four, because the NMT system takes about four hours to train on your virtual machine, so you really can't start it the night before and expect to get it in on time. And assignment four is really quite a lot more complicated than assignment three, so don't get lulled into a false sense of security if you found assignment three easy. Thursday's slides on NMT are already on the website today, so you can even start looking at them today if you wanna get started on assignment four early. I have a few more announcements on the subject of projects. Next week's lectures are going to be all about projects, so you're going to hear about question answering and the default final project, and then you're also gonna get some tips about how you might choose and define your own custom project. So, it's fine if you're not thinking about a project this week, that's okay; you can wait until next week to start thinking about it for the first time. But if you are a person who is already thinking about your project, for example if you're trying to choose your custom project, then you should check out the website's project page, because it has quite a lot of information about how to choose your project, and also some inspiration. That includes some project ideas we've collected from various members of the Stanford AI Lab. These are faculty and PhD students and postdocs who have ideas for NLP deep learning projects that they would like CS224N students such as yourself to work on. So, especially if you're looking to maybe get into research later, this is a really great opportunity to work with someone in the Stanford AI Lab, and maybe get some mentorship as well.
Okay. So here's an overview. Last week, we learned about recurrent neural networks, and we learned about why they're really great for language modeling. Today, we're gonna learn about some problems with RNNs and about how to fix them, and this is gonna motivate us to learn about some more complex RNN variants. Then, next lecture on Thursday, we're going to have some more application-based content: we're going to be learning about neural machine translation, which is a really important task in NLP and deep learning, and in particular we're gonna learn about this architecture called sequence-to-sequence with attention. In more detail, in today's lecture, first we're going to learn about the vanishing gradient problem, and this is gonna motivate us to learn about two new types of RNN, called Long Short-Term Memory and Gated Recurrent Units. We're also going to learn about some other, kind of miscellaneous, fixes for the vanishing gradient problem, or the exploding gradient problem. In particular, we're going to learn about gradient clipping, which is fairly simple but quite important, and we're also going to learn about skip connections, which is a fairly new neural architecture idea that tries to fix the vanishing gradient problem. [NOISE] Then, at the end of the lecture, we're gonna learn about some more fancy RNN variants, such as bidirectional RNNs, those are the ones which go not just left to right but also right to left, and we're going to learn about multi-layer RNNs, and that's when you stack multiple RNNs on top of each other. So, there are a lot of important definitions today, and you're gonna find that the information in this lecture is pretty important for assignment four, and probably for your project as well. Okay. So, let's get started thinking about vanishing gradients. Here we have an RNN with, let's say, four steps, and suppose that we have some kind of loss, J4, that's computed based on the fourth hidden state. Let's suppose we're interested in asking: what is the derivative of this loss J4 with respect to the hidden state h1, the first hidden state? I'm representing that with this blue arrow notation, to kind of represent how we have to make the gradients flow backwards in order to compute this. So, if we're interested in what this gradient is, we can apply the chain rule and say, well, it's the product of the gradient of the loss with respect to h2, and then the gradient of h2 with respect to h1. And then, similarly, we can decompose that again using the chain rule, and we can do it again. So, what we've done here is we've decomposed the gradient that we were interested in into the product of these various intermediate gradients, and in particular we're seeing all these dht by dht-minus-one terms, the gradients between adjacent hidden states. So, the thing I want to ask you is: what happens if these gradients are small? Given that there's a lot of them, what happens if they're small in magnitude? The overall point of the vanishing gradient problem is that when these gradients are small, our overall gradient is gonna get smaller and smaller as it back-propagates further, because the accumulated gradient is the product of all of these intermediate gradients, and when you multiply something by something small, the whole thing gets smaller.
So, that's what I'm representing here with these smaller and smaller blue arrows going backwards. That's the general idea of the vanishing gradient problem; here's a slightly more formal definition. If you remember from last time, if we have a vanilla RNN, then the hidden state ht is computed as a function of the previous hidden state ht-1 and the current input xt. You might remember that in the previous lecture we said the xt were one-hot vectors representing words, and et was the embedding; this lecture, we're going to be getting rid of that detail, and we're just gonna be thinking very abstractly about an RNN that has some kind of input xt, where xt is just any kind of vector, probably a dense vector, but, you know, it could be words or not, it could be one-hot or dense; it's just the input. So, that's the definition that we learned last time for vanilla RNNs. This means that the derivative of ht, the hidden state on step t, with respect to the previous hidden state, is this expression here. This is just an application of the chain rule, and if you look at it long enough, or refer back to the backprop lecture, you'll see that that makes sense. In particular, we're multiplying by Wh at the end because we have the multiplication of Wh and ht-1 on the inside. Okay. So, if you remember, on the previous slide we were thinking about: what's the gradient of the loss on some step, step i let's say, with respect to a hidden state hj on some previous step j, where maybe j is quite a few steps before i. We can now write this in the following way. Just by applying the chain rule, on the first line we're saying that this derivative that we're interested in can be decomposed into the derivative with respect to step i, which is kind of the last step, and then all of those intermediate gradients of the adjacent hidden states as well. So that first line is exactly the same thing as we were looking at in the diagram on the previous slide. Okay. And then, given that we figured out what dht by dht-1 is, further up the slide, we can just substitute that in. So, what we're finding is that this overall gradient that we're interested in has, in particular, this term where the weight matrix Wh is multiplied by itself i-minus-j times, because there are i-minus-j many steps between step j and step i, which is the distance that we're traveling with this gradient. So, the big problem here is: if this weight matrix Wh is small, then this term is gonna get vanishingly small, exponentially small, as i and j get further apart. To give this a little more detail, we can think about the L2 matrix norms of all of these matrices, right? It's a known fact of L2 norms that you have this inequality: the norm of the product of some matrices is less than or equal to the product of the norms of the matrices. So, in particular, we're seeing that the norm of this gradient that we're interested in is less than or equal to the product, i-minus-j many times, of the norm of the weight matrix Wh, along with the other terms. So, this is what we mean when we say we're concerned about Wh being small, because if it's small, then the thing on the left has to be exponentially small.
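Written out (this is my reconstruction of the slide's formulas from the verbal description, in the course's notation), the decomposition and the bound are:

```latex
\frac{\partial J^{(i)}}{\partial \boldsymbol{h}^{(j)}}
  = \frac{\partial J^{(i)}}{\partial \boldsymbol{h}^{(i)}}
    \prod_{j < t \le i} \frac{\partial \boldsymbol{h}^{(t)}}{\partial \boldsymbol{h}^{(t-1)}}
  = \frac{\partial J^{(i)}}{\partial \boldsymbol{h}^{(i)}}
    \prod_{j < t \le i}
    \operatorname{diag}\!\left(\sigma'\!\left(\boldsymbol{W}_h \boldsymbol{h}^{(t-1)}
      + \boldsymbol{W}_x \boldsymbol{x}^{(t)} + \boldsymbol{b}\right)\right)\boldsymbol{W}_h

\left\lVert \frac{\partial J^{(i)}}{\partial \boldsymbol{h}^{(j)}} \right\rVert
  \;\le\;
  \left\lVert \frac{\partial J^{(i)}}{\partial \boldsymbol{h}^{(i)}} \right\rVert
  \,\left\lVert \boldsymbol{W}_h \right\rVert^{\,i-j}
  \prod_{j < t \le i}
  \left\lVert \operatorname{diag}\!\left(\sigma'(\cdot)\right) \right\rVert
```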
So in particular, in this paper that you can take a look at, linked at the bottom, Pascanu et al. showed that if the largest eigenvalue of the weight matrix Wh is less than one, then this gradient on the left is going to shrink exponentially. And you can probably see intuitively why this is true: if, as a simplifying assumption, we suppose that Wh was not a matrix but simply a scalar, just a single number, then you can see why, if that number is greater than one, the whole thing is gonna explode, and if that number is less than one, it's going to shrink exponentially as you multiply by the same number again and again. You can check out the paper for more details, but here the bound is one, partially because we have the sigmoid nonlinearity, and that's based on what we know the bound on the norm of the sigmoid function's derivative to be. So this shows you why, if the Wh matrix is small, or if its largest eigenvalue is small, then we're going to have vanishing gradients. And similarly, if you check out the paper, you can see that there's a similar proof relating the largest eigenvalue being greater than one to having exploding gradients; that's when the gradients get bigger and bigger as you backprop further. Okay. So hopefully I've convinced you that vanishing gradients is a phenomenon that happens in RNNs, but I haven't yet said why this is a problem. Why should we view it as a bad thing if the gradients are getting larger and larger, or smaller and smaller, as you backprop? Here's a picture that might illustrate why it's a bad thing. As before, suppose that we're thinking about the derivative of the loss on the fourth step with respect to the first hidden state, and we have this situation where the gradient is getting smaller and smaller as it goes backwards. But then think about the gradient of, let's say, the loss on the second step, also with respect to the first hidden state; I'm representing that with the orange arrows. And my point here is that the magnitude of the gradient signal from close by is a lot bigger than the magnitude of the gradient signal from far away. This means that when you update your model weights, the signal that you're getting from close by is gonna be so much bigger than the signal from far away that, essentially, you're only going to learn, you're only going to optimize, with respect to these nearby effects and not the long-term effects. So you're gonna lose the long-term effects amid the nearby effects. Any questions about this? Yeah? So, when you do the actual updates, there are actually multiple chains contributing, not just one chain, so the nearer terms should cover it. Sorry, what's the last part? The nearer terms should have a larger effect, considering you're updating with the sum over different chains. Okay. So I think the observation was that, given that, for example, in language modeling you might be summing over multiple losses, there's a loss on every step and you sum all of them and that's your overall loss, then you do want to update more with respect to the nearby losses than the far losses. So I think, yeah, if the design of your objective function is that it's the sum of the loss on every step, then you do want to weight all of them equally.
I think my point was more about: what is the influence of the action of the weight matrix at this early stage? What is its influence on a loss that's nearby, and what is its influence on a loss that's far away? Due to the dynamics of how the vanishing gradient problem works, the influence on the loss that's far away is gonna be much less than the influence nearby. And I'm gonna give some more linguistic examples later of why you might want to learn the connections that are farther away. So essentially the problem is: in situations where you do want to learn the connection between something that happens early and something that happens later, you're going to be unable to learn that connection. We'll see some motivating examples in a minute. Any other questions on this? Yeah? I'm getting confused about why you're talking about dJ by dh; h is an activation, not a parameter, so are we updating it? Okay, that's a great question. So you're asking: why are we interested in some kind of dJ by dh, given that we're not updating h? h is an activation, not a weight. The reason why we're thinking about that is because when you think about what dJ by dW is, which is the thing we're going to update, that's always gonna be in terms of dJ by dh at some point, right? So if we're thinking about W, and how it acts on the transmission from h1 to h2, then dJ4 by dW in that position is going to have to go through dJ4 by dh2. So if we're getting vanishing gradients as we back-propagate further, it's kind of like a bottleneck: then you're certainly going to have vanishing gradients as they affect the recurrence matrix there, and indeed the matrix that's applied to the inputs. Okay, I'm gonna move on now. Another way to explain why vanishing gradients are a problem is that you can think of a gradient as a measure of the effect of the past on the future. We've already talked about this a little bit: a gradient is like saying, if I change this weight or this activation a little bit, then how much, and how, does it affect this thing in the future? So in particular, if our gradient is becoming vanishingly small over longer distances, let's say from step t to step t plus n, then we can't tell which of two situations we're in. The first situation is that maybe there's no dependency between step t and step t plus n in the data. Perhaps we're learning on a task where there truly is no connection or relationship to be learned between what happens on step t and what happens on step t plus n. So there truly is nothing to be learned, and it's actually correct that there should be small gradients with respect to those two things. But the second possibility is that, yes, there is a true connection between those two things in the data and in the task, and ideally we should be learning that connection, but we have the wrong parameters in our model to capture it, and that is why the gradients are small: the model doesn't see them as connected. So we are not learning the true dependency between these two things. And the problem with the vanishing gradient problem is that we're unable to tell which of these two situations we're in. Okay. So this is all pretty theoretical; I think this example should make it a little more clear why the vanishing gradient problem is bad.
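Before that example, the eigenvalue condition from a couple of slides back is easy to see in a quick numerical toy (this demo is my own illustration, not from the lecture):

```python
# Repeatedly multiplying a gradient by W_h shrinks or blows it up depending on
# whether W_h's largest eigenvalue magnitude is below or above one.
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=4)                   # some upstream gradient vector

for scale, label in [(0.9, "vanishing"), (1.1, "exploding")]:
    W = scale * np.eye(4)                # largest eigenvalue = scale
    v = g.copy()
    for _ in range(50):                  # backprop through 50 timesteps
        v = W.T @ v
    # Prints a norm ~0.005x the original for 0.9, ~117x for 1.1.
    print(label, np.linalg.norm(v))
```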
So, last week we learned about RNN language models, and if you remember, language modeling is a task where you have some kind of text and you're trying to predict what word should come next. Here's a piece of text. It says: "When she tried to print her tickets, she found that the printer was out of toner. She went to the stationery store to buy more toner. It was very overpriced. After installing the toner into the printer, she finally printed her..." and can someone shout out what word you think should come next? Tickets. Tickets, yes, exactly. So that was easy for you to do, because it makes sense logically that if that was the thing she was trying to do, that's the thing she's gonna do once she's gone on the whole detour for the toner. So the question is: can RNN language models easily answer this question? Would they do well at this particular language modeling example? For an RNN language model to do well at this kind of example, it needs to learn from this kind of example in the training data. So to get this example right in the training data, the RNN language model needs to model the dependency, learn the connection, between the appearance of the word "tickets" early on, on the seventh step, and the target word "tickets" at the end. But if we have the vanishing gradient problem, then this gradient, of the last step with respect to the early step, is gonna be very small, because it's a fairly long distance, right? And this means that the model is going to be unable to learn this dependency easily, or at all. And if the model can't learn this kind of dependency during training, then the model is going to be unable to predict similar kinds of long-distance dependencies at test time. Okay, here's another example. Here's a piece of text, and this isn't a full sentence, this is just a partial sentence. It says: "The writer of the books, blank." And I'm gonna give you two options: it's either "the writer of the books is" or "the writer of the books are". So, again, shout out: which one do you think it is, "is" or "are"? Is. Is, that's right. So a correct possible continuation of the sentence would be: "the writer of the books is planning a sequel." I can't think of a continuation that goes "the writer of the books are" that would be grammatically correct. The reason why I'm bringing up this example is that it shows a kind of tension between two things called syntactic recency and sequential recency. Syntactic recency is the idea that, in order to correctly predict that the next word should be "is" rather than "are", the word "writer" is the syntactically close word here. We say "the writer of the books is" because it's the writer who is. So you can see the words "writer" and "is" as syntactically close, because if you looked at the dependency parse, for example, there would be a short path in that tree. By contrast, sequential recency is the simpler concept of how close words are just in the sentence as a sequence of words. In this example, "books" and "are" are very sequentially recent, because they're right next to each other. The reason I'm bringing this up is that the second option would be incorrect, but it's kind of a tempting option, because if you're mostly only paying attention to things that happened recently, then you might get distracted and think, "Oh, the books are, that sounds right."
So the problem here is that RNN language models are better at learning from sequential recency than syntactic recency, and this is partially due to the vanishing gradient problem. Because, especially perhaps if your syntactically related word is actually kind of far away, it can get really hard to use the information from the syntactically recent word, especially if there's a lot of strong signal from the sequentially recent word. There are some papers that show that RNN language models make this kind of error, of saying "are" rather than "is". They make this kind of error more often than you would like, especially if you have multiple of these distracting words, such as "books", in between the word you're trying to predict and the true word that you should be referring to. Okay, any questions on this? All right, moving on. So, we briefly mentioned that exploding gradients are a problem, so I'm briefly going to justify why exploding gradients are a problem and what they look like. [NOISE] The reason why exploding gradients are a problem is, if you remember, this is how SGD works: we say that the new parameters of the model, which we represent by theta, are equal to the old parameters, and then you take some step in the direction of the negative gradient, because you're trying to minimize the loss J. So the problem is: if your gradient gets really big, then your SGD update step is going to become really big too. You're going to be taking a very big step, and you're going to be drastically changing your model parameters theta. And this means that you can end up with some bad updates: we take too large a step, and we change the parameters too much, and this means that we kind of take a big step and end up in some area where the parameters are actually very bad; for example, they might have a much larger loss than they had before. In the worst case, this can manifest as seeing infinities or NaNs, not-a-number, in your network when you're training it in practice. [NOISE] This can happen because, if you take such a big step that you update your parameters so much that now they're infinity or minus infinity, or something like that, then you're gonna have all of these infinities within your activations as well, and then all of your losses are going to be infinity, and the whole thing just isn't going to work at all. It's very annoying when this happens, and unfortunately it happens fairly often. And if it does, then you have to essentially restart training from some earlier checkpoint, from before you got the NaNs and the infinities, because there's no kind of salvaging it from its new state. [NOISE] So, what's the solution to this exploding gradient problem? [NOISE] The solution is actually pretty simple, and it's this technique called gradient clipping. The main idea of gradient clipping [NOISE] is that if the norm of your gradient is greater than some threshold, and the threshold is a hyperparameter that you choose, then you want to scale down that gradient before you apply the SGD update. The intuition is: you're still gonna take a step in the same direction, but you're gonna make sure that it's a smaller step. [NOISE] So here I've got a screenshot of some pseudocode from the related paper that proposed gradient clipping, or at least some version of gradient clipping.
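In code, that rule amounts to a few lines (a sketch of the same idea; in practice PyTorch ships it as torch.nn.utils.clip_grad_norm_, called between loss.backward() and optimizer.step()):

```python
# Gradient clipping: if the total gradient norm exceeds a threshold, rescale
# all gradients so the norm equals the threshold; direction is unchanged.
import torch

def clip_gradients(parameters, threshold):
    grads = [p.grad for p in parameters if p.grad is not None]
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    if total_norm > threshold:
        for g in grads:
            g.mul_(threshold / total_norm)  # scale down, same direction
    return total_norm
```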
[NOISE] And it's pretty simple, as you can see. g-hat is the vector which is the derivative of the error with respect to the parameters, and it's saying that if the norm of this gradient is greater than the threshold, then you just scale it down. But the important thing to note is that it's still pointing in the same direction; it's just a smaller step. So, here's a picture to show how that might work out in practice, and this is a diagram from the deep learning textbook, which is also linked on [NOISE] the website. What's going on here is that the picture is the loss surface of a simple RNN. They made a very simple RNN where, instead of having a sequence of vectors as the hidden states, each hidden state is simply a single scalar. This means that instead of having a weight matrix W and a bias vector b, you have a scalar w and a scalar b. That's why, in the picture, you have this two-dimensional parameter space, and then the z-axis is your loss. So here, high loss is bad, and low loss is what you're trying to get. In this picture, you've got this kind of cliff, right, where you have this very steep cliff face where the loss changes very quickly. [NOISE] And this cliff is really dangerous, because it has steep gradients, and you might be in danger of taking a really big [NOISE] update step, because you're in an area with a really steep gradient. [NOISE] So, on the left, you've got a possible scenario of what might happen if you don't have gradient clipping. [NOISE] On the left, you can see that you start kind of at the bottom of the cliff, and you have a few small updates, and then in particular it makes a bad update; you see there's a small kind of dip before it goes off the cliff. The true local minimum, the optimum you're trying to get to, is at the bottom of that small kind of ditch. It starts off near the edge of that ditch, and there's a negative gradient going into it, but unfortunately the update kind of overshoots, and it ends up going a long way off the cliff. So now it's in this bad situation where it's taken a bad update, and now it's got a much bigger loss than it had [NOISE] before. So now that it's on the cliff, again it measures the gradient, and the gradient is very steep, right? The gradient is very large. So when it takes an update with respect to that gradient, then, because the gradient is so big, it takes a really huge step, and that's the one going to the right; you can see the step going to the right. That's also a very bad update, because it's just throwing it really far, to some probably fairly random configuration of w and b. So, on the left, you can see what can go wrong if you take these really big steps because you were in an area with a very steep gradient. By contrast, on the right, you can see what might happen if you do have gradient clipping, [NOISE] and it's much less drastic, right? You've got a similar kind of pattern, where it takes a few steps into the ditch and then ends up going off the cliff a little bit, but not too much, because the gradient was clipped. And then it's on the cliff, and there's again a really steep gradient, but it doesn't take such a big step, because again the gradient was clipped, so it kind of comes back down.
So, you can see that plausibly, by using this gradient clipping method, you've got a kind of safer update rule, where you're not gonna take any big crazy steps, and you're more likely to find the true minimum, which is at the bottom of the ditch. [NOISE] I think there was a question earlier. Was there a question over here? [NOISE] I just want to see the value. [NOISE] Okay. Anyone else? [NOISE] Yeah? [inaudible] So, the question is: in assignment three, you saw the Adam optimization algorithm, which has this thing called momentum, which essentially says that, kind of like physical momentum in the real world, if you've been traveling in the same direction for a while, then you can take bigger steps, and if you've recently changed direction, then you should take smaller steps, I think. And I think there's another element as well, where you divide by some factor. [NOISE] So it is a similar kind of idea; I suppose it's a different criterion, right? What they both have in common is that each is a kind of criterion for when to scale up or scale down the size of your update step, and I think they're based on different notions of when you should take bigger steps and when you should take smaller steps, when you should be cautious or less cautious. So I guess here the criterion is different; it's kind of a simple criterion saying, if it's really steep, then be careful. Yeah, another question? [inaudible] Okay. So the question is: is this similar to regularization of some kind? I suppose, yeah, there are some things in common. Say, for example, L2 regularization says that you want your weight matrices to have a small L2 norm, right? And the idea is that you're trying to prevent your model from overfitting the data by having some kind of constraint that says you have to keep your weights fairly simple, that is, keep them small. So I suppose the relationship is that here we're saying that we don't want the norm of the gradients to be too big. I don't know if this is related to overfitting; I guess I'd have to think more carefully about that, but it is a similar kind of constraint that you're placing. Okay, I'm gonna move on for now. So we've talked about how you might fix the exploding gradient problem with gradient clipping, but we haven't talked about how we might fix the vanishing gradient problem. To recap, I think one way to characterize the problem with vanishing gradients in RNNs is that it's too difficult for the RNN to learn to preserve information over many timesteps. In our example with printing the tickets, and remembering that it's the tickets that she wants to print, you could think of it as: it's hard for the RNN language model to correctly predict "tickets" because, in a way, it's too hard for the RNN language model to learn to retain the "tickets" information and use it later. If you look at the equation for vanilla RNNs and how we compute the hidden state based on the previous hidden state and the input, you can see that the hidden state is, in a way, constantly being rewritten: it's always computed based on these linear transformations and, you know, the non-linearity. So it's not all that easy to preserve the information from one hidden state to the other, in particular because we're putting it through this non-linearity function.
So this motivates us to ask: what about an RNN with some kind of separate memory? If we have a separate place to store information that we want to use later, would that make it easier for the RNN to learn to preserve information over many timesteps? This is the motivating idea behind LSTMs, or Long Short-Term Memory RNNs. An LSTM is a type of RNN that was proposed back in 1997, explicitly as a solution to the vanishing gradient problem. One of the main differences is that on each step t, instead of just having a hidden state h_t, we have both the hidden state h_t and a cell state, which we denote c_t. Both of these are vectors of the same length n, and the idea is that the cell stores long-term information: it's the memory unit. Another super important thing is that the LSTM can erase, write, and read information from the cell. You can think of this a bit like memory in a computer, in that you can do these operations of reading, writing, and erasing, and that's how you keep your information. The next super important thing is that the way the LSTM decides whether it wants to erase, write, or read information, and how much and which information, is all controlled by gates. The gates are themselves also vectors of length n, and on each timestep, each element of these gate vectors is somewhere between zero and one. Here, one represents an open gate and zero represents a closed gate, and you can have values anywhere in between. The overall idea, which we're going to firm up on the next slide, is that if a gate is open, information passes through, and if it's closed, information does not pass through. The last really important thing is that the gates are dynamic: they're not set to some constant value for the whole sequence. They're different on each timestep t, and their value, that is, the decision of whether they're open or closed and in which ways, is computed based on the current context. Okay, so here are the equations for the LSTM, which might make this clearer. Suppose we have some sequence of inputs x_t, and we want to compute a sequence of hidden states h_t and cell states c_t; here's what happens on timestep t. The first set of equations shows the three gates I talked about before. The first one is called the forget gate, and it controls what is kept versus what is forgotten from the previous cell state, the previous memory. You can see that the forget gate is computed from the previous hidden state h_{t-1} and the current input x_t; that's what I meant when I said it's dynamic and computed based on the current context. You can also see that it's computed using the sigmoid function, which means each element is somewhere between zero and one. The next gate is called the input gate, and this one controls what parts of the new cell contents are written to the cell.
So the idea is that you have this memory cell, and the input gate controls how and what you get to write to it. The last one is called the output gate. This one controls what parts of the cell are output to the hidden state, so you could view it as controlling the read function: we're going to read some information from our memory cell, that information gets put into our hidden state, and this gate controls that. And yes, sigma there is just the sigmoid function, as noted before. All right, so the next set of equations shows how we use these gates. On the first line, you can regard c-tilde as the new cell content: this is the new content you want to write to the cell, and it's also computed from your previous hidden state and your current input, and it goes through a tanh non-linearity. So this is the main content that you compute based on the context and want to write into memory. On the next line, we use the forget gate to selectively forget some of the information from the previous memory cell. You can see we're doing element-wise products; that's what the little circle means. The idea is that, since f_t is a vector of values between zero and one, when you take an element-wise product between f_t and the previous cell state c_{t-1}, you're essentially masking out some of the information from the previous cell state. Where f is one, you copy the information over; where f is zero, you erase it, you forget it. The other half of that equation, i_t times c-tilde_t, is the input gate controlling which parts of the new cell contents get written to the cell. And the last thing we do is pass the cell through a tanh, which just adds another non-linearity, and then pass that through the output gate, and that gives you the hidden state. In LSTMs, we often think of the hidden states as being the outputs of the RNN. The reason is that you can view the cell state as internal memory that's not generally accessible to the outside, whereas the hidden states are the parts you pass on to the next part of the model; that's why we view them as the output. And again, the circles are element-wise products; that's how we apply the gates. Did anyone have any questions about this? Okay. As a reminder, all of these are vectors of the same length n.
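To make the six equations concrete, here's a minimal sketch of a single LSTM step. The weight names W_f/U_f/b_f and the params dict are hypothetical conventions for this sketch; real implementations like torch.nn.LSTMCell pack the parameters differently.

```python
import torch

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM timestep, following the six equations above.
    params is assumed to hold one (W, U, b) triple per gate."""
    p = params
    # Gates: each is a vector of values in (0, 1) thanks to the sigmoid.
    f = torch.sigmoid(p['W_f'] @ h_prev + p['U_f'] @ x + p['b_f'])  # forget gate
    i = torch.sigmoid(p['W_i'] @ h_prev + p['U_i'] @ x + p['b_i'])  # input gate
    o = torch.sigmoid(p['W_o'] @ h_prev + p['U_o'] @ x + p['b_o'])  # output gate
    # New candidate cell content, computed from the current context.
    c_tilde = torch.tanh(p['W_c'] @ h_prev + p['U_c'] @ x + p['b_c'])
    # Selectively forget old cell content, selectively write new content
    # (element-wise products play the role of the circles in the slides).
    c = f * c_prev + i * c_tilde
    # Read selected parts of the cell into the hidden state (the "output").
    h = o * torch.tanh(c)
    return h, c
```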
Okay. Some people learn better from diagrams than from equations, and here's a diagram presentation of the same idea. This is a really nice diagram from a blog post by Chris Olah about LSTMs, and that's a good place to start if you want an intuitive understanding of what LSTMs are. In this diagram, the green boxes represent timesteps; let's zoom in on the middle one and see what's happening. Within one timestep, you can see that the diagram shows exactly the same thing as the six equations on the previous slide. The first thing we do is use the current input x_t, at the bottom, and the previous hidden state h_{t-1}, on the left, to compute the forget gate; you can see f_t on that arrow there. Then you apply the forget gate to the previous cell, which is the same thing as forgetting some of the cell content from last time. After that, you can compute the input gate, which is computed in much the same way as the forget gate, and you use the input gate to decide which parts of the new cell content get written to the cell; that gives you the cell c_t. So here you can see that you compute the input gate and the new content, and then you gate the content and write it to the cell. Now that we've got our new cell c_t, the last things we need to do are to compute the new output gate o_t, and then use the output gate to select which parts of the cell contents get read and put into the new hidden state h_t. That's the same thing as the equations we saw on the previous slide. Okay, so that's LSTMs. Is there a question? [inaudible] The question is, why do we apply a tanh in the very last equation on this slide, to the cell, before applying the output gate, given that the new cell content already went through a tanh? Let's see. I'm not sure. I suppose a general answer is that it must be adding some expressivity in some way, and it isn't just applying tanhs back to back, because you do have the gates in between. So I suppose there must be a reason, somewhat like how, when you apply a linear layer, you want a non-linearity before the next linear layer; maybe we're viewing these gating steps as a kind of linear layer? I'm not sure; I'll look it up. Okay, so that's LSTMs. And if you recall, we were- oh, another question? Yeah. Why is it that the forget gate doesn't look at the previous cell state, just at the hidden state? It seems like if you're deciding what to forget from the cell state, you should look at it. So the question is, why is the forget gate computed only from the previous hidden state and the current input, and not from c_{t-1} itself? Because surely you'd want to look at the thing in order to decide whether to forget it. That's a pretty good question. I suppose one reason you might think this works fine is that the LSTM might be learning a general algorithm for where it stores different types of information in the cell: maybe it learns that a particular position in the cell holds information about a particular semantic thing, and that in a given situation it wants to use or forget that position. But I haven't entirely convinced myself why you wouldn't want to look at the contents of the cell itself in order to decide. Another thing to notice is that h_{t-1} was read from c_{t-1}, so some of that information is there, though not necessarily all of it.
I'm not sure; that's another thing I'll need to look up, I guess. Any other questions? Okay. So, that's LSTMs, and LSTMs were introduced to try to solve the vanishing gradient problem. So the question is: how exactly does this architecture make the vanishing gradient problem any better? Well, you can see that the LSTM architecture actually makes it easier for RNNs to preserve information over many timesteps. While it was kind of difficult for the vanilla RNN to preserve information through all of the hidden states, there's a fairly easy strategy that makes it simple for the LSTM to preserve information: namely, if the forget gate is set to remember everything on every step, that ensures that the information in the cell is preserved indefinitely, over many timesteps. Now, I don't know whether that's actually a good strategy for whatever task you're trying to do, but my point is that there is at least a straightforward way for the LSTM to keep information over many steps, and as we noted, that's relatively harder for the vanilla RNN to do. You can think of this as the key reason why LSTMs are better able to preserve information and thus more robust to the vanishing gradient problem. However, you should know that LSTMs don't guarantee that there's no vanishing or exploding gradient; you can still have those problems, but they're easier to avoid. Okay. So, LSTMs have been shown to be more robust to the vanishing gradient problem, and I'm going to tell you a little about how they've actually been successful in real life. You have a question? Yeah? [inaudible] Okay, it's a great question. The question is: given these LSTM forward equations, why don't we have the vanishing gradient problem? Why does the logic about the chain rule making things smaller and smaller, or bigger and bigger, not apply? I think the key here is that, in the vanilla RNN, the hidden states are a kind of bottleneck: all gradients must pass through them, so if one of those gradients is small, then all downstream gradients will be small. Whereas here, you can regard the cell as a kind of shortcut connection, at least in the case where the forget gate is set to remember things: if the forget gate keeps the cell mostly the same, then the gradient does not vanish via the cell. That means that for the gradient of something in the future with respect to something in the past, there's a potential route for the gradient to go via the cell that doesn't necessarily vanish.
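Here's a toy numeric illustration of that intuition. This is not the exact gradient, which involves more terms, but the route through the cell contributes roughly one forget-gate factor per timestep, so the product of those factors is what matters.

```python
import torch

steps = 100
f_half = torch.full((steps,), 0.5)   # gate half-closed on every step
f_open = torch.full((steps,), 0.99)  # gate almost fully open every step

print(torch.prod(f_half))  # ~7.9e-31: the signal vanishes over 100 steps
print(torch.prod(f_open))  # ~0.37: the signal survives 100 steps
```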
One more question? [inaudible] So I think the question was: how do you check that your gradients are correct, given that there are now multiple routes for information to travel? I suppose this somewhat relates to what we talked about last time with the multivariable chain rule, about the derivative of the loss with respect to a repeated weight matrix: we saw that if there are multiple routes, the multivariable chain rule says you add up the gradients. So if your question is how you do the calculus correctly, I guess you just apply the multivariable chain rule, and it is more complicated with LSTMs. If you're using PyTorch, you don't have to do that yourself; if you're going to implement it yourself, you might have a more difficult time. Okay. All right, so let's talk about LSTMs and how they work in the real world. In the pretty recent past, around 2013 to 2015, LSTMs started achieving a lot of state-of-the-art results on a variety of different tasks, including, for example, handwriting recognition, speech recognition, machine translation, parsing, and image captioning. Over this period, LSTMs became the dominant approach in a lot of these application areas, because they worked convincingly better than vanilla RNNs. However, today in 2019, things change pretty fast in deep learning, and other approaches, for example transformers, which you're going to learn about later in the class, seem to have become the dominant approach in some of these application areas. To look into this, I had a look at WMT, which is a machine translation conference and also a competition where people submit their MT systems to be evaluated. I looked at the summary report for WMT 2016, did a quick Ctrl+F, and found that the word RNN appeared 44 times; so it seems most people entering this competition were building their MT systems on RNNs, and in particular LSTMs. Then I looked at the report from 2018, just two years later, and found that the word RNN appeared only nine times, while the word transformer appeared 63 times, and in fact the organizers noted that most people seem to be using transformers now. So this shows that things change pretty fast in deep learning: the thing that was hot and new just a few years ago is now being passed by other kinds of approaches. You're going to learn more about transformers later, but I guess that gives you an idea of where LSTMs currently stand in applications. Okay. So, the second kind of RNN we're going to learn about is gated recurrent units, or GRUs. These, fortunately, are simpler than LSTMs; in fact, that was the motivation for proposing them. They were proposed in 2014 as a way to retain the strengths of LSTMs while getting rid of unnecessary complexity. In a GRU, we don't have a cell state; we again just have a hidden state. But the thing it has in common with LSTMs is that we use gates to control the flow of information. So, here are the equations for the GRU. We start off with two gates. The first is called the update gate, and this controls what parts of the hidden state are updated versus preserved. You can view this as playing the role of both the forget gate and the input gate in the LSTM, and it's computed in much the same way as the LSTM gates were. The second gate is called the reset gate, r_t, and this gate controls which parts of the previous hidden state are used to compute the new content. You can think of the reset gate as selecting which parts of the previous hidden state are useful versus not useful: it's going to discard some things and select others. Okay. So, here's how those gates get used.
You can think of h-tilde here as the new hidden state content. What's going on in that equation is that we apply the reset gate to the previous hidden state h_{t-1}, put all of that through a linear transformation and a tanh, and that gives us the new content that we want to write to the hidden state. Then, lastly, the new hidden state is a combination of this new content and the previous hidden state. The important thing to notice is that we have these u_t and one-minus-u_t terms: it's a balance, right? The update gate sets the balance between preserving things from the previous hidden state and writing new content. Whereas in the LSTM those were two completely separate gates that could take whatever values, here we have the constraint that they're balanced: if you have more of one, you must have less of the other. So this is one way the creators of the GRU sought to simplify the LSTM: by having a single gate play both of these roles. Okay. So, that's GRUs, and I think it's a little less obvious, just looking at them, why GRUs help with the vanishing gradient problem, because there's no explicit memory cell like there is in LSTMs. I think the way to look at it is that GRUs can also be viewed as a solution to the vanishing gradient problem because, like LSTMs, GRUs make it easier to retain information long-term. For example, if the update gate u_t is set to zero, then we keep the hidden state exactly the same on every step. Again, that's maybe not a good idea for your task, but it is at least a strategy you can easily express in order to retain information over long distances. So that's essentially the same explanation of how GRUs make it potentially easier for RNNs to retain information long-term. Okay, so we've learned about these two different types of RNNs.
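Putting the GRU equations together, here's a minimal sketch of one step; as with the LSTM sketch above, the params dict and weight names are illustrative conventions, not a standard API.

```python
import torch

def gru_step(x, h_prev, params):
    """One GRU timestep; params is assumed to hold one (W, U, b) per part."""
    p = params
    u = torch.sigmoid(p['W_u'] @ h_prev + p['U_u'] @ x + p['b_u'])  # update gate
    r = torch.sigmoid(p['W_r'] @ h_prev + p['U_r'] @ x + p['b_r'])  # reset gate
    # New candidate content, computed from a *reset* view of the old state.
    h_tilde = torch.tanh(p['W_h'] @ (r * h_prev) + p['U_h'] @ x + p['b_h'])
    # A single gate balances keeping the old state vs. writing new content:
    # with u = 0 the hidden state is copied through unchanged.
    return (1 - u) * h_prev + u * h_tilde
```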
Yes, a question? [inaudible] I think the question was whether we should view the two gates in the GRU as a precise analogy to the gates in the LSTM, or as more of a fuzzy analogy. I'd say probably more of a fuzzy analogy, because there are other changes going on here; for example, the fact that there's no separate memory cell means they're not performing exactly the same functions. Okay. So, we've learned about LSTMs and GRUs, which are both more complicated forms of RNNs than vanilla RNNs, and both more robust to the vanishing gradient problem. So it would be useful to know which of these we should use in practice: which is more successful, the LSTM or the GRU? I did a little reading, and it looks like researchers have proposed many different types of gated RNNs; it's not just GRUs and LSTMs, there are many other papers with lots of other variants, but these are definitely the two most widely used. And probably the biggest difference between the two is that GRUs are simpler and quicker to compute, and they have fewer parameters. This makes an actual practical difference to you as a deep learning practitioner, because a network built on GRUs is faster to run forwards, faster to train, and so on. Other than that, there appears to be no very conclusive evidence that either LSTMs or GRUs consistently outperform the other across lots of different tasks. It seems that often GRUs perform as well as LSTMs, but there are cases where one of them performs better than the other. As a rule of thumb, LSTM is often a good default choice to start with, especially if your data has particularly long dependencies, because there's some evidence that LSTMs are slightly better at keeping information over very long distances. Also, if you have a lot of training data, LSTMs may be the better choice, since they have more parameters, which means you may need more training data to learn them well. So a rule of thumb is: start with an LSTM, and if you're happy with its performance and how long it takes to train, stick with it; if you need it to be more efficient, switch to a GRU and see how the performance and speed compare. All right. So, we've talked about how vanishing and exploding gradients are a problem that occurs a lot in RNNs, but the question is: is it only an RNN problem? Does it occur in other kinds of neural networks as well? And the answer is no, it's not just an RNN problem. In fact, vanishing and exploding gradients are a significant problem for most neural architectures, such as feed-forward and convolutional networks, especially when they're deep. And this is a serious problem, because there's no point having a really cool neural architecture if you can't train it efficiently because of the vanishing gradient problem. In particular, in these feed-forward and convolutional networks, you often have gradients becoming vanishingly small during backpropagation, because of the chain rule, because of multiplying by all these intermediate gradients, or sometimes due to your choice of non-linearity function. If this happens, the lower layers of your convolutional or feed-forward network have much smaller gradients than the higher layers, which means they change very slowly during SGD, and overall your network is very slow to train. So one solution, really a family of solutions we've seen in recent years, is that there have been lots of proposals for new types of deep feed-forward or convolutional architectures that add more direct connections in the network. The idea, as we touched on before, is that if you add direct connections between layers, and not just adjacent layers but layers further apart, then it becomes much easier for the gradients to flow, and easier to train the network overall. I'm going to show you some examples of these, in particular because it's fairly likely you'll run into these kinds of architectures when you're doing your projects and reading papers. One example is something called residual connections; the network itself is sometimes referred to as ResNet. Here we've got a figure from the related paper. What's going on in this diagram is that you have the usual setup: a weight layer, a non-linearity (ReLU), and another weight layer.
If you regard that two-layer function as f(x), then instead of just transforming x into f(x), they compute f(x) plus x. They add this identity skip connection, where the input x skips over the two layers and is added to their output. The reason this is a good idea (these are also known as skip connections) is that the identity connection preserves information by default. Imagine you initialize your network and the weight layers get small random values: if the weights are small and close to zero, then the whole block is something like a noisy identity function, so you preserve information by default through all of your layers. In a very deep network, that means that even after many layers, you still have something like your original input. The people who wrote this paper showed that if you don't have something like skip connections, deep networks can actually perform worse on some tasks than shallow networks, not because they're not expressive enough, but because they're too difficult to learn: when you attempt to train the deep network, it just doesn't learn effectively, and you end up with worse performance than the shallow network. They showed that when they added these skip connections, the deep networks became much more effective, and they got good performance. Another example, which takes this idea further, is something called dense connections, or DenseNet. Again, this was proposed in a feed-forward or convolutional setting, and it's the same idea as skip connections, except it connects everything to everything: you add skip connections from all layers to all later layers, and they showed that this performs even better. The last one I want to talk about, which I don't have a picture for, is something called highway connections. This is similar to residual or skip connections, but the idea is that instead of just adding the identity connection, you have a gate that controls the balance between passing through the identity and computing the transformation. So instead of f(x) plus x, you have something like gate times f(x), plus one-minus-gate times x. This work was actually inspired by LSTMs, but instead of applying the gating in a recurrent setting, they applied it in a feed-forward setting; there's a sketch of both ideas below.
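Here's a minimal sketch of the residual and highway ideas. The linear layers are illustrative stand-ins; the actual ResNet paper uses convolutional layers with batch normalization, and the highway paper has its own parameterization details.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual (skip) connection: output = F(x) + x."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                               nn.Linear(dim, dim))

    def forward(self, x):
        return self.f(x) + x  # the identity path preserves x by default

class HighwayBlock(nn.Module):
    """Highway connection: a learned gate balances F(x) against x."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))   # gate in (0, 1), like an LSTM gate
        return t * self.f(x) + (1 - t) * x
```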
Okay, I'm going to keep going for now. So, overall, the question was how much of a problem vanishing and exploding gradients are outside the setting of RNNs, and I think the important takeaway is that it is a big problem in general, but it is a particular problem for RNNs. RNNs are particularly unstable, and this is essentially due to the repeated multiplication by the same weight matrix. If you remember from last time, the characteristic thing about RNNs that makes them recurrent is that you apply the same weight matrix over and over again; this is the core reason why they're so prone to vanishing and exploding gradients, and you can see more information about that in the paper. Okay. So, I know there's been a lot of dense information today, a lot of notation, so here's a recap; if I've lost you at any point, now's a good time to jump back in, because it's going to get a little easier to understand. Okay, recap: what have we learned today? The first thing we learned about was the vanishing gradient problem: what it is, why it happens, and why it's bad for RNNs, for example RNN language models. We also learned about LSTMs and GRUs, which are more complicated RNNs that use gates to control the flow of information, and by doing that they're more resilient to the vanishing gradient problem. Okay. For the remainder of this lecture, I think we've got about 20 minutes left, we're going to learn about two more advanced types of RNNs. The first is bidirectional RNNs, which are all about information flowing both left-to-right and right-to-left, and then we're also going to learn about multi-layer RNNs, which is when you apply multiple RNNs on top of each other. I'd say both of these are pretty simple conceptually, so they shouldn't be too hard to understand. All right, let's start with bidirectional RNNs. This is a picture you saw at the end of the last lecture. If you remember, sentiment classification is the task where you have some input sentence, such as "the movie was terribly exciting!", and you want to classify it as positive or negative sentiment; in this example, it should be classified as positive. So this is an example of how you might try to solve sentiment classification with a fairly simple RNN model. Here we're using the RNN as a kind of encoder of the sentence, the hidden states represent the sentence, and we do some kind of combination of the hidden states to compute what we think the sentiment is. So, my question is this. Look at the hidden state that corresponds to the word "terribly"; we're regarding this hidden state as a representation of the word "terribly" in the context of the sentence. For this reason, we sometimes call hidden states in this kind of situation contextual representations, because each one is a representation of a word in the context of the sentence. The thing to notice is that this contextual representation only contains information about the left context. For "terribly", the left context is the words "the movie was", and this hidden state, the one with the blue box around it, has only seen information to the left; it hasn't seen the words "exciting" or the exclamation mark. So what we're asking is: what about the right context? The right context of "terribly" is "exciting" and the exclamation mark. Do we think the right context is useful here, something we want to know about? I would argue that in this example it's actually quite important, because we've got the phrase "terribly exciting". If you look at the word "terribly" in isolation, terrible or terribly usually means something bad, right? But "terribly exciting" means something good; it just means very exciting. So knowing about the right context, the word "exciting", might quite significantly modify your perception of the meaning of "terribly" in the context of the sentence, and especially given that we're trying to do sentiment classification, that's important.
So this motivates why you might want information from both the left and the right when you compute your representations. If, when you were a kid, your parents told you to look both ways before you cross the street, you can regard this as the same kind of idea: there's useful information to the left and the right that you'd like to know about before you do anything. Okay, so that's the motivation, and here is how a bidirectional RNN might work in practice; I have a kind of accidentally festive color scheme here. The idea is that you have two RNNs going on. You have the forward RNN, as before, which encodes the sentence left to right. And then, separately, you also have a backward RNN, which has completely separate weights from the forward RNN. The backward RNN does the same thing, except that it encodes the sequence from right to left, so each of its hidden states is computed based on the one to its right. Finally, you take the hidden states from the two RNNs and concatenate them together, and that gives you your final representations. In particular, if we now think about the contextual representation of the word "terribly", this vector has information from both the left and the right, because the forward and backward RNNs respectively carry information from the left and from the right. The idea is that these concatenated hidden states can be regarded as the outputs of the bidirectional RNN: if you're going to use the hidden states for any further computation, it's the concatenated hidden states that you pass on to the next part of the network. Here are the equations, which say the same thing. You have your forward RNN, and here we've got a notation you might not have seen before, where it says RNN and then, in brackets, the previous hidden state and the input; that's simply saying that h_t is computed from the previous hidden state and the input, and the RNN could be a vanilla RNN, a GRU, or an LSTM; it doesn't really matter, we're looking at it abstractly. So you have these two separate RNNs, RNN-forward and RNN-backward, and generally these have separate weights, although I have seen some papers where they share weights; it seems that sometimes that works better, perhaps when you have enough training data. And then, finally, we regard these concatenated hidden states, which we just denote h_t, as the hidden state of the bidirectional RNN. So, the previous diagram is pretty unwieldy; here's a simplified diagram, and this is probably the only kind of diagram you're going to see from now on to denote bidirectional RNNs. What we've done here is make all of the horizontal arrows go both left and right, to represent that this is a bidirectional RNN. The other thing you should assume is that the hidden states depicted here, these red rectangles with the dots, are the concatenated forward and backward hidden states of the bidirectional RNN. [inaudible] Okay. So the question is: would you train your forward and backward RNNs separately, on some kind of task, and then concatenate them once they're separately trained networks, or would you train them all together?
It seems to me that it's much more common to train them together; I don't think I've heard of anyone training them separately. So, yeah, it seems like the standard practice is usually to train them together. Does that make sense? [inaudible] So, suppose we were trying to build a sentiment classification system using the bidirectional RNN. Then what you'd do, which maybe I should have pictured but didn't have space for, is the same thing you were doing with the unidirectional RNN, let's say an element-wise max or mean over the hidden states to get your sentence encoding, except you do it over the concatenated hidden states. Okay. So, an important thing to note is that in talking about applying bidirectional RNNs, we've assumed that we actually have access to the entire input sequence. We assumed we have the full sentence, "the movie was terribly exciting", and that was a necessary assumption in order to be able to run both the forward and the backward RNN. So there are some situations where you can't assume this. For example, in language modeling you only have access to the left context, by definition of the task: you only know the words that have come so far, not what's coming next. So you can't use a bidirectional RNN to do language modeling in the way we've depicted here, because you don't have the full sequence. However, if you do have access to the entire sequence, for example if you're doing any kind of encoding similar to the sentiment example, then bidirectionality is pretty powerful, and you should probably regard it as a good thing to do by default, because it turns out that getting information from both the left and the right makes it a lot easier to learn useful contextual representations. In particular, as a preview of something you're going to learn about later in the class, there's a model called BERT, B-E-R-T, which stands for Bidirectional Encoder Representations from Transformers. This is a pretty recent system, proposed just a few months ago, and it's a pre-trained contextual representation system that's heavily reliant on the idea of bidirectionality; it turns out that the bidirectional nature of BERT is pretty important to its success. You're going to learn more about that later, but it's an example of how bidirectionality can give you much more powerful contextual representations. Okay. So, the last thing we're going to talk about today is multi-layer RNNs. You could regard RNNs as already being deep in some sense, because you've unrolled them over potentially very many timesteps, and you could regard that as a kind of depth. But there's another way that RNNs can be deep: if you apply multiple RNNs one after another, that's a different way to make your RNN deep, and that's the idea behind a multi-layer RNN. The reason you'd want to do this is that it might allow the network to compute more complex representations; this is the logic behind deep networks in general. If you're familiar with why deeper is better for, let's say, convolutional networks, this is the same logic.
It's saying that your lower RNN layers might compute lower-level features, maybe keeping track of syntax, say, and your higher RNN layers compute higher-level features, maybe semantics. A note on terminology: these are sometimes called stacked RNNs. So, this works much as you'd imagine. Here's an example of how a multi-layer RNN might work if it has three layers. This one is unidirectional, but it could be bidirectional if you have access to the entire input sequence. The main thing is that the hidden states from one RNN layer are used as the inputs to the next RNN layer. Any questions on this? Yeah? [inaudible] That's a great question. The question, I think, is about the order of computation: in what order do you compute all of these hidden states? I suppose there's some flexibility: you could compute all of the hidden states for the first timestep, then all of them for the second timestep, and so on; or you could do all of RNN layer one and then all of RNN layer two. I think that when you call the PyTorch function to do a multi-layer RNN, it does all of RNN layer one, then two, then three; that's what I think happens, but logically there's no reason you couldn't do it the other way. Yep? [inaudible] Yes, that's a great point as well: someone pointed out that if the layers were bidirectional, then you no longer have that flexibility; you'd have to do all of layer one before layer two. Good point. Anyone else? Okay. So, multi-layer RNNs in practice tend to perform pretty well, in that when I look at RNN-based systems that are doing very well on some task, they're usually some kind of multi-layer RNN. But they certainly aren't as deep as the deep convolutional or feed-forward networks you might have seen in, for example, image tasks: whereas very deep convolutional networks can be hundreds of layers now, you certainly aren't getting RNNs that deep. For example, in this paper from Google, they do a large hyperparameter search for neural machine translation, to find which kinds of hyperparameters work well for NMT, and they found that two to four layers was best for the encoder RNN and four layers was best for the decoder RNN. (You'll find out what encoder and decoder mean next time.) Those are fairly small numbers. They did find that if you add skip connections or dense connections, it becomes much easier to effectively train even deeper RNNs, maybe up to eight layers, but these certainly aren't hundreds of layers deep. One of the reasons RNNs don't tend to be nearly as deep as these other kinds of networks is that, as we noted before, RNNs have to be computed sequentially; they can't be computed in parallel, which makes them pretty expensive to compute. If you have depth in two dimensions, depth over the timesteps and depth over the RNN layers, then it becomes very, very expensive to compute these RNNs. So that's another reason they don't get very deep.
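In PyTorch, stacking and bidirectionality are just constructor arguments, so a multi-layer bidirectional encoder is a one-liner; the sizes below are purely illustrative.

```python
import torch
import torch.nn as nn

# A 3-layer bidirectional LSTM: the hidden states of layer k are the inputs
# to layer k+1, and forward/backward states are concatenated in the output.
encoder = nn.LSTM(input_size=300, hidden_size=256,
                  num_layers=3, bidirectional=True)

x = torch.randn(20, 8, 300)      # (seq_len, batch, embedding_dim)
outputs, (h_n, c_n) = encoder(x)
print(outputs.shape)             # (20, 8, 512): 2 * 256 from the concatenation
```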
Again, we just mentioned transformers, which you're going to learn about later, and from what I can tell of what people are using these days, transformer-based networks can be pretty deep. For example, there's a 24-layer version and a 12-layer version; admittedly, that was trained by Google, and they have a lot of computational power. But I think part of the reason these transformer-based networks can be quite deep is that they have a lot of these skip-like connections; in fact, a key innovation of transformers is that they're built on a lot of skip connections. Okay, any questions? We're almost done. All right. So, here's a summary of what we've learned today. I know it's been a lot of information, but here are four practical takeaways that are probably useful for your projects, even if you didn't find them very interesting in themselves. The first is that LSTMs are powerful: certainly a lot more powerful than vanilla RNNs. GRUs are also more powerful than vanilla RNNs, and the one difference that's consistent is that GRUs are faster than LSTMs. The next takeaway is that you should probably clip your gradients, because if you don't, you're in danger of walking off cliffs and ending up with NaNs in your model. The next tip is that bidirectionality is useful when you can apply it: basically, any time you have access to the entire input sequence, you can apply bidirectionality, so you should probably do it by default. And the last tip is that multi-layer RNNs are pretty powerful, and again, you should probably use them if you have enough computational power to do so; but if you're going to make your multi-layer RNN pretty deep, you might need skip connections. All right. Thanks.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_8_Translation_Seq2Seq_Attention.txt
So welcome to the Machine Translation lecture, which is a kind of culmination of this sequence of three lectures on RNNs and related topics. Let's have a few announcements first. The first thing is, as you probably noticed when you came in, we're taking attendance today, so you need to sign in with the TAs who are outside the auditorium. If you missed it, don't get up now; it's fine, there will be time to sign in after the lecture. And if you have any questions about special cases with the attendance policy, you should check out a Piazza post we put up last night with some clarifications. Also a reminder that Assignment 4 content is going to be covered today, so you're going to have everything you need to do Assignment 4 by the end of today; do get started early, because the model takes four hours to train. The other announcement is that we're going to send out our mid-quarter feedback survey sometime in the next few days, so please do fill it out: you'll get 0.5% credit, and you'll also help us make the class better for the rest of the quarter. Okay. So here's the overview of what we're going to do today. First, we're going to introduce a new task in NLP, which is machine translation, and then we're going to introduce a new neural architecture called sequence-to-sequence; the connection is that machine translation is a major use case of sequence-to-sequence. After that, we're going to introduce a new neural technique called attention, which improves sequence-to-sequence a lot. Okay. So Section 1 is going to be about a bit of machine translation history: pre-neural machine translation. Machine translation, or MT, is the task of translating a sentence x, which is in the source language, whatever language you're translating from, into a sentence y, which is in another language, the target language. Here's an example: suppose x is this French sentence. Could anyone in the audience, a French speaker, translate this to English for us? [A student answers: "The man is born free, and everywhere he is in irons."] Great. So that was something like "the man is born free, but everywhere he is in irons"; that was a fairly literal translation. This quote by Rousseau is usually translated as "man is born free, but everywhere he is in chains." But there's an ambiguity: should "fers" be, literally, irons, or chains? Also, you could choose to translate "l'homme" as man, or maybe humankind. So this is an example of machine translation, and there are already quite a few choices you can make. So, the beginning of machine translation as an AI task came in the early 1950s. In particular, there was a lot of work translating Russian to English, because the West was very interested in listening to what the Russians were saying during the Cold War. And we've got a fun video here, which shows the state of machine translation in 1954. [Video plays:] "They hadn't reckoned with ambiguity when they set out to use computers to translate languages. A $500,000 simple calculator, most versatile electronic brain known, translates Russian into English. Instead of mathematical wizardry, a sentence in Russian is to be fed [in]..."
"One of the first non-numerical applications of computers, it was hyped as the solution to the Cold War obsession of keeping tabs on what the Russians were doing. Claims were made that the computer would replace most human translators." "[inaudible] you're just in the experimental stage. When you go in for full-scale production, what will the capacity be?" "We should be able to do, with the help of a commercial computer, about one to two million words an hour, and this will be quite an adequate speed to cope with the whole output of the Soviet Union in just a few hours' computer time a week." "When do you hope to be able to achieve this feat?" "If our experiments go well, then perhaps within five years or so." So, in this video, I think there are a number of interesting things. Firstly, we can see an example of how AI hype is nothing new: even in 1954, they were talking about this machine translation system as if it were an electronic brain, which I think overstates how general it is. They were also, at least some of them, fairly optimistic that this machine translation system was going to replace humans anytime soon. So yeah, that's pretty interesting. And the thing is that these systems were actually mostly rule-based, by which I mean they mostly used a bilingual dictionary between Russian and English: they were essentially looking up the Russian words and their English counterparts, and they stored these big bilingual dictionaries on large magnetic tapes. So it was certainly a huge technical feat at the time, but some people were probably too optimistic about how quickly it would replace humans. Jumping forward several decades, now I want to tell you about statistical machine translation. The core idea of statistical machine translation is that you learn a probabilistic model from data in order to do the translation. As an example, suppose as before that we're translating from French to English. The idea is that you want to find the best English sentence y given the French sentence x, and mathematically you can formulate this as finding the argmax over y of the conditional probability of y given x, where the model you're learning is this probability distribution P. What we usually do is break this probability down into two components using Bayes' rule. This means that finding the y that maximizes the probability of y given x is equivalent to finding the y that maximizes the probability of x given y, times the probability of y. Of the two components, the first is a translation model, which keeps track of how words and phrases should be translated: the idea is that it knows how French words and English words might translate to each other, or how small phrases and chunks of words should be translated, and this is learned from a lot of parallel data; I'll tell you later how we do that. The second component, P(y), is just a language model. We learned about this last week: a language model is a system that can predict the next word, but it can also be thought of as a system that tells you the probability of a sequence of words. So here, if we're translating from French to English, P(y) is an English language model.
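Written out (assuming amsmath), the objective and its Bayes-rule decomposition, as just described, are:

```latex
\[
  \hat{y} \;=\; \operatorname*{argmax}_{y} \, P(y \mid x)
          \;=\; \operatorname*{argmax}_{y} \,
                \underbrace{P(x \mid y)}_{\text{translation model}}\;
                \underbrace{P(y)}_{\text{language model}}
\]
```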
So the reason why we want to break down this single conditional probability distribution into the product of two different ones is that it's a kind of division of labor. Instead of one conditional distribution needing to understand how to translate, how to write good English text, sentence structure, and everything at once, we separate the work: the translation model on the left, in blue, mostly knows about local translation of small chunks of words and phrases, whereas the language model on the right takes care of writing good English: good sentence structure, word order, and so on. You already know how to learn a language model, because we learned about that last time; you just need lots of monolingual data, in this case English data. So I'm going to tell you more about how we'd learn the translation model, which needs to be learned from parallel data. We need a large amount of parallel data to learn this translation model, and an early example of a parallel corpus is the Rosetta Stone. This is a stone that has the same text written in three different languages, and it was a hugely important artifact for the people trying to understand ancient Egyptian. In the 19th century, scholars discovered this stone, and it helped them figure out ancient Egyptian, because here was a parallel text with the same content in languages they did know. So this is a really important parallel corpus, and if you're ever in London, you can go to the British Museum and see it in person. So the idea is that you get your parallel data (obviously you need a larger amount than is on the stone, and hopefully it isn't written on a stone either) and you use it to learn your statistical machine translation model. You're trying to learn this conditional probability distribution of x given y, and what we actually do is break it down even further: we consider the probability of x and a given y, where a is the alignment. Alignment is how the words in the English sentence and the French sentence correspond to each other. Let me demonstrate with an example, where we're translating the sentence "Japan shaken by two new quakes" to French. You can see there's a pretty simple one-to-one alignment here of English words to French words, and they also appear in exactly the same order. The only thing that doesn't conform to that is the word "Le" in French, which we call a spurious word, because it has no direct counterpart in the English sentence; in English we just say "Japan", but in French you say "Le Japon". Alignment can be more complicated than that. For example, alignment can be many-to-one: in this example there are French words that each have multiple English words corresponding to them, so several English words map to one French word. That's what we call many-to-one alignment. It can go in the other direction too: alignment can be one-to-many. Here we have a single English word, "implemented", which has a one-to-many alignment, because a three-word French phrase corresponds to it. On the left and the right are two ways of depicting the same alignment: either as a kind of chart or as a graph.
So, a word like "implemented" that aligns one-to-many we call a fertile word, the idea being that it has many children in the target sentence. In fact, some words are very fertile. Here's an example where the source sentence, "il m'a entarté", means "he hit me with a pie": in French, this verb "entarter" means to hit someone with a pie, [LAUGHTER] and it has no single-word equivalent in English; we don't have a verb that means to hit someone with a pie. [LAUGHTER] I think it's really fun that French has a word for this. You wonder, maybe they do it so often that they need a single word for it; I don't know. [LAUGHTER] So this is an example of a fertile word, because it needs several corresponding English words to translate it. So we can have one-to-many and many-to-one. You can also have many-to-many alignments; you could call that phrase-level translation, or phrase-to-phrase. Here, the English sentence is "The poor don't have any money", and "don't have any money" corresponds to the French phrase "sont démunis". This is a many-to-many alignment because there's no obvious way to break this phrase-to-phrase alignment down into smaller word-to-word alignments. Okay, so that's what alignment is. And if you remember, we were thinking about how you'd learn this probability distribution over alignments in order to do statistical machine translation. The idea is that you learn the probability of x and a given y as a combination of many factors, or many features. You consider, for example, the probability of a particular word aligning to another particular word: how often do this English word and this French word align? But this also depends on, for example, their positions in the sentence: if they both appear near the end of their sentences, it's more likely that they align, whereas if one's at the beginning and one's at the end, that's less likely. You'd also consider things like the probability of a particular French word having a particular fertility: what's the probability of this word having three corresponding English words, and so on. All of these statistics are learned from your parallel data, and there are many other things you'd take into consideration. We're looking at a kind of overview of statistical machine translation today; you're not going to understand it in full detail, but we're getting an overview of how it works, because we're going to compare it to neural machine translation. Okay. So, we're learning this SMT system, and so far we've broken it down into two main components: the translation model and the language model, and we understand a little about how you might learn the translation model by breaking it down into alignments. So the question remains: how do you do the argmax over y? How do you find the English sentence y that maximizes this probability? One kind of brute-force solution would be to enumerate every possible y, every possible sequence of English words, maybe up to some length, and calculate this probability for all of them. It should be pretty clear that that's a no-go: it's way too expensive, and we're not going to get anywhere with it.
So the answer, for how you actually do this in practice, is that you use some kind of heuristic search algorithm to search for the best translation y, and along the way you discard hypotheses that are too low-probability. You search, and you prune the tree as you go, to make sure you're not keeping too many hypotheses on each step. This process of finding your best sequence is also called decoding. Here is an overview of how that works for SMT. In this example, we have a German sentence that translates to 'He does not go home', and you can see there's a kind of phrase-to-phrase alignment here. An overview of how decoding works in SMT: you consider lots of different hypotheses for how you might translate the individual words, then build up to consider how you might translate individual phrases, and the phrases get bigger. For example, on the top right, if it's not too small, you can see that the German word for house could be translated into the English word 'house', or 'home', or 'chamber', and so on. We consider all of these different hypotheses and look at how we might put them together to translate phrases, but you don't keep all of them all the time: you get rid of the ones that are too low-probability. This can also be depicted as a kind of tree, where you are exploring different options, searching through the space of options, but pruning the tree as you go. I know this is a very high-level description of how decoding works, and in fact, later in this lecture, you're going to see a detailed explanation of how this kind of decoding works for neural machine translation. Okay. So, to wrap up our overview of statistical machine translation: was it effective? SMT was a huge research field from the 1990s to about 2013, and the best systems during this time were extremely complex. They were extremely sophisticated and impressive systems, and SMT produced the best machine translation systems in the world. But they were very complex. For example, there were hundreds of important details that we haven't mentioned here at all, and many, many techniques that made the systems more sophisticated than what I've described today. In particular, the systems had many separately designed subcomponents. We already saw how you break the model down into two separate parts, but there were many more subcomponents than that, and often they had to be learned separately. This meant the engineers had to do a lot of feature engineering: you had to design features to capture the particular language phenomena you were interested in. It also required a lot of compiling and maintaining of extra resources, and in fact, you had to have different resources for different languages, so the work multiplied the more languages you had. An example of this: you had to have tables of equivalent phrases. If you were doing French-English translation, they would collect these tables of phrases considered similar, learned from the data, and that was a lot of information that had to be stored and maintained.
So overall, this was just a lot of human effort to maintain, and again, yes, you had to put in more human effort if you wanted to build an SMT system for a new language pair. Okay, are there any questions about SMT? Okay. So, moving on, that's SMT. Now we're going to move on to section two of this lecture. I want to take you back to the year 2014, for a dramatic re-enactment of what happened in the world of machine translation research. In 2014, something very dramatic happened, and that thing is called neural machine translation, and [LAUGHTER] I think it looks a little bit like this, if I'm not being too dramatic. So what is neural machine translation? NMT is a way to do machine translation using just a single neural network. The neural network architecture used is called sequence-to-sequence, or sometimes just seq2seq, and it involves two RNNs. It's called sequence-to-sequence because you're mapping one sequence to another, the source sentence to the target sentence, and you need two RNNs, basically to handle those two different sentences. All right, let's look at the diagram to see what sequence-to-sequence is in detail. We start off with our source sentence, and we'll use our example from before, 'il m'a entarté', which means 'he hit me with a pie'. We feed this into our encoder RNN. As you've seen before, I've drawn a unidirectional RNN, but this could be bidirectional; it could also be multi-layer; it could be vanilla, or it could be LSTM, and so on. Another thing to note is that we are passing word embeddings into this encoder RNN; I'm just not explicitly depicting that step. Okay. The idea of the encoder RNN is that it produces some kind of encoding of the source sentence. For now, let's assume the encoding of the source sentence is the final hidden state of the encoder RNN. What happens next is we pass this encoding of the source sentence over to the decoder RNN, which is going to translate into English. The decoder RNN is a language model; in particular, it's a conditional language model, like we talked about last time. It's conditional because it produces the target sentence conditioned on this encoding, and the encoding is the vector with the orange box around it. So how does this work? We start off by feeding the START token into the decoder, and we can get the first state of the decoder because we're using the encoding of the source sentence as the initial hidden state for the decoder. Then we get our first output from the decoder, which is a probability distribution over what word might come next. Let's suppose we take the argmax over that, and that gets us the word 'he', which in this case is correct, because that's probably the word you should start with. Okay, so then we take the word 'he' and feed it back into the decoder on the next step, and do the same thing again: we take the argmax and get a new word, and we have 'he hit'. The idea is that you can continue doing this operation, and in that way you generate your target sentence, which will be something like 'he hit me with a pie', and you stop once your decoder produces the END token.
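As a minimal sketch of the generation loop just described, here is some PyTorch-style code. The vocabulary sizes, the single-layer unidirectional LSTMs, and the START/END token ids are all illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

# Illustrative sizes: source/target vocab, embedding dim, hidden dim.
V_SRC, V_TGT, D, H = 5000, 6000, 256, 512
src_emb = nn.Embedding(V_SRC, D)         # separate source and target
tgt_emb = nn.Embedding(V_TGT, D)         # embeddings / vocabularies
encoder = nn.LSTM(D, H, batch_first=True)
decoder = nn.LSTM(D, H, batch_first=True)
out_proj = nn.Linear(H, V_TGT)           # hidden state -> vocab logits
START, END = 1, 2                        # assumed special token ids

def greedy_translate(src_ids, max_len=30):
    # src_ids: LongTensor of shape (1, T_src)
    _, state = encoder(src_emb(src_ids))       # encoding = final state
    y = torch.tensor([[START]])
    result = []
    for _ in range(max_len):
        h, state = decoder(tgt_emb(y), state)  # condition via the state
        y = out_proj(h[:, -1]).argmax(-1, keepdim=True)  # take the argmax
        if y.item() == END:                    # stop at the END token
            break
        result.append(y.item())
    return result
```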
An important thing to note here is that this picture shows what happens at test time. This is how you generate text; it isn't what happens during training. I'll show you what happens during training later. But this process with the pink dotted arrows, where you feed each word back in, is what you do to generate text at test time. Any questions on this? Oh, another thing I should note is that you need two separate sets of word embeddings: word embeddings for French words and word embeddings for English words, so that's two separate sets, two separate vocabularies. Okay. As a side note, this architecture called sequence-to-sequence is actually pretty versatile; it's not just a machine translation architecture. You can phrase quite a few NLP tasks as sequence-to-sequence tasks. For example, summarization is a sequence-to-sequence task, because in goes your long text and out comes your short text. Dialogue can be seq2seq, because in goes the previous utterance and out comes your next utterance. Parsing can even be thought of as a sequence-to-sequence task, because you could say in goes the input text, and the output parse is expressed as a sequence. This might not be the best way to do parsing, but it is a way you can try. Lastly, you could even do something like code generation. Suppose you want to build a system that takes some kind of natural language input, such as 'sum up the numbers from 1 to 10', and outputs, say, some Python code that says sum(range(10)), or something like that. If you wanted to train an assistant to do this, you could view it as a translation task, where you're translating from English to Python. It's a pretty challenging translation task; it probably requires a lot more logic than, you know, French to English, but you can try, and people have tried: there are research papers where people have used seq2seq to do this kind of task. Okay. So to recap, seq2seq is an example of a conditional language model. It's a language model because the decoder is a language model predicting the next target word, but it's a conditional language model because it also conditions on your source sentence, represented by the encoding of the source sentence. You could view it like this: NMT is directly calculating the probability of the target sentence y given the source sentence x. If you look at this, you see that it's just breaking down the probability of the sequence y, which we suppose is of length capital T: the probability of the first word of y given x, times the probability of the second word of y given the words that came before and x, and so on. Each of the terms in the product on the right is the probability of the next target word given all the ones so far, plus the source sentence, and that's exactly the conditional probability that your language model produces. The reason I'm highlighting this is that, if you remember, in SMT we didn't directly learn the translation model P(y given x); we broke it down into smaller components. Whereas here in NMT, we are directly learning this model, and in some ways that's an advantage, because it's simpler.
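Written out, the decomposition just described (following the same symbols used above) is:

```latex
P(y \mid x) \;=\; P(y_1 \mid x)\, P(y_2 \mid y_1, x) \,\cdots\, P(y_T \mid y_1, \ldots, y_{T-1}, x)
\;=\; \prod_{t=1}^{T} P(y_t \mid y_1, \ldots, y_{t-1},\, x)
```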
You don't have to learn all of these different systems and optimize them separately; it's simpler and easier. So this is the model we're learning. The question is: how do we train this NMT system? Hopefully you should already have a good idea of how this would work, given that we've already seen how to train a language model, but here are the details just in case. You get your big parallel corpus, and let's say you have a sentence pair from it. This is what happens during training: you feed your source sentence into the encoder RNN, you feed your target sentence into the decoder RNN, and you pass the encoder's final hidden state over to be the initial hidden state of the decoder. Then, for every step of the decoder RNN, you produce the probability distribution of what comes next, which is the y-hats, and from those you can compute your loss. The loss is just the same as we saw for unconditional language models: it's the cross-entropy, or you could also say the negative log-likelihood, of the true next word. So, for example, on those selected steps, the loss is the negative log probability of the correct next word. And then, as before, we average all of these losses to get the total loss for the example. A phrase you might notice people using, for example in research papers, is 'end-to-end', and this is an example of learning a system end-to-end. What we mean by this is that the backpropagation happens end-to-end: one end is the loss function, and the other end, I guess, is the beginning of the encoder RNN. The point is that backpropagation flows through the entire system, and you learn the entire system with respect to this single loss. Yep? [inaudible] The question is: if the decoder RNN outputs the END token too early, how can you measure the loss on the words that came after that? So this is the difference between training time and test time, which is pretty confusing. At test time, we have the picture where you feed each produced token back in; in that scenario, once you produce END, you have to stop, because you can't feed END in as the input to the next step. But in training, you don't feed the thing that you produced into the next step: you feed the target sentence from the corpus, the gold target sentence, into the model. So no matter what the decoder predicts on a step, you don't use that prediction for anything other than computing the loss. Any other questions? Yeah. Is there a reason why you would backpropagate end-to-end instead of maybe training an encoder like [inaudible] model and then [inaudible] together? The question is: is there a reason why you would want to train end-to-end when, for example, you might want to train the encoder and the decoder separately? I think people view training end-to-end as favorable because the idea is that you can optimize the system as a whole. You might think that if you optimize the parts separately, then when you put them together, they will not necessarily be optimal together. So if possible, directly optimizing the thing that you care about, with respect to all of the parameters, is more likely to succeed.
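Here is a sketch of one training step as just described; feeding the gold target words in is commonly called teacher forcing. The module sizes and token ids are the same illustrative assumptions as in the earlier snippet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

V_SRC, V_TGT, D, H, START = 5000, 6000, 256, 512, 1   # illustrative
src_emb, tgt_emb = nn.Embedding(V_SRC, D), nn.Embedding(V_TGT, D)
encoder = nn.LSTM(D, H, batch_first=True)
decoder = nn.LSTM(D, H, batch_first=True)
out_proj = nn.Linear(H, V_TGT)

def training_loss(src_ids, tgt_ids):
    # src_ids: (1, T_src); tgt_ids: (1, T_tgt), assumed to end with END.
    _, state = encoder(src_emb(src_ids))        # encoding = final state
    # Teacher forcing: decoder inputs are the *gold* target words,
    # shifted right so that each step predicts the next gold word.
    dec_in = torch.cat([torch.tensor([[START]]), tgt_ids[:, :-1]], dim=1)
    h, _ = decoder(tgt_emb(dec_in), state)
    logits = out_proj(h)                        # (1, T_tgt, V_TGT)
    # Average negative log-likelihood of each true next word:
    return F.cross_entropy(logits.squeeze(0), tgt_ids.squeeze(0))
```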
However, there is a notion of pre-training, and as you said, maybe you'd want to learn your decoder RNN as a kind of language model, an unconditional language model, by itself. And that's something that people do: you might learn a very strong language model, then use that to initialize your decoder RNN, and then fine-tune it on your task. That's a valid thing you might try to do. Yep. Are you always [inaudible]? The question is: is the length of the source sentence and the length of the target sentence fixed? For example, is the source sentence always length 4? No, that's definitely not true, because in your parallel corpus you're going to have sentences of all lengths. This is more of an implementation or practicality question. The idea is that this is what you mathematically want to compute during training for each example, and you're going to have batches of examples; the question is how you actually implement that in practice. What you usually do, because it's easier to assume your batch is an even-sized tensor where everything is the same length, is pad any short sentences up to some predefined maximum length, or maybe the length of the longest example in your batch, and then make sure that you don't use any hidden states that came from the padding.
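A minimal sketch of that padding step; the PAD id of 0 and the toy lengths are assumptions:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Pad a batch of variable-length sentences up to the longest in the batch.
batch = [torch.tensor([4, 9, 7]),           # length 3
         torch.tensor([5, 8, 6, 3, 2])]     # length 5
padded = pad_sequence(batch, batch_first=True, padding_value=0)  # (2, 5)
mask = padded != 0    # True at real tokens; used to ignore padded states
```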
Yep. I believe two languages together [inaudible] possible to have a system [inaudible] that would be kind of universal with similar languages or something like that? Okay. So the question, I think, is that it seems like sometimes you wouldn't want to train things end-to-end, and there are circumstances in which you might want to train things separately; you mentioned, for example, having different languages mapped to each other. This is a totally valid point. So far we've kind of assumed that you learn language A to language B as a pair, and that's different from language A to language C, or even language B to language A. And that does mean you have n-squared many systems in the number of languages you're considering. So yeah, that's actually a valid idea, and it's something that people have researched: the idea that maybe you could mix and match your encoders and decoders. You could try to train a kind of general-purpose, let's say, English decoder, and then match it up with your different encoders. It is, I think, fairly complex to train them so that they all work together, but that is certainly something people have done. Let me just check the time. Okay, let's take one more question. Yep. So does the word embedding also come from the same corpus that we are training on? The question is: does the word embedding also come from the corpus that you're training on? I think there are a few options, just as we saw with language models: you could download pretrained word vectors like word2vec or GloVe and use those, and then either freeze them or fine-tune them as part of the end-to-end training; or you could initialize your word vectors randomly, close to zero, and learn them from scratch. All right. Okay, moving on. So now we understand how you would train a neural machine translation system, and we talked briefly about how you might do decoding or generation. What I showed you before is called greedy decoding: the idea that on each step, you just choose the argmax, the single best word, and then feed that in on the next step. This is called greedy decoding because you're just taking the best option you can see right now, and you really don't have a way to go back. So can anyone see a problem with this method? Maybe I've kind of given it away, but, yeah. [inaudible] You said too expensive. I guess it is expensive, in that you have to decode as a sequence, and a sequence is usually worse than something you can do in parallel. But what's wrong with the greediness? Can anyone suggest what's wrong with the greediness? Yeah? [inaudible] That's not necessarily going to give you the argmax over the entire sentence. That's exactly right; that's kind of what greediness means. So in practice, this might give you something like the following. We're trying to translate our running example sentence, and let's suppose on the first step we say 'he', then 'he hit', and then 'he hit a'. Oh no, that wasn't right; that wasn't the best thing to choose, but we kind of have no way to go back now. We just have to continue and try to make the best of it after saying 'he hit a', which isn't going to work out well. So that's the main problem with greedy decoding: there's no way to backtrack, no way to go back. So how can we fix this? This relates back to what I told you earlier about how we might use a kind of search algorithm to do decoding in SMT. But first, you might think exhaustive search is a good idea. Well, probably not, because it's still a bad idea for the same reasons as before. If you did want to do exhaustive search, and search through the space of all possible translations, then you would again be trying to consider which y maximizes this product of all of the individual probability distributions. As before, if you try to do this, then on each step t of the decoder you'd be tracking V-to-the-power-of-t possible partial translations, where V is your vocabulary size. By 'partial translation' I just mean, you know, half of a sentence so far, or something like that. This complexity, exponential in the sentence length, is just far too expensive. So yes, we're going to use some kind of search algorithm, and in particular we're going to use beam search decoding. The core idea of beam search decoding is that on each step of the decoder, you keep track of the k most probable partial translations, and we call partial translations hypotheses, because we're tracking multiple of them and we're not sure which one is best; we're thinking about several. Here k is an integer that we call the beam size, and in practice for NMT this is usually maybe 5 to 10. You can think of k as how big your search space is at any one time: if you increase k, you're going to consider more options on each step, and you might hope that this means you get a better-quality solution in the end, though of course it will be more expensive. So I said that we want to keep track of the k most probable partial translations, that is, hypotheses. This means we need some notion of how probable a hypothesis is: its score.
So the score of a hypothesis, which we write as y_1 up to y_t, is just its log probability. The log probability of this partial translation according to the language model can be broken down, as we saw before, into the sum of the individual log probabilities of the words given everything that came before. If it's not obvious: these scores are all negative, because we're taking the log of a number between 0 and 1, and a higher score is better, because you want a higher probability of the hypothesis according to the language model. So the idea is that we use this score, and the search algorithm, to search for high-scoring hypotheses, tracking the top k on each step. I'm going to show you a detailed example in a moment, but the important thing to know is that beam search is not guaranteed to find an optimal solution. Exhaustive search, where you enumerate all V-to-the-t possible translations, is guaranteed to find the optimal solution, but it's completely infeasible because it's so expensive. Beam search is not guaranteed to find the optimal solution, but it is much more efficient than exhaustive search, of course. Okay. So here's an example of beam search decoding in action. Let's suppose the beam size k is 2, and as a reminder, this is the score we apply to a partial translation, which is a hypothesis. We start off with our START token, and the idea is that we're going to compute the probability distribution of what word might come next. Having computed that probability distribution using our seq2seq model, we take the top k, that is, the top two, possible options. Let's suppose the top two are the words 'he' and 'I'. We can compute the scores of these two hypotheses using the formula above: it's just the log probability of the word given the context so far. Here, let's say 'he' has a score of -0.7 and 'I' has a score of -0.9, which means 'he' is currently the best one. Okay. So we have our k hypotheses, and for each of those, we find the top k words that could come next and calculate their scores. This means that for both 'he' and 'I' we find the top two words that could come next, and for each of these four possibilities, the score of the hypothesis is the log probability of the new word given the context so far, plus the score so far, because you can accumulate the sum of log probabilities; you don't have to compute it from scratch each time. Here you can see we have these four possibilities, and the top two scores are -1.6 and -1.7, which means 'hit' and 'was' are the two best ones. So of these k-squared = 4 hypotheses, we keep only the k = 2 top ones, and then we just keep doing the same thing: for those two, we expand to get the next candidates, compute the scores, keep the two best ones and discard the others, and then expand again. We keep doing this again and again, expanding and keeping the top k, until we get some kind of finished translation. I'm going to tell you more in a moment about what exactly the stopping criterion is, but let's suppose that we stop here.
Looking at the four hypotheses we have on the far right, the one with the top score is the 'pie' one, with -4.3. So if we decide to stop now, and take this as the top hypothesis, then all we need to do is backtrack through this tree to find the full translation, which is 'he hit me with a pie'. All right. So let me tell you in more detail how exactly we decide when to stop. If you remember, in greedy decoding we usually just keep decoding until the model produces the END token. For example, your model produces the sequence 'he hit me with a pie' END; I guess it doesn't produce START, you give it START. The problem in beam search decoding is that you're considering k different hypotheses at once, and those hypotheses might produce END tokens at different times, so there's no one obvious place to stop. What we do in practice is: when a hypothesis produces the END token, we regard that hypothesis as complete and place it aside, in a collection of completed hypotheses. We take it out of beam search; we no longer keep exploring it, because it's finished. And you continue exploring other hypotheses with beam search. So the remaining question is when to stop doing beam search, when to stop iterating through this algorithm. There are multiple possible stopping criteria, but two common ones are these. You might say we're going to stop doing beam search once we reach time step T, where T is some predefined threshold that you choose: for example, stop after 30 steps, because we don't want any output sentences longer than 30 words. Or you might say we're going to stop once we've collected at least n completed hypotheses: for example, 'I want at least 10 complete translations before I stop doing beam search.' Okay. So what's the final thing you have to do? We've finished doing beam search, we have this collection of completed hypotheses, and we want to choose the top one, the one we're going to use as our translation. So how do we select the one with the highest score? You might think this is simple, given that all of these hypotheses already have scores attached. But if we just look at this formula again, for what the score of each hypothesis is, can anyone see a problem? If we have our set of hypotheses, and we're choosing the top one based on the best score, can anyone see a problem? Yeah. [inaudible] The answer was: you're going to end up choosing the shortest one. The problem here is that longer hypotheses have lower scores in general, because you're multiplying more probabilities, so you're getting a smaller overall value; or, since we're adding log probabilities, a more negative value. It's not quite that you will definitely choose the shortest hypothesis, because a shorter one could still have a lower score overall, but there is definitely going to be a bias toward shorter translations, because in general they have higher, less negative, scores. So the way you can fix this is pretty simple: you just normalize by length. Instead of using the score from above, you're going to use the score divided by the length of the hypothesis, and then you use this normalized score to select the top one.
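Here is a compact sketch of the whole procedure, including the length-normalized final selection. The `step_logprobs` function is a hypothetical stand-in for one decoder step, and the fixed maximum length is one of the two stopping criteria mentioned above.

```python
def beam_search(step_logprobs, k=2, max_len=30, START=1, END=2):
    """step_logprobs(prefix) -> {word_id: log P(word | prefix, x)}."""
    beams = [([START], 0.0)]       # (hypothesis, summed log-probability)
    completed = []                 # hypotheses that produced END
    for _ in range(max_len):
        candidates = []
        for hyp, score in beams:
            for w, lp in step_logprobs(hyp).items():
                candidates.append((hyp + [w], score + lp))  # accumulate
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for hyp, score in candidates[:k]:      # keep only the k best
            if hyp[-1] == END:
                completed.append((hyp, score)) # set finished ones aside
            else:
                beams.append((hyp, score))
        if not beams:
            break
    # Final selection: normalize by length to counter the short-output bias.
    return max(completed or beams, key=lambda c: c[1] / len(c[0]))
```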
Any questions on this? Yeah. Can we train with the END token so that it is possible to [inaudible]? I didn't quite hear that: can you train with the END token? Yeah, like we had an END token. Yes, you train with the END token, if that's your question. The whole point is that you're relying on your language model, your decoder, to produce the END token in order to know when to stop, so you need to train it to produce the END token by giving it examples of training sentences with END tokens. Yeah. Why don't we use this normalized score [inaudible]? Great question. The question is: why don't we use this normalized score, the one at the bottom of the screen, during beam search in the first place? The reason why that's not necessary (you could, but it's not necessary) is that during beam search, we only ever compare the scores of hypotheses that have the same length. On each step, when we look at, say, the top k-squared candidates and choose which are the top k, we're comparing the scores of hypotheses that are all the same length: one, then two, then three, and so on. So it's true that the scores are getting lower and lower, but they do so together, because the hypotheses being compared are all, say, length five at the same step. Okay. So we now understand how you would train an NMT system, and how you would use your trained NMT system to generate translations using, let's say, beam search. So let's all take a step back and think about: what are the overall advantages of NMT in comparison to SMT? The first advantage is just better performance. NMT systems tend to give better output than SMT systems in several ways. One is that the output often tends to be more fluent, probably because RNNs are particularly good at learning language models, as you learned last week. Another way they're better is that they often use the context better; that is, they're better at conditioning on the source sentence and using that to shape the output. They're also often more able to generalize what they learn about phrases and how to translate them: if the system sees an example of how to translate a certain source phrase, and later sees a slightly different version of that phrase, it's more able to generalize what it learned about the first phrase than an SMT system would be. Another big advantage of NMT compared to SMT, which we talked about before, is that it's a single neural network that can be optimized end-to-end. The advantage here, I suppose, is primarily simplicity and convenience: there are no subcomponents that need to be individually optimized. Another big advantage is that it requires much less human engineering effort. Compared with all the different things people had to do to build big, powerful SMT systems, there's relatively little engineering effort for NMT. NMT is certainly not easy, but it's less complicated than SMT. In particular, there's no feature engineering: you don't have to define which features of linguistic phenomena you want to capture; you can mostly just view the input as a sequence of words, although there are different views on that. Lastly, a great thing about NMT is that you can use pretty much the same method for all language pairs.
So if you've built your French-to-English translation system and now you want to build a Spanish-to-English one, you can probably use basically the same architecture and the same method, as long as you can find a big enough parallel corpus of Spanish-English. All right. So what are the remaining disadvantages of NMT? Compared to SMT, there are some. One is that NMT is less interpretable. What I mean by this is: you feed your source sentence into the neural network, out comes some target sentence, and you don't really have any way to figure out why that happened. In particular, if the target sentence contains some kind of error, you can't really look at the neurons and understand what happened; it's pretty hard to attribute errors. This means NMT systems are pretty hard to debug. By comparison, SMT systems were more interpretable, in that you had all of these different subcomponents doing different jobs, and you were more able to inspect them. They weren't neurons; often they would be, you know, probabilities of certain words given other words, and so on. That's by no means easy to interpret, but it was at least more interpretable than NMT. Another disadvantage is that NMT is pretty difficult to control. For example, if your NMT system is making a particular error, it's not very easy for you, the programmer, to specify some kind of rule or guideline that you want the NMT system to follow. Say you want to insist, 'always translate this word in this way when this other thing is present': that's not particularly easy to impose as a rule on the NMT system, because you can't easily control what it's doing on a step-by-step basis. Sometimes you can apply post-processing rules, but overall, it's harder than you'd expect to impose even a fairly simple constraint. This means NMT has some safety concerns, in fact. Let's say you don't want your NMT system to say bad things: it's pretty hard to actually put controls in place to stop it from saying things you don't want it to say. On the level of never saying particular bad words, sure, you can remove them from the vocabulary, but overall these systems are pretty hard to control, and we're actually going to see some examples of NMT systems doing things their designers certainly didn't intend. Okay. So how do we evaluate MT? Every good NLP task needs an automatic metric so that we can measure our progress. The most commonly used evaluation metric for MT is called BLEU, which stands for Bilingual Evaluation Understudy. The main idea is that BLEU compares the translation produced by your machine translation system to one or maybe several human-written translations of the same sentence, and computes a similarity score based on n-gram precision. By n-gram precision, I mean you look at all the 1-, 2-, 3-, and 4-grams that appear in your machine-written translation and your human-written translations.
And then n-gram precision is basically asking: of all the n-grams that appeared in the machine-written translation, how many appeared in at least one of the human-written translations? Another thing you need to add to BLEU is a brevity penalty: you get a lower BLEU score if your system translation is significantly shorter than all of the human-written translations. The reason you need to add this is that n-gram precision alone doesn't really punish using fewer words. You might try to maximize n-gram precision by being very conservative and writing short sentences that only contain words you're really sure about, and then you'd get a good precision score. But that doesn't make a good translation, because you're probably missing a bunch of information that you needed to translate from the source sentence. That's why you need to add the brevity penalty. So overall, BLEU is very useful, because we need an automatic metric in order to measure progress; you can't measure progress on human evaluation alone, because it takes too long. But of course it's pretty imperfect. For example, think about how there are many valid ways to translate a sentence. At the very beginning of this lecture, I asked how to translate that sentence by Rousseau, and there were at least a few different options that came up. So if there are many valid ways to translate a sentence, how does BLEU recognize that? BLEU rewards sentences that have high n-gram overlap with one or some of the human-written translations; but if your model writes one valid translation and the humans wrote a different valid translation, and they don't have high n-gram overlap, then BLEU is going to give you a low score. You're going to learn about BLEU in detail in Assignment 4, which has a full mathematical description of the BLEU score, so I'm not going to give that now; you'll get to think about the ways in which BLEU is imperfect but useful. Yeah. So would a 1-gram be a one-to-one equivalency? What? Would a 1-gram be a one-to-one equivalency? The question is: would a 1-gram be a one-to-one equivalency? I'm not sure I understand the question; are you asking about alignment, or something else? Just trying to get an idea of how the n-gram checks are done: is it doing all n-gram permutations, or a window of size one? Well, for a 1-gram it doesn't make a difference, because you can't permute a 1-gram. So you're asking whether, for 4-grams, it checks for that exact sequence of four, or for any permutation of it. It's exact sequences: by definition, n-grams are sequences, so the order matters. Okay. All right. So that's how you evaluate machine translation.
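As a rough illustration of those two ingredients, here is a simplified, single-reference sketch. This is only an illustration: the exact BLEU formula, with multiple references, is the one spelled out in Assignment 4.

```python
import math
from collections import Counter

def ngrams(words, n):
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def bleu_ish(candidate, reference, max_n=4):
    """A BLEU-like score for one candidate against a single reference."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        # Clipped n-gram precision: candidate n-grams also in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if any(p == 0 for p in precisions):
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * geo_mean
```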
So now that you understand this metric for measuring our progress on machine translation, I can show you this graph, and you'll understand what it means. This is a bar plot that shows, in a nutshell, how NMT changed the machine translation landscape in just a few years. In this plot, BLEU score is the y-axis, and you have two different types of SMT, which are the red and dark blue bars. What's happening is that in 2015, neural MT enters the scene for the first time and isn't doing as well as SMT, and then the next year it's suddenly outperforming SMT. These are BLEU scores on some particular fixed dataset, a shared task that many people submitted systems for. The main thing to notice here is that the progress being made by SMT systems was a fairly gentle year-by-year increase in BLEU, and then in just one year, NMT arrives and is suddenly making much more rapid progress. So I think this justifies why the picture of the meteor maybe isn't too dramatic here. You could in fact call NMT the biggest success story of NLP deep learning, because if you think about the history of this, NMT went from being a fringe research activity in 2014 to being the leading standard method for machine translation in the world in 2016. In particular, in 2014, the first seq2seq paper was published, and in 2016, Google Translate switched from SMT to NMT. That's a pretty remarkable turnaround for just two years. And this is amazing not just because it was a quick turnaround, but also if you think about the level of human effort involved. These SMT systems, for example the Google Translate SMT system, were built by doubtless hundreds of engineers over many years, and that SMT system was outperformed by an NMT system trained by relatively few, like a handful of engineers, in a few months. I'm not diminishing how difficult it is to build NMT systems, and certainly I'm sure Google's NMT system today is built by more than a handful of engineers in a few months; I'm sure it's a very big operation now. But when NMT began to outperform SMT, it was pretty remarkable that it could do so, given the amount of effort involved. Yeah. Given the [inaudible] cons of NMT, has there been research on combining the two, and if there is, what does that look like? Yeah, great. The question is: given that we know there are some disadvantages of NMT even in comparison to SMT, is there any work on combining the two? So yes, I think there is. There's a lot of NMT research ongoing, and people sometimes focus on these particular shortcomings; there's a lot of work taking techniques, ideas, and wisdom from the many decades of SMT research, and integrating them into the new NMT paradigm. Okay. So is machine translation solved? Can we all go home? I think the answer is clearly no. NMT is definitely not doing machine translation perfectly. So, just to highlight some of the difficulties that remain with NMT. One is out-of-vocabulary words. This is a kind of basic problem, but it is pretty tricky: what do you do if you're trying to translate a sentence that contains a word that's not in your source vocabulary, or trying to produce a word that's not in your target vocabulary? There's certainly been lots of work on this, and you're going to hear later in the class how you might attack it, for example with sub-word modeling. But it's a significant problem. Another one is domain mismatch. Let's suppose you train your machine translation system on a bunch of fairly formal text, say Wikipedia or something like that.
But then you try to deploy it to translate informal text, like people chatting on Twitter or something. Then often you'll find that it doesn't perform very well on this different domain, because you've got a domain mismatch. That's quite a big problem. Another one is maintaining context over longer text. Everything we've talked about so far has assumed that you're just translating a single sentence to a single sentence, with no wider context. But if you want to use a machine translation system to translate a whole news article, and maybe even a book, then you're probably going to want to use the context that came in previous sentences in order to translate things correctly in the current sentence. This is an active area of research: how can you get an NMT system to condition on larger pieces of context without it becoming too expensive, and so on? Another difficulty is low-resource language pairs. Everything we've talked about so far has assumed you have access to a very large parallel corpus, but what if you don't? What if you're trying to translate to or from a language that has relatively little text available, online for example? That can be pretty difficult. Here are a few examples of machine translation screwing up, with specific errors. First, an example of how common sense is really difficult for NMT systems. On the left, we have the English phrase 'paper jam', which means when your printer gets jammed up with paper and it's all tangled inside. On the right, we have a very literal translation of that into Spanish, essentially saying jam, edible jam, made of paper, which clearly isn't the right interpretation. So here we have an NMT system doing a very literal translation, with clearly no notion of common sense: you can't make jam from paper. Here's another example: NMT can pick up biases in the training data. We already talked about this at the word embedding level, the representation of words, but it can also be a problem at the sentence level, when you're translating things. In this example, on the left we have two sentences in Malay that roughly mean 'they work as a nurse' and 'they work as a programmer'. The point is that on the left, there is no information about gender in the pronouns; but when it gets translated to English, suddenly gender comes out of nowhere: 'she works as a nurse' and 'he works as a programmer'. This is likely happening because in the training data, there were more examples of female nurses and male programmers, so you can understand, from a maximize-the-objective machine learning point of view, why the English language model learned to do that. But the problem here is that this isn't good machine translation: the system is making up information that was not present in the source sentence. That's certainly an error a machine translation system shouldn't make, because it's simply inaccurate; and even worse, it's propagating gender roles. Here's another pretty weird example. [LAUGHTER] What is happening here? On the left, we have a nonsense sentence, just a syllable repeated, and we're supposedly translating from Somali. We ask for a translation into English, and out of nowhere we get this:
'As the name of the Lord was written in the Hebrew language, it was written in the language of the Hebrew nation', and you might be thinking, where on earth did that come from? In fact, this got reported in the media as, you know, 'Google Translate wants to convert you to its religion', or whatever. [LAUGHTER] For sure, it is very startling, but there's actually quite a reasonable explanation. What's going on here is that, often for low-resource languages, such as for example Somali, one of the best sources of parallel text is the Bible. So you train, for example, Somali-to-English using the Bible as a training text, maybe among other texts. Okay, that's the first puzzle piece. The other puzzle piece is the nonsensical input. The input isn't really Somali, or any kind of text; it's just the same syllable over and over, so the NMT system doesn't really have anything sensible to condition on. It's basically nonsense, it's just noise. So what does the NMT system do? It can't really condition on the source sentence, so it just uses the English language model: you can think of it as the decoder RNN's English language model going into autopilot and generating random text, kind of like we saw last week, when a language model trained on Obama's speeches or Harry Potter would just generate text in that style. That's kind of what's happening here with the Bible, because we don't have any useful information from the sentence on the left. So this is an example of why neural machine translation in particular makes these kinds of errors: because the system is uninterpretable. You don't know this is going to happen until it happens, and perhaps Google didn't know this was going to happen until it happened and got reported. This is one downside of uninterpretability: really weird effects can happen, you don't see them coming, and it's not always even easy to explain why they happened. Yeah? [inaudible] The question is: what happens if you did translate from Irish? I suppose that's the part where Google tries to auto-detect the language; maybe it thinks that 'ag ag ag' is more like Irish than Somali. [LAUGHTER] I imagine if you did set Irish-to-English, there's probably more training data for Irish to English, so maybe it wouldn't be so Bible-focused. Yeah, and there are a lot of examples of these online, where you type different kinds of nonsense syllables in different languages. So there are a lot of challenges remaining in NMT, and the research continues. NMT, I think, remains one of the flagship tasks for NLP deep learning; in fact, NMT research has pioneered many of the successful innovations of NLP deep learning in general. Today, in 2019, NMT research continues to thrive; there are still many, many papers published on NMT all the time. And in fact, researchers have found lots of improvements to the fairly vanilla seq2seq models that I've shown you today. But there is one improvement that is so integral to seq2seq that you could regard it as the new vanilla, and that's the improvement we're going to learn about today: it's called attention. Okay. So section three is on attention. What is attention? First, I'm going to motivate why we need this thing called attention. Let's look at this diagram of sequence-to-sequence that we saw before.
And remember that we assumed this encoding of the source sentence, the one in the orange box, is going to represent the whole sentence. Can anyone volunteer a problem you can see with this architecture? In particular, perhaps, a problem with the idea that that single vector is the encoding of the source sentence. Yeah? [inaudible] Okay, so the answer is something like: you're only looking at one word, the last word in the source sentence, and you're not seeing more information. Yeah, it's something like that. Any other ideas? Yep. We might have lost information from the beginning of the sentence by the time you get to the end. Yeah, you might have lost information from the beginning of the sentence by the time you get to the end, especially if it was longer than four words. Right. I think these are different ways of saying a similar idea, which is that we have a kind of informational bottleneck. We're forcing all of the information about the source sentence to be captured in this single vector, because that's the only thing that gets given to the decoder. If some information about the source sentence isn't in that vector, then there's no way the decoder is going to be able to translate it correctly. So this is an informational bottleneck: it puts too much pressure on this single vector to be a good representation of the source sentence. And this is the motivation for attention. Attention is a neural technique, and it provides a solution to the bottleneck problem. The core idea is that on each step of the decoder, you use a direct connection to the encoder to focus on a particular part of the source sequence. First I'm going to show you what attention is via a diagram, a kind of intuitive explanation, and then I'll show you the equations later. So here's how sequence-to-sequence with attention works. On the first step of our decoder, we have our first decoder hidden state. What we do is take the dot product between that decoder hidden state and the first encoder hidden state, and we get something called an attention score, which I'm representing by a dot; it's a scalar. In fact, we take the dot product between the decoder hidden state and all of the encoder hidden states, so we get one attention score, one scalar, for each of the source words, effectively. Next, we take those four scores and apply the softmax function to them, and we get a probability distribution, which I'm representing as a bar chart. We call this the attention distribution, and it sums to 1. Here you can see that most of the probability mass is on the first word, and that kind of makes sense, because our first source word essentially means 'he', and we're going to be producing the word 'he' first in our target sentence. Once we've got this attention distribution, we're going to use it to produce something called the attention output. The idea is that the attention output is a weighted sum of the encoder hidden states, where the weighting is the attention distribution. I've drawn dotted arrows that go from the attention distribution to the attention output; probably there should be dotted arrows also from the encoder RNN, but that's hard to depict.
But the idea is that you're summing up the encoder RNN hidden states, weighting each one according to how much of the attention distribution is on it. This means your attention output, which is a single vector, will mostly contain information from the hidden states that had high attention; in this case, mostly information from the first hidden state. After you do this, you're going to use the attention output to influence your prediction of the next word. What you usually do is concatenate the attention output with your decoder hidden state, and then use that concatenated pair the way you would have used the decoder hidden state alone before. That way you get your probability distribution, y-hat-1, over what comes next, and as before, you can use that to sample your next word. On the next step, you just do the same thing again: you've got your second decoder hidden state, you take dot products with all of the encoder hidden states, and you take softmax over those to get the attention distribution. Here you can see the attention distribution is different: we're putting more attention on the word 'entarté', because we're about to produce the word 'hit', but we're also attending a little to the second word, 'a', because that tells us that 'hit' is past tense. So a cool thing happening here is that we're getting a soft alignment. If you remember, when we looked at alignment in SMT systems, it was mostly a hard, binary, on-or-off thing: either these words are aligned or they're not. Here you have a much more flexible, soft notion of alignment, where each word has a distribution over the corresponding words in the source sentence. Another thing to note, kind of a side note: sometimes we take the attention output from the previous step and feed it into the decoder again, along with the usual word. That would mean taking the attention output from the first step, concatenating it to the word vector for 'he', and using that in the decoder. The reason for this is that it's sometimes useful to have the information from the previous step's attention available on the next step. I'm telling you this because it's something we do in Assignment 4, and it's a fairly common technique, but sometimes people don't do it. Okay. So the idea is that you just do this attention computation on every step, and on each step you're going to be attending to different things. In our example, on the third step we look at "m'", which means 'me', when we produce 'me', and on the last three steps we're probably mostly just looking at the fertile word 'entarté' to produce 'hit me with a pie'. I'm going to keep going because we don't have a lot of time. So here are the equations to describe attention. I think it's probably easier to look at these in your own time later rather than in the lecture now, but these are the equations that say the same thing as the diagram. You have your encoder hidden states h_1 up to h_N, and on timestep t of the decoder, you also have a decoder hidden state, s_t. You get the attention scores, which we call e^t, by taking the dot product of your decoder hidden state with each of the encoder hidden states, and that gives you a vector the same length as the source sentence, because you've got one score per source word. Next, you take softmax over these scores to get the attention distribution, which sums to 1, and we call that alpha. Then you use alpha to take a weighted sum of the encoder hidden states, and that gives you your attention output: the attention output, which we call a, is a vector the same size as your encoder hidden states. Lastly, you take your attention output a, concatenate it with your decoder hidden state, and proceed with that as you were taught before in the no-attention model.
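Written out, following the symbols used above (encoder hidden states h_i and decoder hidden state s_t, both of dimension h), the equations are:

```latex
e^t = [\, s_t^\top h_1, \;\ldots,\; s_t^\top h_N \,] \in \mathbb{R}^N
\qquad
\alpha^t = \operatorname{softmax}(e^t) \in \mathbb{R}^N
\qquad
a_t = \sum_{i=1}^{N} \alpha^t_i \, h_i \in \mathbb{R}^h
```

and finally the concatenation [a_t; s_t] is used in place of s_t alone.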
So attention, if it's not clear, is pretty cool, and it has a number of advantages. One advantage is that attention significantly improves NMT performance. The main reason it improves performance is that it turns out to be super useful to allow the decoder to focus on certain parts of the source sentence while it's translating. And you can see why this makes sense: there's a very natural notion of alignment, and if you can focus on the specific word or words you're translating, you can probably do a better job. Another reason attention is cool is that it solves the bottleneck problem. We noted the problem with having a single vector that has to represent the entire source sentence, as the only way information can pass from encoder to decoder: if that encoding isn't very good, then you're not going to do well. By contrast, with attention, the decoder can look directly at the encoder and the source sentence, and translate without the bottleneck. Another great thing about attention is that it helps with the vanishing gradient problem, especially if your sentences are quite long. The reason attention helps is that you have these direct connections between the decoder and the encoder, across many time steps, like a shortcut connection. Just as we learned last time about skip connections being useful for reducing vanishing gradients, here it's the same notion: long-distance direct connections that help the gradients flow better. Another great thing about attention is that it provides some interpretability. If you look at the attention distribution once you've produced your translation, you can see what the decoder was focusing on at each step. For example, if we run our system and translate our running example, we can produce a plot like this that shows the attention distribution, where dark means high attention and white means low attention. You might see something like this, where the model was focusing on different words at different steps. This is basically the same kind of plot we had earlier for the hard notion of alignment in SMT, except that we have more flexibility, a softer version of alignment: for example, when we produce the English word 'hit', perhaps we were mostly looking at 'entarté', but also a little at 'a'. So this means we're getting alignment for free. And the reason I say for free is that, if you remember, in the SMT systems the whole point was that you had to learn an alignment system deliberately and separately.
You had to define the notion of alignment, define a model for calculating the probabilities of different alignments, and train it. Here, we never told the NMT system about alignment. We never explicitly trained an alignment system, and we never had a loss function that tells you how good your alignment is. We just gave the NMT system the apparatus to do something like alignment and told it to minimize the cross-entropy loss for machine translation, and the network just learned alignment by itself. I think this is the coolest thing about attention: it learns structure in a somewhat unsupervised way. Okay, so in the last few minutes, I'm going to generalize the notion of attention, because it turns out that attention is actually a very general deep learning technique that you can apply in lots of different circumstances. You've seen that attention is a great way to improve the sequence-to-sequence model for MT, but you can also use attention for architectures that aren't seq2seq and tasks that aren't MT. To understand this, I'm going to restate attention as a more general definition. Suppose you have a set of values, each of which is a vector, and you also have a single vector that we call the query. Then attention is a way to compute a weighted sum of the values, where the weighting depends on the query. We often phrase this by saying that the query attends to the values: you have all this information in the values, and the query is somehow determining how it's going to pay attention to them. For example, in seq2seq, the decoder hidden state on a particular time step is the query, and it attends to all the encoder hidden states, which are the values. Here are two alternative ways to understand this intuitively. One is to think of the weighted sum as a selective summary of the information in the values. I say selective because how much you choose to draw from each value depends on the attention distribution, which in turn depends on the query; the query is determining how much you select from the different values. This is similar to the LSTMs you learned about earlier this week: LSTMs are built on the idea of a gate that determines how much information should come from different elements, and the gate depends on the context. The strength of LSTMs comes from deciding, based on the context, where you're going to draw information from, and this is the same idea. The second way to think about attention is as a way to obtain a fixed-size representation from an arbitrary set of representations. By an arbitrary set, I mean you have this set of vectors called the values, and you could have 10 values, 100 values, any arbitrary number of these vectors. Attention gives you a way to get a single-vector summary of that set, namely the attention output, using your query. Okay, the last thing is that there are actually several variants of attention, and this is something we're going to look at a little in Assignment 4.
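As a sketch of this more general view (my own function names, not an official API), attention can be written as a function of a query, a set of values, and a scoring function:

    import torch
    import torch.nn.functional as F

    def attend(query, values, score_fn):
        # query: (d,), values: (N, h)
        scores = score_fn(query, values)   # (N,) one score per value
        alpha = F.softmax(scores, dim=0)   # attention distribution over the values
        return alpha @ values              # (h,) fixed-size summary of the value set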
So in our more general setting, we have some values and a query. Doing attention always involves computing the attention scores, then applying a softmax to get the attention distribution, and then using that distribution to take a weighted sum. That's always the outline of how attention works. The part that can be different is step one: there are multiple ways you can compute the scores. So, last slide: here are the different ways you can compute the scores. The first one, which you've already seen today, is basic dot-product attention. The idea is that the score for a particular value h_i is just the dot product of the query and that value. In particular, this assumes that the size of your query vector and the size of your value vectors are the same, because you're taking a dot product. Another version of attention is called multiplicative attention. Here the score for the value h_i is a bilinear function of the query and that value: we put a weight matrix in the middle, and that's a learnable parameter; you're learning the weight matrix that gives you the most useful attention scores. The last one is called additive attention. Here you get the score for the value h_i by applying a linear transformation to both the value and the query, adding them together, and putting the result through a non-linearity like tanh. Lastly, you take that vector and take the dot product with a weight vector to get a single number, which is the score. So here you've got two different weight matrices and a weight vector as the learnable parameters. One thing that's different here is that there's an additional hyperparameter, the attention dimensionality: it's the height of W_1 and W_2 and the length of v, and you can choose what size that dimension is. It's like a hidden layer in the computation, so you can decide how big you want that intermediate representation to be. I'm not going to say more about that, because one of the questions in Assignment 4 is to think about the relative advantages and disadvantages of these variants; see the sketch of the three scoring functions below. Okay, so here's a summary of today, and this time it really is the last slide. We learned about the history of MT; we learned how, in 2014, neural MT revolutionized MT; we learned how sequence-to-sequence, which uses two RNNs, is the right architecture for NMT; and lastly, we learned how attention is a way to focus on particular parts of the input. All right, thanks.
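Here is a minimal sketch of the three scoring functions just described, following the general attend() pattern above; the parameter shapes are assumptions based on the lecture's description, not the assignment's exact interface, and in a real model W, W1, W2, and v would be nn.Parameters.

    import torch

    h, d, d_a = 256, 256, 128   # value size, query size, attention dimensionality
    W  = torch.randn(d, h)      # multiplicative attention weight matrix
    W1 = torch.randn(d_a, h)    # additive attention: transforms each value
    W2 = torch.randn(d_a, d)    # additive attention: transforms the query
    v  = torch.randn(d_a)       # additive attention: final weight vector

    def dot_product_score(q, values):
        # e_i = q^T h_i; requires d == h
        return values @ q

    def multiplicative_score(q, values):
        # e_i = q^T W h_i, a bilinear function with learnable W
        return values @ (W.T @ q)

    def additive_score(q, values):
        # e_i = v^T tanh(W1 h_i + W2 q); d_a acts like a hidden-layer size
        return torch.tanh(values @ W1.T + W2 @ q) @ v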
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_20_Future_of_NLP_Deep_Learning.txt
Let's get started. Welcome to the very final lecture of the class. I hope you're all surviving the last week and wrapping up your projects. Today we're going to be hearing about the future of NLP and deep learning. Chris is still traveling, so today we have Kevin Clark, who is one of the PhD students in the NLP lab; he was also one of the head TAs for the class last year, so he's very familiar with the class as a whole. Take it away, Kevin. Okay, thanks, Abby. Yeah, it's great to be back after being a TA last year. I'm really excited today to be talking about the future of deep learning and NLP. Obviously, trying to forecast the future for deep learning, or anything in that space, is really difficult because the field is changing super quickly. As one reference point, let's look at what deep learning for NLP looked like about five years ago. A lot of ideas that are now considered pretty core techniques in deep learning for NLP didn't even exist back then: things you learned in this class like seq2seq, attention mechanisms, large-scale reading comprehension, even frameworks such as TensorFlow and PyTorch. The point I want to make with this is that, because of this, it's really difficult to look into the future and say what things are going to be like. What I think we can do, though, is look at areas that right now are really taking off, areas where there's been a lot of recent success, and project that those same areas will likely be important in the future. In this talk I'm going to be mostly focusing on one key idea: leveraging unlabeled examples when training our NLP systems. I'll be talking a bit about doing that for machine translation, both to improve the quality of translation and even to do translation in an unsupervised way. That means you don't have paired sentences with their translations; you try to learn a translation model only from monolingual corpora. The second thing I'll talk a little about is OpenAI's GPT-2, and in general the phenomenon of really scaling up deep learning models. I know you saw a little of this in the lecture on contextual representations, but this will be a bit more in depth. I think these new developments in NLP have had some pretty big impacts beyond even the technology we're using; in particular, they are starting to raise more and more concerns about the social impact of NLP, both in what our models can do and in where people are looking to apply these models. I think that really has some risks associated with it, in terms of security and also in areas like bias. I'm also going to talk a bit about future areas of research; these are mostly research areas that over the past year have really developed into promising directions, and I expect they will continue to be important in the future. Okay, to start with, I want to ask this question: why has deep learning been so successful recently?
I like this comic: there's a statistical learning person who has some really complicated, well-motivated method for the task they care about, and the neural net person just says, "stack more layers." The point I want to make here is that deep learning has not been successful recently because it's more theoretically motivated or more sophisticated than previous techniques. In fact, I would say that a lot of older statistical methods have more of a theoretical underpinning than some of the tricks we do in deep learning. The thing that makes deep learning so successful in recent years is its ability to scale: as we increase the size of the data and the size of the models, neural nets get a really big boost in accuracy in ways other approaches do not. If you look back to the '80s and '90s, there was actually plenty of research in neural nets going on, but it didn't have the hype around it that it does now, and that seems likely to be because in the past there weren't the same resources in terms of compute and data. Only now, after we've reached an inflection point where we can really take advantage of scale in our deep learning models, have we started to see it become a really successful paradigm for machine learning. If you look at the big deep learning success stories, I think you can see this idea play out. Here are three of what are arguably the most famous successes of deep learning. There's image recognition, where people used to classify images with highly engineered features and neural nets are now much superior to those methods. Machine translation has really closed the gap between phrase-based systems and human-quality translation; it's widely used in things like Google Translate, and the quality has gotten a lot better over the past five years. Another example with a lot of hype around it is game playing: there's been work on Atari games, there's been AlphaGo, and more recently AlphaStar and OpenAI Five. In all three of these cases, underlying the successes are really large amounts of data. For image recognition there's the ImageNet dataset, which has 14 million images; machine translation datasets often have millions of examples; and for game playing you can generate essentially as much training data as you want, just by running your agent within the game over and over again. If we look at NLP, the story is quite a bit different for a lot of tasks. Even for pretty core, popular tasks, say reading comprehension in English, datasets like SQuAD are on the order of 100,000 examples, which is considerably less than the millions or tens of millions of examples those previous successes have benefited from. And that's of course only for English. There are thousands of other languages, and this is, I think, a problem with NLP data as it exists today: the vast majority of data is in English, when in reality fewer than 10% of the world's population speak English as their first language.
These problems with small datasets are only compounded if you look at the full spectrum of languages that exist. So what do we do when we're limited by data but want to take advantage of deep learning's scale and train the biggest models we can? The popular solution, which has especially had recent success, is using unlabeled data, because unlike labeled data, unlabeled data is very easy to acquire for language. You can just go to the Internet, you can go to books, and you can get lots of text, whereas labeled data usually requires at least crowdsourcing examples, and in some cases it even requires someone who's an expert in something like linguistics to annotate the data. Okay, so this first part of the talk is going to apply this idea of leveraging unlabeled data to improve our NLP models to the task of machine translation. Let's talk about machine translation data. It's true that quite large datasets do exist for machine translation, but those datasets don't exist because NLP researchers annotated text for the purpose of training their models. They exist because, in various settings, translation is done simply because it's useful: for example, the proceedings of the European Parliament, the proceedings of the United Nations, and some news sites that translate their articles into many languages. So the machine translation data we use to train our models is often more of a byproduct of existing cases where translation is wanted, rather than a full sampling of the sort of text we see in the world. That means, number one, it's quite limited in domain; it's not easy to find translated tweets unless you happen to work for Twitter. In addition, there are limitations in terms of the languages that are covered: for some languages, say European languages, there's a lot of translation data; for other languages there's much less. So in these settings where we want to work on a different domain, or with a low-resource language, we're limited by labeled data, but what we can do is pretty easily find unlabeled data. It's actually a pretty much solved problem, maybe not 100%, to look at some text and decide with good accuracy what language it's in; you can train a classifier to do that. That means it's really easy to find data in any language you care about, because you can go on the web, essentially search for data in that language, and acquire a large monolingual corpus. Okay, I'm now going into the first approach I'll talk about for using unlabeled data to improve machine translation models. This technique is called pre-training, and it's really reminiscent of ideas like ELMo. The idea is to pre-train by doing language modeling. If we have two languages we're interested in translating between, we collect large datasets for both of those languages, train two language models, one on each language's data, and then use those pre-trained language models as the initialization for a machine translation system.
So the encoder gets initialized with the weights of the language model trained on the source-side language, and the decoder gets initialized with the weights of the language model trained on the target-side language. This improves the performance of your model because during pre-training we hope our language models learn useful information, such as the meanings of words or the structure of the language they're processing, and down the line this helps the machine translation model when we fine-tune it. Let me pause here and ask if there are any questions; in general, feel free to ask questions throughout this talk. Okay. So here is a plot showing some results of this pre-training technique, for English-to-German translation. The x-axis is how much training data, meaning supervised, labeled training data, you provide these models; of course they also have large amounts of monolingual data for the pre-training step. You can see that this works pretty well: you get about two BLEU points of improvement, which is the red line above the blue line, from doing this pre-training, and not too surprisingly, the gain is especially large when the amount of labeled data is small. A sketch of this initialization step follows below. There is a problem with pre-training I want to address, which is that you have these two separate language models and there's never really any interaction between the two when you're running them on the unlabeled corpora. So here's a simple technique that tries to solve this problem, called self-training. The idea is: given a sentence from our monolingual corpus, in this case the English sentence "I traveled to Belgium," we won't have a human-provided translation, but what we can do is run our machine translation model and get a translation in the target language. Since it's from a machine learning model it won't be perfect, but we can hope that our model can still learn from this kind of noisily labeled example. So we treat our original monolingual sentence and its machine-provided translation as though it were a human-provided translation, and train our model as normal on this example. I think this seems pretty strange as a method when you first see it, because it seems really circular: the translation the model is being trained to produce is exactly what it already produces to begin with, because the translation came from our model in the first place. In practice, this is not a technique that's very widely used, due to this problem, but it motivates another technique called back-translation. Back-translation is a very popular solution to that problem, and it's the method that has had a lot of success in using unlabeled data for translation. Here's the approach: rather than only having our translation system that goes from source language to target language, we're also going to train a model that goes from our target language back to our source language. So in this case, if at the end of the day we want a French-to-English model, we're going to start by actually training an English-to-French model. And then we can do something that's a lot like self-labeling.
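As promised above, here is a minimal sketch of the pre-training initialization idea, using a hypothetical LSTM language-model module whose shapes happen to line up with the seq2seq encoder and decoder; this illustrates the idea, not the exact recipe from the paper.

    import torch.nn as nn

    class LM(nn.Module):
        def __init__(self, vocab_size, hidden=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.rnn = nn.LSTM(hidden, hidden, batch_first=True)

    # Train src_lm on source-language text and tgt_lm on target-language text
    # (standard next-word prediction), then copy their weights into the
    # encoder and decoder of the translation model before fine-tuning on the
    # (much smaller) labeled parallel data.
    src_lm, tgt_lm = LM(32000), LM(32000)
    encoder, decoder = LM(32000), LM(32000)   # same shapes here for simplicity
    encoder.load_state_dict(src_lm.state_dict())
    decoder.load_state_dict(tgt_lm.state_dict())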
So we take an English sentence, run our English-to-French model, and translate it. The difference from what we did before is that we're going to switch the source and target sides: now the French sentence is the source sequence, and the target sequence is our original English sentence that came from the monolingual corpus, and we use that pair to train the machine translation system that goes the other direction, French to English. So why do we think this will work better? Number one, there's no longer this kind of circularity to the training, because what the model is being trained on is the output of a completely different model. Another thing that I think is pretty crucial here is that the translations the model is trained to produce, the things the decoder is actually learning to generate, are never bad translations. If you look at this example, the target sequence for our French-to-English model, "I traveled to Belgium," originally came from a monolingual corpus. I think intuitively this makes sense: if we want to train a good translation model, it's probably okay to expose it to noisy inputs, such as the output of an imperfect English-to-French system, but what we don't want to do is expose it to poor target sequences, because then it won't learn to generate in that language effectively. Any questions on back-translation before I get to results? Sure. [A student asks about the setup.] So this is assuming we have a large corpus of unlabeled data and we want to be using it to help our translation model. Does that make sense? Maybe you could clarify the question. [The student clarifies.] Yeah, that's right: we have a big corpus of English that includes the sentence "I traveled to Belgium," and we don't know the translations, but we'd still like to use this data. Yeah, another question. [A student asks how the two models avoid degenerating.] Yeah, that's a good question: how do you avoid both models blowing up, producing garbage, and just feeding garbage to each other? The answer is that there is some amount of labeled data here as well. On unlabeled data you do this, but on labeled data you do standard training, and that way you keep the models on track, because they still have to fit the labeled data. Yeah, another question. How do you schedule the training of the two models? Yeah, that is a good question, and it's basically almost like a hyperparameter you can tweak. A pretty common thing to do is to first train the two models only on labeled data, then do back-translation over a large corpus, and repeat that process over and over again: each iteration, you train on the labeled data, label some unlabeled data, and now you have more data to work with. But I think many kinds of scheduling would be effective here. Okay, another question. I'm curious about the evaluation: if you have a very good French-to-English model, you could try to look up the original source and see if it matches. Yeah, I'm not quite sure; are you suggesting going English to French to English and seeing if it matches? I see, yeah, that's a really interesting idea.
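Here's a minimal sketch of one round of back-translation under the setup above, with hypothetical model objects that have translate and train_on methods; real systems batch this and interleave it with supervised updates rather than looping sentence by sentence.

    # en2fr, fr2en: two translation models, already trained on the labeled pairs.
    # mono_en: monolingual English sentences (no translations available).

    def back_translation_round(en2fr, fr2en, mono_en, labeled_pairs):
        synthetic_pairs = []
        for en_sentence in mono_en:
            fr_guess = en2fr.translate(en_sentence)          # noisy source side
            synthetic_pairs.append((fr_guess, en_sentence))  # clean target side
        # Train French->English on the synthetic pairs plus the real labeled
        # data, so the model stays anchored to human translations.
        fr2en.train_on(synthetic_pairs + labeled_pairs)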
And we're actually going to talk a little about that idea, called cycle consistency, later in this talk. Okay, I'm going to move on to the results. So that's the method for using unlabeled data to improve translation; how well does it do? The answer is that the improvements were, at least to me, surprisingly good. This is English-to-German translation, from some work by Facebook: they used 5 million labeled sentence pairs, but they also used 230 million monolingual sentences, sentences without translations. You can see that compared to the previous state of the art they get six BLEU points of improvement, which, if you compare it to most previous research in machine translation, is a really big gain. Even something like the invention of the Transformer, which most people would consider a really significant research development in NLP, improved over prior work by about 2.5 BLEU points. Here, without any sort of fancy model design, just by using way more data, we get much larger improvements. Okay. An interesting question to think about is: suppose we only have our monolingual corpora, so we don't have any sentences that have been human-translated, just sentences in two languages. The scenario you can imagine is: suppose an alien comes down and starts talking to you in a weird alien language, and it talks a lot. Would you eventually be able to translate what it's saying into English, just by having a really large amount of data? I'm going to start with a simpler task than full-on translation with only unlabeled sentences: instead of sentence-to-sentence translation, let's start by only worrying about word-to-word translation. The goal here is: given a word in one language, find its translation, but without using any labeled data. The method we're going to use to try to solve this task is called cross-lingual embeddings. The goal is to learn word vectors for words in both languages, and we'd like those word vectors to have all the nice properties you've already learned about word vectors having, but we also want each word's vector to be close to the vector of its translation. I'm not sure if it's visible in this figure, but it shows a large number of English and, I think, German words, and you can see that each English word has its corresponding German word nearby in the embedding space. If we learn embeddings like this, then it's pretty easy to do word-to-word translation: we just pick an English word, find the nearest German word in this joint embedding space, and that gives us a translation for the English word. The key assumption we're going to use to solve this is that even though running word2vec twice gives you really different embeddings, the structure of the embedding space has a lot of regularity to it, and we can take advantage of that regularity to find an alignment between the two embedding spaces. To be more concrete, here is a picture of two sets of word embeddings.
In red we have English words, and in blue we have Italian words, and although the vector spaces right now look very different from each other, you can see that they have a really similar structure. You'd imagine the distances are kind of similar: the distance between cat and feline in the English embedding space should be pretty similar to the distance between gatto and felino in the Italian space. This motivates an algorithm for learning these cross-lingual embeddings. Here's the idea: we're going to try to learn what's essentially a rotation, such that we can transform our set of English embeddings so that they match up with our Italian embeddings. Mathematically, we're going to learn a matrix W such that if we take, say, the word vector for cat in English and multiply it by W, we end up with the vector for gatto in Italian. A detail here is that we constrain W to be orthogonal, which geometrically means that W only rotates the vectors in X; it's not going to do some other, weirder transformation. So this is our goal: to learn this W. Next I'm going to talk about how we actually learn it. There are actually a bunch of techniques for learning this W matrix, but here is one that I think is quite clever, called adversarial training. It works as follows: in addition to trying to learn the W matrix, we're also going to be trying to learn a model called a discriminator, which takes a vector and tries to predict whether that vector was originally an English word embedding or an Italian word embedding. In other words, thinking about the diagram, the discriminator is given one of these points and tries to predict whether it's a red point, so an English word originally, or a blue point. If we have no W matrix, this is a really easy task for the discriminator, because the word embeddings for English and Italian are clearly separated. However, if we learn a W matrix that succeeds in aligning all these embeddings on top of each other, then our discriminator will never do a good job; you can imagine it will never really do better than 50%, because given a vector for, say, cat, it won't know whether that's the vector for cat transformed by W or actually the vector for gatto, since in this case those two vectors are aligned on top of each other. During training, you alternate: first you train the discriminator a little, making it as good as possible at distinguishing the English from the Italian words, and then you train W, where the goal is to confuse the discriminator as much as possible. You want to reach a situation where you can't, with this machine learning model, figure out whether a word embedding was originally from English or was an Italian word vector. So at the end of the day, you have vectors that are aligned with each other. Any questions about this approach? Okay. There's a link to a paper with more details; there's actually a range of other tricks you can do, but this is the key idea.
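Here is a minimal PyTorch sketch of this adversarial game, assuming X_en and Y_it are batches of pre-trained English and Italian word vectors; the orthogonality constraint is approximated with a simple iterative update in the style of the MUSE paper, and all hyperparameters here are placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    d = 300
    W = nn.Parameter(torch.eye(d))            # the rotation we want to learn
    disc = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, 1))
    opt_w = torch.optim.SGD([W], lr=0.1)
    opt_d = torch.optim.SGD(disc.parameters(), lr=0.1)

    def step(X_en, Y_it):
        # 1) Train the discriminator: rotated English -> 1, Italian -> 0.
        logits = torch.cat([disc(X_en @ W.T.detach()), disc(Y_it)]).squeeze(1)
        labels = torch.cat([torch.ones(len(X_en)), torch.zeros(len(Y_it))])
        loss_d = F.binary_cross_entropy_with_logits(logits, labels)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # 2) Train W to fool the discriminator (flip the English labels).
        loss_w = F.binary_cross_entropy_with_logits(
            disc(X_en @ W.T).squeeze(1), torch.zeros(len(X_en)))
        opt_w.zero_grad(); loss_w.backward(); opt_w.step()
        with torch.no_grad():                  # keep W approximately orthogonal
            beta = 0.015
            W.copy_((1 + beta) * W - beta * W @ W.T @ W)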
So that was word-to-word unsupervised translation. How do we do full sentence-to-sentence translation? We're going to use a standard sort of seq2seq model, without even an attention mechanism. There's one change to the standard seq2seq model going on here, which is that we're going to use the same encoder and decoder regardless of the input and output languages. You can see in this example that we could give the encoder an English sentence or a French sentence; since it has cross-lingual embeddings, it has vector representations for English words and French words, which means it can handle either kind of input. For the decoder, we need to give it some information about which language it's supposed to generate in, French or English. The way that's done is by feeding in a special token, here "Fr" in brackets to represent French, that tells the model to generate French. In this figure it's only French, but you could imagine also feeding this model "En" in brackets, which would tell it to generate English. One thing you can see is that you could use this sort of model to go from English to French, but you could also use it as an autoencoder: at the bottom, it takes a French sentence as input and just generates French as output, which here means reproducing the original input sequence. So it's just a small change to standard seq2seq. Here's how we're going to train the seq2seq model. There are two training objectives, and I'll explain why they're present in this model in just a few slides; for now, let's just say what they are. The first is called a denoising autoencoder. What we train our model to do in this case is take a sentence, here an English sentence but it could also be a French sentence, scramble up the words a little bit, and then ask the model to de-noise that sentence, which in other words means regenerating what the sentence actually was before it was scrambled. One idea of why this is a useful training objective is that, since we have an encoder-decoder without attention, the encoder converts the entirety of the source sentence into a single vector; what the autoencoder objective does is ensure that that vector contains enough information about the sentence that we're able to recover what the original sentence was from the vector produced by the encoder. So that was objective one. Training objective two is where we actually try to do translation, but, as before, using the back-translation idea. Remember, we only have unlabeled sentences; we don't have any human-provided translations. But what we can still do is, given, say, a French sentence, translate it to English using our model in its current state, and then ask the model to translate that English back into French. What you can imagine is that in this setting, the input sequence is going to be somewhat messed up, because it's the output of our imperfect machine learning model.
So here the input sequence is just "I am student," where a word has been dropped, but we're going to train the model, even with this kind of bad input, to reproduce the original French sentence from our corpus of monolingual French text. Let me pause here, actually, and ask for questions. Sure. [A student asks:] Is the reason you have this orthogonality constraint for the word embeddings to avoid overfitting? Have you tried taking it off and seeing what happens? Yeah, that's a good question; this is going back to the earlier word-to-word translation: why constrain that W matrix to be orthogonal? Essentially, that's right, it's to avoid overfitting, and in particular it encodes the assumption that our embedding spaces are so similar that there's actually just a rotation that distinguishes our word vectors in English from our word vectors in Italian. I think there have been results that don't include the orthogonality constraint, and I believe leaving it out slightly hurts performance. Okay, so, continuing with unsupervised machine translation: I gave a training method but didn't quite explain why it would work, so here is some more intuition for this idea. Remember, we're going to initialize our machine translation model with these cross-lingual embeddings, which means the English and French words look close to identical, and we're also using a shared encoder. That means, if you think about it, at the top we have just an autoencoding objective, and we can certainly believe our model can learn that; it's a pretty simple task. Now imagine we give our model a French sentence as input instead. Since the embeddings look pretty similar and the encoder is the same, it's pretty likely that the model's representation of this French sentence will actually be very similar to its representation of the English sentence, so when that representation is passed into the decoder, we can hope we'll get the same output as before. So as a starting point, we can hope our model already has some translation capability. Another way of thinking about this is that what we really want our model to do is encode a sentence such that the representation is a sort of universal interlingua, a universal representation of the sentence that isn't specific to the language. Here's a picture trying to get at this: in our autoencoding example and our back-translation example, the target sequence is the same. What that essentially means is that the vectors for the English sentence and the French sentence are being trained to be the same, because if they were different, our decoder would generate different outputs on these two examples. So this is just another intuition: what our model is trying to learn here is a way of encoding the information of a sentence in a vector, but in a way that is language-agnostic. Any more questions about unsupervised machine translation? Okay.
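Here's a minimal sketch of the two training objectives, assuming a shared seq2seq model with hypothetical train_pair and translate methods and a language token, plus a simple word-dropping-and-shuffling noise function; all the names are my own, and the real systems add more noise types and tricks.

    import random

    def add_noise(words, drop_prob=0.1, k=3):
        # Denoising objective: drop some words, and shuffle each remaining
        # word by at most roughly k positions.
        kept = [w for w in words if random.random() > drop_prob] or words
        order = sorted(range(len(kept)), key=lambda i: i + random.uniform(0, k))
        return [kept[i] for i in order]

    def training_step(model, fr_sentence, en_sentence):
        # Objective 1: denoising autoencoding in each language.
        model.train_pair(src=add_noise(fr_sentence), tgt=fr_sentence, lang="<Fr>")
        model.train_pair(src=add_noise(en_sentence), tgt=en_sentence, lang="<En>")
        # Objective 2: on-the-fly back-translation; noisy machine-made input,
        # but a clean monolingual sentence as the target.
        en_guess = model.translate(fr_sentence, lang="<En>")
        model.train_pair(src=en_guess, tgt=fr_sentence, lang="<Fr>")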
Going on to the results of this approach: here, the horizontal lines are the results of an unsupervised machine translation model, and the rising curves are a supervised machine translation model as we give it more and more data. Unsurprisingly, given a large amount of supervised data, the supervised machine translation models work much better than the unsupervised one, but the unsupervised model still does quite well. If you look at around 10,000 to 100,000 training examples, it actually does as well as or better than supervised translation, and I think that's a really promising result, because if you think of low-resource settings where there aren't many labeled examples, it suddenly becomes really nice that you can perform this well without even needing a training set. Another fun thing you can do with an unsupervised machine translation model is attribute transfer. You can take collections of text split by any attribute you want; for example, you could go on Twitter and look at hashtags to decide which tweets are annoyed and which tweets are relaxed, treat those two corpora as though they were two different languages, and train an unsupervised machine translation model to convert from one to the other. And you can see from these examples that the model actually does a pretty good job of changing the sentence minimally, preserving a lot of the sentence's original semantics, while changing the target attribute. I also want to throw a little cold water on this idea. I do think it's really exciting, almost mind-blowing, that you can do this translation without labeled data; it's really hard to imagine someone giving you a bunch of books in Italian and saying "learn Italian," without teaching you specifically how to do the translation. But even though these methods show promise, they have mostly shown promise on languages that are quite closely related. Those previous results were all some combination of English to French or English to German, and those languages are quite similar. If you look at a different language pair, say English to Turkish, where the linguistics of the two languages are quite different, these methods do still work to some extent, getting around five BLEU points, but they don't work nearly as well as in the other settings; there's still a huge gap to purely supervised learning. So we're probably not quite at the stage where an alien could come down and we'd say, no problem, let's use our unsupervised machine translation system, but I still think that's pretty exciting progress. Yeah, question? [A student asks:] So what you're saying is that the genealogy of a language might make it superimpose worse, right? My original thought was that if you took, for example, Latin, which doesn't have a word for the modern concept of a car, it would do more poorly. But basically, what I'm asking is: do you think English maps better to Latin because they're related, and worse to Turkish, or is it the other way around?
I would expect English to map quite a lot better to Latin. I think part of the issue here is that the difficulty in translation is not really at the word level. It certainly is an issue that words exist in one language that don't exist in another, but I think the more substantial differences between languages are at the level of syntax, or semantics, how ideas are expressed. So I would expect Latin to have relatively similar syntax to English compared to, say, Turkish, and I imagine that is probably the bigger obstacle for unsupervised machine translation models. I'm going to go really quickly through this last recent research paper, which basically takes BERT, which you've learned about, correct? Yes. Okay. And makes it cross-lingual. Here's what regular BERT is: we have a sequence of sentences in English, we mask out some of the words, and we ask BERT, our transformer model, to essentially fill in the blanks and predict the words that were dropped out. What has actually already been done by Google is training a multilingual BERT: they essentially concatenated a whole bunch of corpora in different languages and trained one model with this masked LM objective on all of that text at once, and that's a publicly released model. The new extension to this, recently proposed by Facebook, is to combine this masked LM training objective with translation. What they do is sometimes give the model a sequence in English and a sequence in French, drop out some of the words, and, just as before, ask the model to fill them in. The motivation is that this much more directly encourages the model to understand the relation between the two languages, because if you're trying to fill in an English word that's been dropped, the best way to do it, if you have a translation, is to look at the French side and find that word, hoping that one hasn't been dropped as well; then you can much more easily fill in the blank. This actually leads to very substantial improvements in unsupervised machine translation. Just as BERT is used for other tasks in NLP, they take this cross-lingual BERT, use it as the initialization for an unsupervised machine translation system, and get really large gains, on the order of 10 BLEU points, such that the gap between unsupervised machine translation and the current supervised state of the art is much smaller. This is a pretty recent idea, but I think it also shows promise in really improving the quality of translation through unlabeled data, although I guess in this case, with the translation objective, they are using labeled translation data as well. Any questions about this? Okay. So that is all I'm going to say about using unlabeled data for translation. The next part of this talk is about what happens if we really scale up these unsupervised language models. In particular, I'm going to talk about GPT-2, a new model by OpenAI that's essentially a really giant language model, and I think it has some interesting implications.
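Before moving on, here is a minimal sketch of the masked-LM masking step just described; this is my own simplification (BERT's actual recipe also sometimes substitutes random words or leaves the token unchanged), and the cross-lingual variant simply masks a concatenated translation pair.

    import random

    MASK = "[MASK]"

    def mask_tokens(tokens, mask_prob=0.15):
        # Replace a random ~15% of tokens with [MASK]; the model is trained
        # to predict the original token at exactly those positions.
        inputs, targets = [], []
        for tok in tokens:
            if random.random() < mask_prob:
                inputs.append(MASK)
                targets.append(tok)    # loss is computed here
            else:
                inputs.append(tok)
                targets.append(None)   # no loss at this position
        return inputs, targets

    # Cross-lingual (translation LM) variant: mask a concatenated pair, so the
    # model can look across languages to fill in a blank.
    en = "the cat sat on the mat".split()
    fr = "le chat s'est assis sur le tapis".split()
    inputs, targets = mask_tokens(en + ["[SEP]"] + fr)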
First of all, here are the sizes of a bunch of different NLP models. Maybe a couple of years ago, a standard medium-size LSTM model was on the order of 10 million parameters, where a parameter is just a single weight in the neural net. ELMo and the original OpenAI GPT, the paper before GPT-2, were about 10 times bigger than that, and GPT-2 is about another order of magnitude bigger. One interesting comparison point is that GPT-2, at 1.5 billion parameters, actually has more parameters than a honeybee brain has synapses. That sounds kind of impressive: honeybees are not the smartest of animals, but they can still fly around and find nectar. Of course, this isn't really an apples-to-apples comparison; a synapse and a weight in a neural net are really quite different. But I think it's one interesting milestone in terms of model size that has been surpassed. One thing to point out here is that this increasing scale of deep learning is a general trend across all of machine learning, beyond NLP. This plot shows time on the x-axis, and the y-axis, log-scaled, is the amount of petaFLOPs used to train each model. What this means is that the trend, at least currently, is exponential growth in how much compute power we're throwing at our machine learning models. It's unclear whether exponential growth will continue, but certainly there's rapid growth in the size of our models, and it's leading to some really amazing results. Here are results not from language but from vision: this is a generative adversarial network trained on a lot of data at really large scale, a big model somewhere between the sizes of ELMo and BERT. The photos here are actually productions of the model; they aren't real photos, they're things the model has hallucinated out of thin air, and at least to me they look essentially photorealistic. There's also a website that's fun to look at if you're interested, thispersondoesnotexist.com: if you go there, you'll see a very convincing photo of a person, but it's not a real photo; it's again a hallucinated image produced by a GAN. We're also seeing really huge models used for image recognition. This is recent work by Google, where they trained an ImageNet model with half a billion parameters, so bigger than BERT but not as big as GPT-2. The plot here shows the log-scaled number of parameters on the x-axis and ImageNet accuracy on the y-axis, and unsurprisingly, bigger models perform better. There actually seems to be a pretty consistent trend here: accuracy increases with the log of the model size. I want to go into a little more detail on how it's possible to scale up and train models to such a large extent. One answer is just better hardware; in particular, a growing number of companies are developing hardware specifically for deep learning. These chips are even more constrained than a GPU in the kinds of operations they can do, but they do those operations even faster. Google's Tensor Processing Units are one example.
There are actually a bunch of other companies working on this idea. The other way to scale up models is by taking advantage of parallelism, and there are two kinds of parallelism I want to talk about very briefly. One is data parallelism. In this case, each of your GPUs, say, has a copy of the model, and you essentially split the mini-batch you're training on across these copies. If you have, say, 16 GPUs and each of them sees a batch of 32 examples, you can run backprop on all 16 GPUs, aggregate the gradients, and end up with an effective batch size of 512. This lets you train models much faster; there's a sketch of this below. The other kind of parallelism, which is growing in importance, is model parallelism. Eventually models get so big that they can't even fit on a single GPU with a batch size of one. In that case, you actually need to split the model itself across multiple compute units, and that's what's done for models around the size of GPT-2. There are new frameworks, such as Mesh-TensorFlow, that are basically designed to make this sort of model parallelism easier. Okay, so on to GPT-2. I know you already saw this a little in the contextualized embeddings lecture, but I'm going to go into some more depth here. Essentially, it's a really large transformer language model. There's nothing really novel here in terms of new training algorithms or the loss function or anything like that; the thing that makes it different from prior work is that it's just really, really big, and it's trained on a correspondingly huge amount of text: 40 gigabytes, which is roughly 10 times larger than what previous language models were trained on. When you have a dataset of that size, the only way to get that much text is essentially to go to the web, so one thing OpenAI put quite a bit of effort into when developing this model was ensuring that the text was pretty high quality. They did that in an interesting way: they looked at Reddit, a website where people can vote on links, and said that if a link has a lot of votes, it's probably a decent link, with reasonable text for a model to learn from. Okay, so if we have a super huge language model like GPT-2, the question is what you can actually do with it. Obviously, if you have a language model, you can do language modeling with it. One interesting finding is that you can run this language model on existing benchmarks for language modeling, and it gets state-of-the-art perplexity on these benchmarks even though it never sees their training data. Normally, if you want to evaluate your language model on, say, the Penn Treebank, you first train on the Penn Treebank and then evaluate on a held-out set. In this case, GPT-2, just by virtue of having seen so much text and being such a large model, outperforms all the prior work on a bunch of different language modeling benchmarks without seeing that data. But there are a bunch of other interesting experiments OpenAI ran with this language model, and these were based on zero-shot learning.
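Picking up the data-parallelism point from above for a moment, here's a rough, hand-rolled sketch of the idea; the loss method is hypothetical, and in practice you'd use torch.nn.parallel.DistributedDataParallel rather than synchronizing gradients by hand.

    import torch

    def data_parallel_step(replicas, optimizer, batch, n_gpus=16):
        # Each replica computes gradients on its shard of the batch,
        # e.g. 512 examples split into 16 shards of 32.
        shards = batch.chunk(n_gpus)
        for model, shard in zip(replicas, shards):
            loss = model.loss(shard)   # hypothetical per-model loss method
            loss.backward()            # gradients accumulate per replica
        # Average the corresponding gradients across replicas, so every copy
        # takes the same update, as if it had seen the full batch.
        for params in zip(*(m.parameters() for m in replicas)):
            grad = torch.stack([p.grad for p in params]).mean(0)
            for p in params:
                p.grad = grad.clone()
        optimizer.step()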
So zero-shot learning just means trying to do a task without ever training on it. The way you can do this with a language model is by designing a prompt that you feed into the language model and then having it generate from there, hoping it generates something relevant to the task you're trying to solve. For example, for reading comprehension, you can take the context paragraph, concatenate the question to it, and then add a colon, which is a way of telling the model it should be producing an answer to the question; then you just have it generate text, and perhaps it generates something that actually answers the question and pays attention to the context. Similarly, for summarization, you can give it the article, then "TL;DR", and perhaps the model will produce a summary. You can even do translation, where you give the model a list of known English-to-French translations, priming it to tell it that it should be doing translation, and then give it the source sequence followed by an equals sign and have it run; perhaps it will generate the sequence in the target language. A sketch of this kind of prompt construction follows below. Okay, so here's what the results look like. For all of these plots, the x-axis is log-scaled model size and the y-axis is accuracy, and the dotted lines correspond to existing systems on these tasks. For most of these tasks, GPT-2 is quite a bit below existing systems, but there's of course this big difference: the existing systems are trained specifically for whatever task they're evaluated on, whereas GPT-2 is only trained to do language modeling, and as it learns language modeling it picks up these other tasks. So, for example, it does English-to-French machine translation not as well as standard unsupervised machine translation, which is those dotted lines, but it still does reasonably. One interesting thing is the trend line: for almost all of these tasks, performance gets much better as the model increases in size. I think a particularly interesting one of these tasks is machine translation. The question is: how can it be doing machine translation when all we give it is a bunch of web pages, almost all in English, and yet somehow it magically picks up a little bit of machine translation? It's not a great model, but it can still do a decent job in some cases. The answer is that if you look at this giant corpus of English, occasionally within it you see examples of translations: a French idiom and its translation, or a quote from someone who's French followed by the translation in English. And, kind of amazingly, I think this big model sees enough of these examples that it actually starts to learn how to generate French, even though that wasn't really an intended part of its training. Another interesting thing to dig into is its ability to do question answering. A simple baseline for question answering gets about 1% accuracy; GPT-2 does barely better, at 4% accuracy.
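Here's a sketch of this kind of prompt construction, assuming a hypothetical generate(prompt) function wrapping the language model; the exact prompt formats used in the GPT-2 paper differ in their details, so treat these templates as illustrative.

    def reading_comprehension_prompt(context, question):
        return f"{context}\nQ: {question}\nA:"    # model continues with an answer

    def summarization_prompt(article):
        return f"{article}\nTL;DR:"               # model continues with a summary

    def translation_prompt(examples, source):
        # Prime with known pairs, then ask for the next translation.
        primer = "\n".join(f"{en} = {fr}" for en, fr in examples)
        return f"{primer}\n{source} ="

    # answer = generate(reading_comprehension_prompt(paragraph, "Who wrote it?"))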
So this isn't super amazing, solved question answering, but it's still pretty interesting, because if you look at the answers the model is most confident about, you can see that it has learned some facts about the world: for example, it's learned that Charles Darwin wrote the Origin of Species. Normally, in the history of NLP, if you wanted to get world knowledge into an NLP system, you'd need something like a big database of facts. Even though this is still very early-stage, and there's a huge gap between 4% accuracy and the 70% or so that state-of-the-art open-domain question answering systems achieve, the model can still pick up some world knowledge just by reading a lot of text, without that knowledge being explicitly put into it. Any questions on GPT-2 so far? Okay. One question that's interesting to think about is: what happens if our models get even bigger? Here I've done the very scientific thing of drawing some lines in PowerPoint and seeing where they meet, and you can see that if the trend holds, at about 1 trillion parameters we reach human-level reading comprehension performance. If that were true, it would be really astonishing, and I actually do expect that a 1-trillion-parameter model will be attainable in, I don't know, ten years or so. But of course the trend isn't clear: if you look at summarization, for example, it seems like performance has already topped out. So I think a really interesting thing going forward, looking at the future of NLP, is how scaling will change the way NLP is approached. The other interesting thing about GPT-2 was the reaction from the media and from other researchers. The real cause of a lot of the controversy was this statement from OpenAI: they said, we're not going to release our full language model, because it's too dangerous; our language model is too good. The media really enjoyed this, saying, for instance, that machine learning is going to break the Internet. There were also some pretty interesting reactions from other researchers, including some tongue-in-cheek responses: "I trained a model on MNIST, is it too dangerous for me to release it?" and, similarly, "we've done really great work, but we can't release it, it's too dangerous, so you're just going to have to trust us on this." Looking at more reasoned debate about this issue, you still see articles arguing both sides. These are two articles from The Gradient, a machine learning newsletter, arguing precisely opposite sides of the issue: should it be released or not? So I guess I can briefly go over a few arguments for and against. There's a lot of debate about this and I don't want to go too deep into a controversial issue, but here's a long list of things people have said about it. On the "release it" side, one complaint is: is this model really that special? There's nothing new going on here.
It's just 10 times bigger than previous models. And there are also some arguments that, even if this one isn't released, in five years everybody will be able to train a model this good. And actually, if you look at image recognition, or at image and speech data, it already is possible to synthesize highly convincing fake images and fake speech. So what makes this thing different from those other systems? And speaking of other systems, Photoshop has existed for a long time, so we can already convincingly fake images; people have just learned to adjust, and learned that you shouldn't always trust what's in an image, because it may have been altered in some way. On the other hand, you could say, "Okay, Photoshop exists, but you can't scale up Photoshop and start mass-producing fake content the way you can with this sort of model," and people pointed at the danger of fake news, fake reviews, and in general just astroturfing, which basically means creating fake user content that supports a view you want other people to hold. This is actually something that's already done pretty widely by companies and governments; there's a lot of evidence for this. But for now they are of course hiring people to write all these comments on news articles, let's say, and we don't want to make their job any easier by producing a machine that could potentially do it. So I'm not really gonna take a side here; there's still a lot of debate about this. I think the main takeaway is that, as a community, people in machine learning and NLP don't really have a handle on this, right? We were sort of caught by surprise by OpenAI's decision here, and that means there really is some figuring out that needs to be done on what exactly is responsible to release publicly, what kinds of research problems we should be working on, and so on. So, yeah, any questions about this reaction or this debate in general? Okay. I think something arising from this debate is the question of whether it should really be the ML people making these sorts of decisions, or whether there's a need for more interdisciplinary science, where we bring in experts in, say, computer security, people from the social sciences, people who are experts in ethics, to look at these decisions. So GPT-2 was definitely one example where suddenly it seems like our NLP technology has a lot of pitfalls, where it could be used in a malicious way or could cause damage. And I think this trend is only going to increase. If you look at the areas of NLP that people are working on, increasingly people are working on really high-stakes applications of NLP, and those often have really big ramifications, especially if you think from the angle of bias and fairness. So let's go over a couple of examples of this. Some areas where this is happening: people are looking at NLP for judicial decisions, for example, should this person get bail or not? For hiring decisions: you look at someone's resume, you run NLP on it, and then you make a decision automatically, should we throw out this resume or not, do some sort of screening. Grading tests: if you take the GRE, your test will be graded by a machine.
A person will also look at it, but nevertheless, that's a sometimes very impactful part of your life, when it's a test that affects your acceptance into a school, let's say. So I think there are some good sides of using machine learning in these kinds of contexts. One is that we can pretty quickly evaluate a machine learning system and find out whether it has some kind of bias, just by running it on a bunch of data and seeing what it does. And also, perhaps even more importantly, we can fix this kind of problem if it arises. It's probably easier to fix a machine learning system that screens resumes than it is to fix having, you know, 5,000 executives that are slightly sexist or something, right? So in this way there is a sort of positive angle on using machine learning in these high-stakes decisions. On the other hand, it's pretty well known, and I know you had a lecture on bias and fairness, that machine learning often reflects bias in a dataset; it can even amplify bias in the dataset. And there's concern about a kind of feedback loop, where a biased algorithm actually leads to the creation of more biased data, in which case these problems will only compound and get worse. And for all of the high-impact decisions I listed on that slide, there are examples where things have gone awry. Amazon had an AI working as a recruiting tool, and it turned out to be sexist. There have been some early pilots of using AI in the justice system, and those also have had, in some cases, really bad results. If you look at automatic essay grading, it's not really a great NLP system either. Here's an example excerpt of an essay that an automatic grading system used for the GRE gives a very high score, but really it's just a salad of big fancy words, and that's enough to convince the model that this is a great essay. The last area I wanna talk about where you can see there are really some risks and some pitfalls in using NLP technology is chatbots. I think chatbots do have a side where they can be very beneficial. Woebot is one example: a company with a chatbot you can talk to if you're not feeling too great, and it'll try to cheer you up. So that could be a really nice piece of technology that helps people. But on the other hand, there are some big risks. One example is that Microsoft Research had a chatbot trained on tweets, and it quickly started saying racist things and had to be pulled. So I think all of this highlights that, as NLP is becoming more effective, people are seeing opportunities to use it in increasingly high-stakes decisions, and although there's some appeal to that, there's also a lot of risk. Any more questions on this sort of social impact of NLP? Okay. The last part of this lecture is looking more at future research, and in particular, I think a lot of the current research trends are kind of reactions to BERT. So the question is: what did BERT solve, and what do we work on next? So here are results on the GLUE benchmark.
That is a compendium of natural language understanding tasks, and you get an average score across those tasks. The first two models here are just supervised machine learning systems trained directly on the tasks: a Bag-of-Vectors baseline, and then, using our fancy neural net architecture of BiLSTM + Attention instead, we gain about five points. But the gains from BERT really dwarf that difference: BERT improves results by about 17 points, and we end up actually quite close to human performance on these tasks. So one implication of this that people are wondering about is: is this kind of the death of architecture engineering? I'm sure all of you who have worked on the default final project have seen a whole bunch of fancy pictures showing different architectures for solving SQuAD. There are a lot of papers; they all propose some kind of attention mechanism or something like that. And with BERT, you sort of don't need to do any of that, right? You just train a transformer, give it enough data, and actually you're doing great on SQuAD. So maybe these architectural enhancements are not necessarily the key thing that'll drive progress in improving results on these tasks. If you look at this from the perspective of a researcher, a researcher might say, "Okay, I can spend six months designing a fancy new architecture for SQuAD, and if I do a good job maybe I'll improve results by 1 F1 point." But in the case of BERT, increasing the size of the model by about 3x, which is the difference between the base-size model and the large model, improved results by 5 F1 points. So it does seem to suggest we need to re-prioritize which avenues of research we pursue, because this architecture engineering isn't providing gains for its time investment the way leveraging unlabeled data is. And now, if you look at the SQuAD leaderboard, I think at least the top 20 entrants are all BERT plus something. One other issue I think BERT has raised is that we need harder tasks, right? BERT has almost solved SQuAD, if you define that as getting close to human performance. So there's been a growth in new datasets that are more challenging, and there are a couple of ways in which they can be more challenging. One is doing reading comprehension on longer documents, or doing it across more than one document. Another is coming up with harder questions that require multi-hop reasoning; that essentially means you have to string together multiple supporting facts from different places to produce the correct answer. And another area is situating question answering within a dialogue. There's also been a kind of small detail in the construction of reading comprehension datasets that has actually really affected the difficulty of the task, and that is: when you create these datasets, can the person who writes questions about a passage see that passage or not? Of course, it's much easier to come up with a question when you can see the passage, and if you come up with a question without seeing the passage, you may not even have an answerable question.
But the problem with looking at the passage is that, first of all, it's not realistic: if I'm asking a question, I'm not usually going to have the paragraph that answers it sitting in front of me. On top of that, it really encourages easy questions. If you're a Mechanical Turker, and you're paid to write as many questions as possible, and you see an article that says, I don't know, Abraham Lincoln was the 16th president of the United States, what are you gonna write as your question? You're gonna write, "Who was the 16th president of the United States?" You're not gonna write something more interesting that's harder to answer. So this is one way in which crowdsourced datasets have changed: people are now making sure questions are independent of the contexts. So I'm gonna briefly go over a couple of new datasets in this line. One is called QuAC, which stands for Question Answering in Context. In this dataset there's a teacher and a student: the teacher sees a Wikipedia article, the student wants to learn about that article, and the goal is to train a machine learning model that acts as the teacher. You can imagine that maybe in the future this sort of technology would be useful for education, for adding some automation. One thing that makes this task difficult is that questions depend on the entire history of the conversation. For example, if you look on the left here at the example dialogue, the third question is "Was he the star?" Clearly you can't answer that question unless you look back earlier in the dialogue and realize that the subject of this conversation is Daffy Duck. And because this dataset is more challenging, you can see there's a much bigger gap to human performance: if you train BERT with some extensions, the results are still about 15 F1 points worse than human performance. Here's one other dataset, called HotPotQA. It's designed instead for multi-hop reasoning. Essentially, in order to answer a question, you have to look at multiple documents, look at different facts from those documents, and perform some inference to get the correct answer. So I think this is a much harder task, and again there's a much bigger gap to human performance. Any questions on new datasets, harder tasks for NLP? Okay. I'm gonna kind of rapid-fire through a couple more areas in the last minutes of this talk. So, multitask learning I think is really growing in importance. Of course, you've had a whole lecture on this, so I'm not gonna spend too much time on it. But maybe one point of interest is that if you look at performance on this GLUE benchmark, the benchmark for natural language understanding, all of the top results that are now actually surpassing BERT in performance take BERT and train it in a multi-task way. I think another interesting motivation for multi-task learning is that if you are training BERT, you have a really, really large model, and one way to make more efficient use of that model is training it to do many things at once.
Another area that's definitely important, and I think will be important going into the future, is dealing with low-resource settings. And here I'm using a really broad definition of resources. That could mean compute power: BERT is great, but it also takes huge amounts of compute to run, so it's not realistic to say, if you're building an app for a mobile device, that you could run a model the size of BERT. And as I went into earlier in this talk, low-resource languages are an area that I think is pretty under-represented in NLP research right now, because most datasets are in English; but there's a really large number of people who, in order to benefit from NLP technology, will need technologies that work well in a lot of different languages, especially those without much training data. And speaking of low amounts of training data, I think this in general is an interesting area of research within machine learning, and people are working a lot on it as well. A term often used is few-shot learning, and that essentially means being able to train a machine learning model that only sees, let's say, five or ten examples. One motivation there is that a clear distinction between how our existing machine learning systems learn and how humans learn is that humans can generalize very quickly from five or so examples, whereas if you're training a neural net, you normally need thousands of examples, or perhaps even tens of thousands or hundreds of thousands, to get something that works. So I also see this being a pretty important area in the future. The last area where I want to go into a little more depth is interpreting and understanding models. There are really two aspects of this. One is: if I have a machine learning model and it makes a prediction, I would like to know why it made that prediction, to get some rationale, some explanation. That would be especially important in an area like health care. If you're a doctor making a decision, it's probably not good enough for your machine learning model to say, "Patient has disease X." You really want it to say, "Patient has disease X for these reasons," because then you as a doctor can double-check and try to validate the machine's thinking, I guess, in coming up with that diagnosis. The other side of interpreting and understanding models is more of a scientific question: we know things like BERT work really well, and we want to know why they work well. What aspects of language do they model? What things don't they model? And that might lead to ideas for improving those models. So here are a couple of slides on the main approach for answering these sorts of scientific questions about what a machine learning model learns. You have a model, let's say it's BERT; it takes as input a sequence of words and produces as output a sequence of vectors, and we want to ask: does it know, for example, the part of speech of words? Do its vector representations capture something about syntax?
And a simple way of asking this question is to train another classifier on top of BERT that's trained to do, let's say, part-of-speech tagging, but we only backprop into that diagnostic classifier itself. In other words, we're treating the output of BERT, that sequence of vectors, as a fixed input, and we're probing those vectors to see whether they contain information about part of speech that this second diagnostic classifier on top can decode to get the correct labels. (There's a small sketch of this setup below.) There are quite a few concerns here. One concern is that if you make your diagnostic classifier too complicated, it can just solve the task all on its own and basically ignore whatever representations were produced by BERT. So the kind of standard thing right now is to use a single softmax layer on top of BERT to make these decisions. And there's been a whole bunch of tasks proposed for evaluating essentially the linguistic knowledge of these models. You could do part-of-speech tagging, you could do more semantic tasks like relation extraction, or something like coreference. This is a pretty active area of work. Here's just one plot showing some of the results of this approach. What we're doing here is adding diagnostic classifiers to different layers of BERT, and seeing which layers of BERT are more useful for particular tasks. And something kind of interesting comes out of this, which is that the different layers of BERT seem to correspond fairly well with notions of different layers of linguistics. So dependency parsing, which is a syntactic task, considered a sort of medium-level task in understanding a sentence: the middle layers of BERT, layers 6 through 8 or so, are the ones best at dependency parsing. And if you have a very semantic task like sentiment analysis, where you're trying to learn some kind of semantic property of the whole sentence, then the very last layers of BERT are the ones that seem to encode the most information about that phenomenon.
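Here's a minimal sketch of that probing setup: BERT is frozen and a single linear+softmax layer is trained on top of its output vectors to predict part-of-speech tags. It's written against today's Hugging Face transformers API (an assumption of convenience), and NUM_POS_TAGS is a hypothetical tagset size:

    # Diagnostic ("probing") classifier: a frozen encoder plus one linear layer.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from transformers import BertModel

    bert = BertModel.from_pretrained("bert-base-uncased")
    for p in bert.parameters():
        p.requires_grad = False  # probe only; never backprop into BERT

    NUM_POS_TAGS = 45            # hypothetical: e.g. a Penn-Treebank-style tagset
    probe = nn.Linear(bert.config.hidden_size, NUM_POS_TAGS)
    optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

    def probe_step(input_ids, pos_tags):
        """input_ids: (batch, seq) token ids; pos_tags: (batch, seq) tag ids."""
        with torch.no_grad():                          # BERT is a fixed encoder
            hidden = bert(input_ids).last_hidden_state  # (batch, seq, dim)
        logits = probe(hidden)                          # (batch, seq, num_tags)
        loss = F.cross_entropy(logits.view(-1, NUM_POS_TAGS), pos_tags.view(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

High probe accuracy suggests the information is linearly decodable from the representations; to probe a particular layer rather than the last one, you would request all hidden states (output_hidden_states=True) and pick the layer of interest.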
Okay. So that's almost it for the talk. I just have one slide here on NLP not in the academic research context, which I've already been talking a lot about, but NLP in industry, and there's really rapid progress there. I wanted to point you to two areas where I think there's an especially large interest in using NLP technology. One is dialogue, for things like chatbots. There's the Alexa Prize, where they're actually investing a lot of money in having groups figure out how to improve chitchat dialogue. There's also, I think, a lot of potential for customer service: improving automated systems that'll book you a flight, or help you cancel a subscription, or anything like that. And similarly, there's a lot of potential in health care. One part is understanding the records of someone who is sick, to help with diagnoses. I think another, equally important area is actually parsing biomedical papers. The number of biomedical papers being written is really insane; it's way larger than the number of computer science papers being written. Often if you're a doctor, or a researcher in medicine, you might want to look up something very specific. You might want to know what the effect of this particular drug is on this particular gene, or on a cell with this particular gene. There's no good way right now of searching through hundreds of thousands of papers to find out whether someone has done this experiment and has results for this particular combination of things. So automated reading of all this biomedical literature could have a lot of value. Okay, to conclude: there's been rapid progress in NLP in the last five years due to deep learning. In the last year, we've seen another really dramatic increase in the capability of our systems thanks to using unlabeled data; that's methods like BERT. And the other thing that's important to think about is that NLP systems are starting to be at a place where they can have big social impact, and that makes issues like bias and security very important. Thank you, and good luck finishing all your projects. [APPLAUSE]
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_18_Constituency_Parsing_TreeRNNs.txt
Okay. Hi everyone. Let's get started. So for today's lecture, what we're gonna do is look at the topic of Tree Recursive Neural Networks. This is actually a topic which I feel especially fond of and attached to, because when we started doing deep learning for NLP here at Stanford in 2010, really for the period from 2010 to 2015, the dominant set of ideas we were working on was this topic of how you could build recursive tree structure into neural networks. So in a way, it's kind of funny that I'm only getting to it now. There are sort of reasons for that, but I think there are a bunch of interesting ideas here which relate closely to linguistic structure, and so it's good stuff to have seen. But in practice, these ideas have proven kind of hard to scale, and not necessarily to work better in practice than the kinds of things we've spent more time on, meaning things like LSTMs and transformers. And so that's kind of why we've shunted them towards the end of the curriculum. But I want to say something about the motivations and the ways you can build tree structures into neural networks, and look at some of the possibilities we explored. Another fact about this class is that this is actually the last class I'm going to give. There are two more classes next week; don't forget about next week's CS224N classes. On Tuesday we've got the final invited speaker, who's a great speaker and has tons of interesting stuff to say about fairness and ethics in NLP and AI. And then for the final lecture, another of my PhD students is gonna give that, and talk about what's been happening in deep learning in 2018 and '19, some of the recent developments in NLP and deep learning. So I'll say my farewells at the end of this one. Hopefully everyone has submitted their milestone for the final project; if you haven't, you should really get your milestone in. You know, it's inevitable that somewhere around here there start to be problems: people have the situation that nothing works, and everything is too slow, and you panic. And this happens. I wish you luck, of course. What can you do about it? It can be really hard, when you have things that don't work, to work out why they don't work and how to fix them. I think often the best thing to do is really to go back to something simple that you can get working and to work forward from there again. It also really helps to have really small datasets. I really recommend the strategy of having a 10-item or 20-item dataset and checking that your model works perfectly, over-trains to 100 percent accuracy, on that kind of dataset; it saves you huge amounts of time, and it's after you've gotten something simple working on a small amount of data that it's the right time to expand forward again. You should definitely always make sure that you can completely overfit on your training dataset. That's not quite a proof, but it's at least a first good requirement for your model being implemented properly. (A minimal version of that sanity check is sketched below.) You know, part of the trick of being a successful deep learning researcher is actually managing to get things done and not wasting a ton of time.
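Concretely, that overfitting sanity check can be this small. The model and data here are stand-ins for your own project's; the point is only the shape of the check:

    # Overfitting sanity check: a model that can't memorize ~20 examples
    # probably has a bug somewhere in its implementation or training loop.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-ins for your real model and data: 20 examples, 5 classes.
    tiny_inputs = torch.randn(20, 100)
    tiny_labels = torch.randint(0, 5, (20,))
    model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 5))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(500):
        loss = F.cross_entropy(model(tiny_inputs), tiny_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    acc = (model(tiny_inputs).argmax(dim=-1) == tiny_labels).float().mean()
    print(f"train accuracy on the tiny set: {acc:.0%}")  # should be ~100%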
And it definitely always helps to be plotting your training and dev errors as you go along, so that you can tell whether things are working, or whether things aren't working and you should abandon and start again with a new experiment; things like that save you hours and get you more done. And then once things are working, there's a whole bunch of things to make them work better. There's regularization with L2 and dropout, there's time to do hyperparameter search, and often doing these things can make quite a lot of difference to your final results, so it's good to have time for them. But clearly, you want to get things working first before you go on to that. And I really encourage people to still stop by office hours if you've got any problems, and we'll try our best to help out, within the limitations of what we can do when being hit cold with problems. Okay. So I wanted to make some general remarks about language and theories of language that, in context, motivate these tree recursive networks. This is an art installation at Carnegie Mellon University, and as an NLP person I really love this art installation. We need better art installations around the Stanford School of Engineering. This is the bag-of-words art installation: there's the bag with a lot of words in it, and you see down here the stop words, "the" and "us", that have fallen out of the bag and are represented on the ground as the stop words. Beautiful artwork, right? So, one of the interesting things that has been found about NLP models of language, and I think this is even more true in the deep learning world than it was previously, is: boy, you can do a lot with bag-of-words models. You can often get a lot of power by saying, well, let's take our neural word vectors, average them or max-pool them or something like this, and do nothing more, and that gives me a pretty good sentence or document representation that I can use in a classifier. And sometimes you can do not much more than that and get even better results. People have done things like deep averaging networks, where you take the output of a bag-of-words model, feed it through a couple more layers, and improve things. That is in complete distinction to what's been dominant in linguistics for looking at language structure, where typically the emphasis has been on identifying huge amounts of structure in linguistic utterances through very complex formalisms. This one is sort of a picture of a Chomskyan minimalism syntactic tree, and the one up at the top is a bit of a picture of head-driven phrase structure grammar, which was a theory predominantly developed at Stanford in the '90s: very complex, articulated data structures used to describe linguistics. And there's a huge gap between these two things. You might think that surely there's some good point in the middle, where we have a certain amount of structure, and that's going to help us do what we want. In particular, if we want to semantically interpret language, it seems like we don't just want word vectors; we want to have meanings of bigger phrases.
So here's "the snowboarder" leaping over a mogul, and "a person on a snowboard" jumps into the air. And what we'd like to be able to say is that "the snowboarder" means basically the same thing as "a person on the snowboard". So we want to have these chunks of language, which in linguistics are constituent phrases, say that they have a meaning, and be able to compare their meanings. Now, we've looked at at least one tool that allows us to have chunks of language, right? We looked at convolutional neural networks, where you could take three words and make a representation with the convolutional neural network. But the fundamental difference is that in human languages you have chunks with meaning that are of different sizes. We'd like to say "the snowboarder" is pretty much semantically equivalent to "a person on a snowboard", but the top one is two words long and the bottom one is five words long. And so if we're going to be able to do that, we somehow want to have these constituent chunks and be able to work with and represent them in neural networks. That's the central idea of what motivated some of the tree-structured neural networks that I'm about to show you. There's another related thing you might want to think about: "a person on a snowboard", how do human beings manage to understand what that means? And then "a person on a snowboard jumps into the air", how do people manage to understand what that means? It sort of seems like the only possible answer to this is what's normally referred to as the principle of compositionality. People know the word "person", they know the word "on", they know the word "snowboard"; therefore they can work out what "on a snowboard" means, and they can work out what "person on a snowboard" means, by knowing the meanings of components and putting them together into bigger pieces. There's a famous applied mathematician and statistician at Brown University, Stu Geman, and the way he summarized this is: either the principle of compositionality is true, or God exists. [LAUGHTER] Well, you can take that as you want, but I think what he meant was: you can make an infinite number of infinitely long sentences, and human beings understand them, so it just has to be that people know about words and ways to combine meanings to make bigger meanings, because how else could it possibly work that people can understand sentences? And so we want to be able to do that: to work out semantic compositions of smaller elements to get the meanings of bigger pieces. And this obviously isn't only a linguistic thing; compositionality appears in other places as well. If you want to understand how some piece of machinery works, what you want to know is that it has different sub-components, and if you can understand how the different sub-components work and how they're fitted together, then you might have some understanding of how the whole thing works. And compositionality seems to be at work in vision as well. So here is a scene, and again it seems like this scene has parts: there are little parts that go together. There are people that go together into a crowd of people, and there's a roof and a second floor and another bit of roof and a first floor that go together into a picture of this church.
And so this is also kind of a compositional scene in which pieces go together. So it seems like, certainly for language understanding, and really for a lot of the other things we use for intelligence, we somehow need to be able to understand bigger things from knowing about smaller parts. So, as I mentioned earlier, probably the most famous linguist is Noam Chomsky at MIT, and really, computational linguists a lot of the time haven't been that friendly to linguistics linguists, and in particular to some of Noam Chomsky's theories of language, because he's never been sympathetic to the idea of machine learning, or in general to the empiricist ability to learn from data; he's sort of always wanted to refuse that that exists. But let's nevertheless look there for a little bit of insight. This is a recent paper of Chomsky's with coauthors, in which they're trying to give a version of what is unique about human language. Essentially what they zero in on is that, looking at humans versus other fairly intelligent creatures, they suggest the defining difference of human beings is that they have this ability to model recursion. So the paper argues that the singular distinction that allowed language to develop in human beings was that we could put together smaller parts to make bigger things, in a recursive process, and that that was the defining new ability. I'm not sure whether I believe that or not [LAUGHTER]; you can decide what you think. But what I think is certainly the case, and just incontrovertible, is that the structure of human language sentences has these pieces, constituents, that form together hierarchically or recursively into bigger pieces as you go up the tree. In particular, you get this recursion where you get a little noun phrase, "meat", which then appears in a bigger noun phrase like "spaghetti with meat", and you can repeat that several times, giving you a recursive structure. And I have an example of that in blue up at the top. So "the person standing next to the man from the company that purchased the firm that you used to work at": that whole thing is a big noun phrase. But inside that there's a noun phrase, "the man from the company that purchased the firm that you used to work at", which is another big noun phrase. And inside that there are smaller noun phrases like "the company that purchased the firm you used to work at". But that still has inside it noun phrases like "the firm that you used to work at". And actually even that has inside it the smaller noun phrase which is just the word "you"; an individual pronoun is also a noun phrase. So there's this structuring of language where you get a hierarchical structure with the same kinds of things inside each other. I think that's just totally correct. The claim, then, that our language is recursive in a formal sense, it's not quite clear that that's a clear thing. And the reason is that to say something is recursive, it has to repeat out to infinity, right? So as soon as you put any bound on something, and you say, "Look, that's a noun phrase you just gave me with five levels of nesting": that's pretty implausible that someone is going to say that.
And so as soon as you want to make an argument like, okay, even if they said that, no one is going to say a noun phrase with 10 levels of nesting, and you put some hard limit on it like that, then in some sense it's not truly recursive, because it doesn't go out to infinity. But regardless of what you think about that, it doesn't negate the basic argument that you get this hierarchical structuring, with the same kinds of things, like noun phrases, sentences, verb phrases, appearing inside each other in a way that has no clear bound. To the extent that I show you a complex sentence, you can say, "I can make that an even bigger, more complex sentence," by putting it inside "you said to me that ..." and then saying my sentence. So that's the sense in which it does appear to be a recursive generative process, even though practically there are limits to how complex the sentences people say are. And that's the kind of structure that gets captured in these constituency structure trees. Earlier, when we talked about parsing and you guys did some of it, I emphasized dependency parsing. But the other kind of parsing, which is actually the kind the models I'm going to talk about today were using, is this idea of what's often called constituency parsing. Linguists often call it phrase structure grammar, and in computer science formal language theory these are context-free grammars, where we have these non-terminals like noun phrase and verb phrase, and one noun phrase is inside another noun phrase, which is inside a verb phrase, which is inside more verb phrases, heading up to the sentence. So these are our constituency grammars. And when we've occasionally mentioned the Penn Treebank, this was kind of an original Penn Treebank tree, which is basically a phrase structure grammar like this, with various extra annotations put on the nodes. Okay, so to capture some of these properties, it seems like we'd like to have a neural model that can make use of some of this same kind of tree structure. What we'd like to do for working out semantic similarity of constituents is to not only have a word vector space, like we started off with right at the beginning of the quarter, but to be able to take bigger constituents, like the noun phrases "the country of my birth" and "the place where I was born", and also give them a meaning. So it seems like what we'd like is a method of computing the meaning of any phrase in a compositional manner, such that the end result is that these phrases can also be stuck inside our vector space models. We're still going to stick with our vector space semantics of phrases, and we want to compute the meanings of phrases. And so then the question is, how could we go about doing that? Well, answer number one is that we're gonna use the principle of compositionality, since we're sure it's right. And what the principle of compositionality essentially says is that if you want to work out the meaning of a sentence, or really of any phrase, any constituent, you're going to build it by knowing the meanings of its words and then having rules that combine those meanings. So starting off with "the country of my birth", I should be able to calculate a meaning of "my birth", a meaning of "the country", a meaning of "of my birth", and then a meaning of "the country of my birth".
So we'd have meaning composition rules which let us calculate meanings upwards for larger constituents or sentences. That seems kind of the right thing to do. And so then the question is: can we build a model that does that? Well, here's a straightforward way of going about it. We have word vectors for the words, which we've calculated, and what we'd like to do is work out a meaning representation of the whole sentence. And at this point we have two things to do. We have parsing to do, working out the right structure of the sentence, and then we have meaning computation to do, working out the meaning representation of this sentence. So for parsing, we'd be building noun phrase, prepositional phrase, verb phrase, and sentence kinds of units to get "the cat sat on the mat", and then, if we had that, we could run some kind of meaning computation program and get a vector space meaning of these sentences. So that's what we want: to do both of those. In a little bit I'll show you an example of one way you can go about approaching that. But before I do that, let's just step back for a moment as to what's different here. Here we had our recurrent neural network, which in some sense has been our workhorse tool in this class up to now, and it gives you a representation of the meaning of "the country of my birth": you could either say the final state is the meaning of "the country of my birth", or we talked about other tricks, like doing max pooling across all of these, or you could have a separate node out here which does attention over these. So it does give you a sort of representation of the meaning of any sub-sequence of words as well. But they're sort of different, right? The tree recursive neural network at the top requires a sentence, or any kind of phrase, to have a tree structure, so we know what its component parts are; and then we're working out meaning representations for the phrase that are sensitive to what its syntactic structure is, to how the words go together to build phrases. Whereas for the recurrent neural network, we're just running a sequence model along in an oblivious way and computing things, and it doesn't in any obvious way give a meaning representation of "of my birth" or "my birth" contained inside that; we only have a meaning representation for the whole sequence. If we're doing things the tree way, we do have meaning representations for the different meaningful parts of the sentence. Does that make sense, what we're trying to do? Okay. So how could we go about doing that? Well, the idea is that if we work bottom-up, at the very bottom we have word vectors, and we want to recursively compute the meaning of bigger constituents. So if we wanted to compute the meaning of "on the mat", what we can do is say: well, we already have a meaning representation of "on" and of "the mat". So if we could feed those into a neural network, because that's our one tool, we could maybe get two things out of it. We could get out a goodness score; this is what we're going to use for parsing. We're going to ask: do you believe you can put together "on" and "the mat" to form a good constituent that's part of a parse tree?
And this score will be a big positive number if the answer is yes, and negative if it's not. And then we have a meaning composition device, which says: okay, if you put together these two things, what would be the meaning representation of the combination? So this is the first model we explored, which did all of this in a pretty simple way. Here was our meaning composition device: we concatenated the two vectors of the constituents, multiplied them by a matrix, added a bias as usual, and put it through a tanh. This work is old enough that it's from before things like ReLUs became popular, but maybe it's better to have a tanh anyway; it fits more with a recurrent neural network. So this was our meaning composition that gave the meaning of the parent. And then, off to the side, for the score of whether this was a good phrase, we took that parent vector representation and multiplied it by another vector, and that gave us a number. If you think about it a bit, you might suspect this isn't quite a perfect model of meaning composition, and later on in the class I'll talk about some more complex models we then started to explore. But this is enough to get us going, and it gave us a way of building a recursive neural network parser which both found parses and worked out a meaning representation for them. And the way we did this was in the simplest possible way, really, which was to have a greedy parser. So if we start off with "the cat sat on the mat", what we can do is say: well, maybe you should join "the" and "cat" together. Let's try that: run it through our neural network, and it'll produce a score and a meaning representation. We could try doing the same for "cat" and "sat", for "sat" and "on", for "on" and "the", and for "the" and "mat". And then at this point we'd say: okay, the best phrase we can make combining these word vectors is the one for "the cat". So let's just commit to that one; it has this semantic representation. And at this point we can essentially repeat. All the work we did over there we can just reuse, because nothing has changed, but we can also now consider joining "the cat" as a constituent with "sat", and get a score for that. And so at this point we decide, okay, "the mat" is the best constituent to build, commit to that, calculate a meaning representation for "on the mat", that looks good, commit to that, and keep on chugging up. And so we've got a mechanism for choosing a parse of a sentence in a greedy manner. (There's a small sketch of this composition function and greedy loop below.) But, you know, when we looked at dependency parsing, we were also doing that greedily, right? And coming up with a meaning representation. Okay. So that was our first model of having a tree recursive neural network and using it for parsing. There are a few more details here, some of which probably aren't super important at this point: we score a tree by summing the scores at each node, and for the optimization we were using the kind of max-margin loss that we've looked at in other places. The simplest way to do things is completely greedily: you just find the best local decision at each point, make that structure, and keep on going.
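Here's a minimal sketch of that composition-and-scoring network and the greedy loop, written in PyTorch. The dimensions and names are illustrative, not taken from the original system:

    # Recursive NN composition: parent = tanh(W [c1; c2] + b), score = u . parent.
    import torch
    import torch.nn as nn

    d = 100                            # dimensionality of word/phrase vectors
    W = nn.Linear(2 * d, d)            # shared composition weights
    u = nn.Linear(d, 1, bias=False)    # scoring vector

    def compose(c1, c2):
        parent = torch.tanh(W(torch.cat([c1, c2])))
        return parent, u(parent)       # meaning vector and goodness score

    def greedy_parse(vectors):
        """vectors: list of word vectors for the sentence, left to right."""
        total_score = 0.0
        while len(vectors) > 1:
            # Try merging every adjacent pair and keep the best-scoring merge.
            candidates = [compose(vectors[i], vectors[i + 1])
                          for i in range(len(vectors) - 1)]
            best = max(range(len(candidates)),
                       key=lambda i: candidates[i][1].item())
            parent, score = candidates[best]
            total_score = total_score + score
            vectors = vectors[:best] + [parent] + vectors[best + 2:]
        return vectors[0], total_score  # sentence vector and total tree score

    # e.g. for "the cat sat", call greedy_parse([vec_the, vec_cat, vec_sat]).

Note how the greedy loop reuses the scores of untouched pairs only conceptually here; a more careful implementation would cache them rather than recompute the whole candidate list each round.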
But if you want to do things a bit better, and we explored this, you can do beam search: explore several good ways of merging, and then decide later, higher up the tree, which was the best way to merge. We haven't talked about it in this class, but just to mention it in case people have seen it: in traditional constituency parsing, where you have symbols like NP or VP, there exist efficient dynamic programming algorithms where you can find the optimal parse of a sentence in polynomial time, in fact in cubic time. So if you have a probabilistic context-free grammar, and you want to know the best parse of the sentence according to that grammar, you can write a cubic-time dynamic programming algorithm and find it. That's good. And in the old days of CS224N, before neural networks, we used to have everyone do that; the most brain-breaking assignment of the old CS224N was writing this dynamic program to do context-free grammar parsing of a sentence. The slightly sad fact is that once you go to these kinds of neural network representations, you can't write clever dynamic programming algorithms anymore, because clever dynamic programming algorithms only work when you have symbols from a reasonably small set for your non-terminals. If that's the case, you can have collisions: you have lots of ways of parsing stuff lower down which turn out to be different ways to make a noun phrase, or different ways to make a prepositional phrase, and therefore you can save work with dynamic programming. If you've got a model like this, since everything you build goes through layers of neural network and your meaning representation is some high-dimensional vector, things are never going to collide, and so you can never save work by doing dynamic programming. So you're either doing exponential work to explore everything, or else you're using some kind of beam to explore a bunch of likely structures. Yeah. We actually also applied this to vision at the same time. So it wasn't just a completely vague motivation that visual scenes have parts; we actually started exploring how you could take pieces of scenes and work out representations for scenes using a similar form of compositionality. In particular, there was a dataset being used for multi-class segmentation in vision, where you start off with very small patches and then want to combine them up into parts of a scene, recognizing which part of the picture was the building, the sky, the road, and various other classes. And at the time we were actually able to do this rather well using one of these tree recursive structured neural networks, better than the preceding vision work of the late 2000s had done. Okay. So how can we build neural networks that do this kind of stuff? When we started off exploring these tree-structured neural networks, we thought this was a cool original idea and that no one had worked on tree-structured neural networks successfully before. But it turned out we were wrong: a couple of Germans in the mid-1990s had actually started looking at tree-structured neural networks and had worked out the math of them.
So, corresponding to the backpropagation through time algorithm that Abby talked about when we were doing recurrent neural networks, they worked out the tree-structured case, which they called backpropagation through structure. There are several slides on this in the slide deck, but I think I'm gonna skip them; if anyone wants to look at them, they're on the web. There isn't actually anything new there. If you remember from the early lectures of this class, working out the derivatives of neural networks and how that worked with recurrent neural networks, it's sort of the same: you have this recurrent matrix at different levels of tree structure, and you're summing the derivatives from everywhere it turns up. The only difference is that because we now have tree structure, you're splitting things downwards. So in forward prop we compute things forwards, and then when we're doing backprop, we have the error signal coming from above, we combine it with the calculations at this node, and then we send it back in a tree structure, down to each of the branches underneath us. So that was our first version of things, and we got some decent results. We got the good vision results I showed you, and it seemed to do some good for language, both for parsing and, in some results I haven't actually included here, for paraphrase judgment between sentences; it modeled things fairly well. But once we started thinking about it more, it seemed like that very simple neural net function couldn't possibly compute the kinds of meanings we wanted to compute for sentences. And so we set about trying to come up with more complex meaning composition functions at the nodes that could then be used to build a better neural network. Some of the essence of that is on this slide. For the first version, we just didn't have enough complexity of neural network, frankly. When we had two constituents, we concatenated them and multiplied by a weight matrix, and that was essentially all we had. And, as I hope you've gotten more of a sense of in this class, if you just concatenate and multiply by a weight matrix, you're not actually modeling the interaction between these two vectors, right? Because you can think of this weight matrix as being divided in two, with half of it multiplying this vector and half of it multiplying that vector. So the meanings of these two things don't act on each other, and somehow you have to make your neural network more complex than that.
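You can check that decomposition numerically in a couple of lines; this is just the algebraic fact that an affine map over a concatenation splits into two independent halves:

    # Demo: W [a; b] = W_left a + W_right b, so the children only ever
    # contribute additive terms and never interact with each other.
    import numpy as np

    d = 4
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, 2 * d))
    a, b = rng.standard_normal(d), rng.standard_normal(d)

    W_left, W_right = W[:, :d], W[:, d:]     # the two halves of W
    assert np.allclose(W @ np.concatenate([a, b]),
                       W_left @ a + W_right @ b)

The tanh on top adds a squashing nonlinearity, but there's still no term where the two child vectors multiply each other, which is the kind of interaction an operator word like "very" seems to need.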
But the other way in which this seemed too simple is that in the first model we had just one weight matrix that we used for everything. And at least if you're a linguist thinking about the structure of language, you might start thinking: well, wait a minute, sometimes you're putting together a verb and an object noun phrase, "hit the ball"; sometimes you're putting together an article and a noun, "the ball"; sometimes you're doing adjectival modification, "blue ball". These things are very different in their semantics. Can it really be the case that you can just have one weight matrix that is this universal composition function for putting together the meaning of phrases? Could that possibly work? You might suspect it doesn't. And so I'm going to go on and show some different approaches. But before that, I'm going to show one more version that's related to the first one, which actually gave a pretty successful and good parser for doing context-free-style constituency parsing. This was another way of getting away from the parsing being completely greedy, which was to actually split apart the two parts of the problem: coming up with a tree structure for our sentence, and computing the meaning of the sentence. The thinking was: in terms of deciding what's a good tree structure for a sentence, that's actually something you can do pretty well with a symbolic grammar. The problem with symbolic grammars isn't that they can't put tree structures over sentences; the problems are that they can't compute meaning representations, and they're not very good at choosing between alternative tree structures. But we can divide up the two parts. So what we can do is say: let's just use a regular probabilistic context-free grammar to generate possible tree structures for sentences. We can generate a k-best list and ask, what are the 50 best context-free grammar structures for this sentence? That's something we can do very efficiently with dynamic programming algorithms. And then we can use a neural net to work out the meaning representation of the sentence. And so that led to what's called the syntactically untied recursive neural network. Essentially, each node in the sentence has a category from a symbolic context-free grammar; here they're categories A and B and C. So when we put things together, we'll be able to say, okay, we've got a rule that says X goes to B C, and that licenses this node here. So that part of the parsing is symbolic. Then we want to work out the meaning of this phrase, and, well, the second problem I talked about was that surely just having one way of doing composition is expecting a lot too much, to have, say, verb-and-object versus adjective-and-noun composed the same way. So we had this idea: since we now know the syntactic categories of the children, we maybe know that this is an adjective and this is a noun, and what we can do is have different weight matrices for composition depending on what the categories are. So rather than the one universal weight matrix that was meant to do all meaning composition, here we can say: this is the weight matrix for combining the meanings of an adjective and a noun, and it will compute the meaning of this constituent, but we'll have a different weight matrix for combining the meanings of a determiner and a noun phrase, or something like that. (There's a small sketch of this below.) Okay. As I sort of already said, we wanted to be able to do things quickly, and so our solution was to use a probabilistic context-free grammar to find likely parses, and then only work out our meanings for the ones that were quite probable.
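A minimal sketch of that syntactically untied composition looks like this; the category labels and the set of pairs are illustrative placeholders, and in a real system you'd have one matrix per grammar rule observed in the treebank:

    # Syntactically untied composition: one weight matrix per pair of
    # child categories, selected by the symbolic grammar's labels.
    import torch
    import torch.nn as nn

    d = 100
    CATEGORY_PAIRS = [("DT", "NP"), ("JJ", "NN"), ("VB", "NP")]  # etc.
    W = nn.ModuleDict({f"{left}+{right}": nn.Linear(2 * d, d)
                       for left, right in CATEGORY_PAIRS})

    def compose(left_cat, left_vec, right_cat, right_vec):
        # The PCFG supplies the child categories; they pick the matrix.
        layer = W[f"{left_cat}+{right_cat}"]
        return torch.tanh(layer(torch.cat([left_vec, right_vec])))

    # e.g. composing an adjective with a noun uses the JJ+NN matrix:
    # blue_ball = compose("JJ", blue_vec, "NN", ball_vec)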
And so we called the result a compositional vector grammar, which was a combination of a PCFG and a tree recursive neural network. And essentially, at the time, this actually gave a pretty good constituency parser. There are lots of results here. The top ones are our classic older Stanford Parser, which is a PCFG, the kind of parser people had built. This is our compositional vector grammar: at the time this was done, in 2013, it wasn't the very best parser available — there had been some better work by Eugene Charniak at Brown — but we had a pretty good parser coming out of that system. But what was perhaps a bit more interesting was that we didn't only have a parser that was meant to give the right parse trees; we were also computing meaning representations of nodes. And as a consequence of that, you could look at not only the meaning representations of nodes: you could learn about the weight matrices that these models were learning when they combined together meanings. So remember, we had these category-specific W matrices that were going together with the children to work out the meaning. These are a little bit hard to interpret, but the deal is, when we learn these matrices, we initialize them as a pair of diagonal blocks. These are rectangular matrices, twice as wide as they are tall, because there are two children: half of it multiplies the left child, and the other half multiplies the right child. And we initialize them as something like two identity matrices next to each other, which gives the default semantics of just averaging, until something different is learned in the weights.
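That initialization can be written down in a couple of lines. One detail to flag as my assumption: two plain identity blocks would compute the sum of the children, so this sketch scales them by one half to get exactly the averaging default described here.

    import torch

    def init_averaging(dim):
        # Two half-scaled identity blocks side by side, shape (dim, 2*dim),
        # so that W @ [a; b] = (a + b) / 2 until training changes it.
        eye = torch.eye(dim)
        return torch.cat([0.5 * eye, 0.5 * eye], dim=1)

    W = init_averaging(4)
    a, b = torch.randn(4), torch.randn(4)
    assert torch.allclose(W @ torch.cat([a, b]), (a + b) / 2)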
To the extent that nothing interesting has been learned by the model, you'll get yellow along the diagonal and this sky blue in the rest of the field. And to the extent that it's learned something interesting to take out of the semantics of a child, you will start to see reds and oranges on the diagonal, and dark blues and greens in the rest of the field. So what you find is that if you train this model, it's learning about which children of a phrase are actually the important ones. These ones are saying that if you're combining together a noun phrase and a coordination, something like "the cat and", most of the semantics is to be found in "the cat", and not much of the semantics is going to be found in "and". Whereas if you're combining together a possessive pronoun, something like her or his, with a noun phrase inside it, like her tabby cat, then most of the meaning is to be found inside the tabby cat constituent. So it's actually learning where the important semantics of sentences is. And there are lots of examples of that. This one shows a variety of modification structures, where adjectives or adverbs modify either a noun phrase or an adjective phrase, or just a single adjective is modifying a noun phrase. And the thing you seem to notice is that there are particular dimensions which are capturing modification meanings: dimension 6 and dimension 11 show up in these different combinations here as capturing meaning components. So that was kind of neat. And so this slightly more complex model actually worked pretty well at capturing the meaning of phrases and sentences. So in this test here, we were giving the system a test sentence and saying, well, what are the sentences that are most similar in meaning — the nearest to paraphrases — in our corpus for this sentence? So for "all the figures are adjusted for seasonal variations", the two most similar other sentences in the corpus were "all the numbers are adjusted for seasonal fluctuations" — that's a pretty easy one — and "all the figures are adjusted to remove usual seasonal patterns". So that seems to be working pretty well. "Knight-Ridder wouldn't comment on the offer": "Harsco declined to say what country placed the order." The semantics there are a bit more different, but it seems like it's capturing something similar. "Coastal wouldn't disclose the terms" — that's a really interesting one, because that one is actually very similar in meaning, but it's expressed in a very different way in terms of the words and the syntactic structure that are used. Okay, so that was progress, because now we could have different matrices for different constituent types. But there was still some reason to think we didn't have enough power, and that was that we were still, at heart, using this very simple compositional structure where we just concatenate the two children's vectors and multiply by a matrix. So that means the two words didn't interact with each other in terms of their meaning. But it seems like we want to have them interact in their meaning, right? In particular, if you think about human languages and the kinds of things people look at in linguistic semantics, you get words that appear to be modifiers or operators. So the word very doesn't mean much by itself. I mean, it means something like strengthening, or more so, or something like that, but it doesn't really have a meaning, right? It doesn't have any denotation. You can't show me very things. You can show me chairs and pens and children, but you can't show me very things. The meaning of very seems to operate on whatever comes after it, like good: it has an operator meaning of increasing on the scale of this thing, and it can increase on the scale in either direction — you can have very good or very bad. So if we want to capture that kind of semantics, it seems like we can't do it by just concatenating two vectors and multiplying them by a matrix. It seems like what we really want to say is that very is going to grab hold of the meaning of good and modify it in some way to produce a new meaning for very good. And indeed, that's the kind of approach that's typically been taken in linguistic semantics. In linguistic theories of semantics, you would normally say, okay, good has a meaning; very is a function that takes in the meaning of good and returns the meaning of very good. And so we wanted to have a way of putting that into a neural network, and to try to come up with a new composition function to do that. There are various ways you could think about doing that, and other people have had a couple of different attempts. But essentially, what was in our heads was: we have word vectors, and if we want to say that very takes the meaning of good and returns a new meaning, the obvious thing to do is to say that very has a matrix attached to it, because then we can use the very matrix, multiply it by the good vector, and we get a new vector coming out.
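In its simplest form, that idea is just a matrix-vector product; the particular numbers here (an identity scaled up by 1.5 standing in for an amplifying "very") are purely illustrative.

    import torch

    dim = 4
    good_vec = torch.randn(dim)       # "good" as an ordinary word vector
    very_mat = 1.5 * torch.eye(dim)   # "very" as an operator: an amplifier
    very_good = very_mat @ good_vec   # very(good) = matrix times vector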
And so then, well, the problem is: which words have vectors and which words have matrices? And that's kind of hard to know the answer to. In particular, words that act as operators can often themselves be modified. You know, good is also an operator, right? From a person, you can have a good person — that's also an operator — and very is modifying that good. So the idea we came up with was: let's not try to predetermine all of this. Why don't we say that every word and every phrase has connected to it both a matrix and a vector. So here's our very good movie. For each word, we have a vector meaning and a matrix meaning, and then as we start to build up phrases like very good, they're also going to have a vector meaning and a matrix meaning. And so what we proposed was: first of all, we would like to be able to calculate the vector meanings — to work out the vector meaning of a phrase like very good. Each word has a matrix meaning, and so we're going to combine them crosswise, each word's matrix with the other word's vector. We're going to take the matrix meaning of good and multiply it by the vector meaning of very, and we're going to take the matrix meaning of very and multiply it by the vector meaning of good, and we're going to have both of those two things. And then we have a neural network layer like before that combines them — that's in the red box. Those two things are concatenated and put through the kind of neural network layer we had before, to give us a final vector meaning for the phrase. And then we also needed a matrix meaning for the phrase. For the matrix meaning of the phrase, we did this kind of simple model, which maybe actually wasn't very good, which was to say: let's just concatenate the two matrices of the constituents, multiply them by another matrix, and that's going to give us the matrix version of the parent node. And so this gave us our new, more powerful composition procedure. This did seem like it could do some good things that captured operator semantics, where one word modifies the meaning of another word. So here's a kind of neat thing we were able to do with this, where we wanted to work out the semantics of an operator modifying another word: unbelievably annoying, unbelievably awesome, unbelievably sad; not annoying, not awesome, not sad. [NOISE] This is contrasting our old model versus the new model, and the scale is a scale of sentiment, from completely negative to completely positive, all right? And the kind of contrast you get is that for not annoying, the simple model thought this was pretty negative, whereas the new model thinks it's pretty neutral in meaning, and that seems reasonably correct. And not sad means it's a little bit positive, and both models were trying to capture that. The results here are a little bit ambivalent, but they seem to go a little bit in the direction of what we want.
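Putting the pieces together, here is a sketch of that matrix-vector composition: each constituent carries a vector and a matrix, the matrices are cross-applied to the other child's vector, and the parent matrix is a simple linear map of the stacked child matrices. Dimensions and initializations are illustrative guesses, not the paper's settings.

    import torch
    import torch.nn as nn

    class MVComposer(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.Wv = nn.Linear(2 * dim, dim)                        # parent vector
            self.Wm = nn.Parameter(0.1 * torch.randn(dim, 2 * dim))  # parent matrix

        def compose(self, a_vec, a_mat, b_vec, b_mat):
            # Cross-apply: each word's matrix acts on the other word's vector.
            vec = torch.tanh(self.Wv(torch.cat([b_mat @ a_vec, a_mat @ b_vec])))
            # Parent matrix: a linear map of the two stacked child matrices.
            mat = self.Wm @ torch.cat([a_mat, b_mat], dim=0)         # (dim, dim)
            return vec, mat

    dim = 4
    c = MVComposer(dim)
    very = (torch.randn(dim), 1.5 * torch.eye(dim))  # "very" as an amplifier
    good = (torch.randn(dim), torch.eye(dim))
    vg_vec, vg_mat = c.compose(*very, *good)         # meaning of "very good"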
Yes? What is the ground truth in the "not sad" example? Oh, yeah. The ground truth was that we actually asked a whole bunch of human beings to rate the [LAUGHTER] meaning of not sad on this scale of 1 to 10. Maybe this wasn't a very clear task, because as you can see, it bounced around a lot [LAUGHTER] as to what kind of ratings we were getting for things. But yes, that was actually getting human judgments. We also then used this model to ask, well, could we do semantic classification tasks? So if we wanted to understand relations between different noun phrases — this was a dataset where there were relations marked between two noun phrases. "My apartment has a pretty large kitchen": that was seen as an example of a component-whole, a part-of relationship between the two noun phrases, and there were other relationships between different kinds of noun phrases. If it was "the movie showed wars", that was a message-topic relation, where some communication medium contains some topic. And so we were using this kind of neural network to build our meaning representations, and then putting them through another neural network layer as a classifier, to see how well we did. And we got some fairly good results on that. This was a dataset that people had worked on with traditional NLP systems and different kinds of machine learning methods. But in some sense, what we were interested in was that we seemed to be making progress in having a better semantic composition system: our old recursive neural network was getting about 75 percent, and our new one was getting about 79 percent, which we could push up further by putting more features into our system. So that was progress, but we didn't stop there. We kept on trying to come up with better ways of doing things. Even though things worked fairly well here, it seemed like this way of doing matrices wasn't necessarily very good. It had two problems. One problem was that it introduced a humongous number of parameters, because for just about everything else we've done, words have had a vector, and maybe sometimes we use quite high-dimensional vectors like 1,024, [NOISE] but that's a relatively modest number of parameters. Whereas once we introduce this matrix, we've got that number squared of additional parameters for every word. And essentially, because of that number of parameters, to be able to compute this model at all we were making the vector size small. What we were actually using were just 25-dimensional vectors, so that 25 squared, 625, stayed decently within the range in which we could compute. So that was the first problem. The second problem was that we didn't really have a very good way of building up the matrix meaning of bigger phrases. This seemed something simple we could do, but it didn't feel like a very good way of getting the matrix meaning of a phrase. So we wanted to come up with some other way of doing things that could fix both of those problems. And that led into work on recursive neural tensor networks. There's a kind of nice idea here of these neural tensors, which is an idea that's been used in other places, including work on putting vector embeddings on knowledge graphs and so on. So I wanted to show a bit of how this model works. But just to say first: a place where we applied this model was the problem of sentiment analysis.
Now, I think the term sentiment analysis has come up a few times as something you can do, and I mentioned it in the last lecture, but I think we've never really talked for five minutes in this class about sentiment analysis, so I'll give you this as an example of that. Sentiment analysis has actually been a really common and important application in natural language processing. You're looking at a piece of text and you're saying, is it positive or negative? And that's just something that's very useful for lots of commercial applications, like looking at product reviews or doing brand awareness — looking at sentiment connected to things. To some extent, doing sentiment analysis is easy, right? You can say, well, look at a piece of text: if you see words like loved, great, impressed, marvelous, then it's a positive review, and if it's saying bad and awful, then it's a negative review. And to some extent, that's the baseline of sentiment analysis: you can use either selected word features or all words in a bag of words. If you do that, you don't actually do that badly in sentiment analysis. If you have longer documents, just looking at bags of words can give you 90 percent in sentiment analysis. But on the other hand, things often do get trickier, right? This is from Rotten Tomatoes: "With this cast and the subject matter, the movie should have been funnier and more entertaining." If you pretend you're a bag-of-words model, the only words in this that are clearly sentiment-laden are entertaining and funnier, and both of those are pretty positive words, but it's fairly obvious that this is actually meant to be a bad review of the movie. So how are we meant to know that? Well, it seems again like what we have to do is meaning composition. We have to get phrases like "should have been funnier", and then realize that that's actually a negative meaning for a phrase. And so we wanted to explore how we could look at those meanings for phrases, and explore building up those meanings by doing meaning composition over trees. So the first thing we did was build a treebank of sentiment trees, where we got people to rate sentiment. And this led to the Stanford Sentiment Treebank, which is still a dataset you often see used in various evaluations with a whole bunch of datasets — indeed, it showed up in decaNLP last week. What we were doing was taking sentences, which were Rotten Tomatoes sentences about movies, parsing them to give tree structure, and then asking Mechanical Turkers to rate the different words and phrases on a sentiment scale from very positive to very negative. So lots of stuff is white because it's just not sentiment-laden, right? There are words like the, and phrases like the movie and the movie was, which don't really have any sentiment, but then you have pieces of tree that are very positive and pieces that are negative, which are shown in the blue and the red.
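Returning to the bag-of-words baseline for a moment, here is a toy version that reproduces exactly the failure mode just described; the two word lists are an illustrative lexicon of my choosing, not a real resource.

    POSITIVE = {"loved", "great", "impressed", "marvelous", "entertaining", "funnier"}
    NEGATIVE = {"bad", "awful", "dull", "boring"}

    def bow_sentiment(text):
        # The sign of the lexicon-hit difference is the prediction.
        tokens = text.lower().split()
        score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(bow_sentiment("the movie should have been funnier and more entertaining"))
    # -> "positive", even though the review is negative: the failure
    #    that compositional models are meant to fix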
Typically in sentiment datasets, people have only labeled the entire sentence, to say this is a positive or very positive sentence, or this is a negative or very negative sentence. Crucially, what we were doing differently here is that every phrase in the sentence, according to our tree structure, was being given a label for its positivity or negativity. And perhaps not surprisingly, just the fact that you have a lot more annotations like that improves the behavior of classifiers, because you can do better attribution of which words in a sentence are positive or negative. So these were the results of the preceding models. The green is a Naive Bayes model, except it not only uses individual words, it uses pairs of words. It turns out that if you're building a traditional classifier and you want to do sentiment analysis, as opposed to something like topic classification, you get a lot better results if you also use word-pair features. And that's because it does a little bit of composition for you: you don't only have features for not and interesting, you can have a feature for not interesting, and that lets you model a certain amount of stuff. And then these are our older generations of neural networks: our original tree-structured neural network and our matrix-vector one. And for these fixed models, simply having the richer supervision that comes from our new treebank moved up the performance of every model. Even for the Naive Bayes model, performance goes up about four percent, because of the fact that it now knows more about which particular words are positive or negative in the sentences. But still, none of these performances are really great. So we still thought, well, can we build better models of how to do this? In particular, if you look at sentences with various kinds of negation — things like should have been funnier — these models in general still couldn't capture the right meanings for them. And so that led into our fourth model of how to do this, which is this idea of recursive neural tensor networks. What we wanted to be able to do was go back to just having the meanings of words be vectors, but nevertheless, despite that, to be able to have a meaning for a phrase where the two vectors acted on each other. And, you know, this is the picture of what we did when we were doing attention in a bilinear way, right? We had vectors for two words, we stuck a matrix in between, and we used that to get an attention score out. So that lets the two vectors interact with each other, but it only produces one number as the output. But there's a way to fix that, which is to say: rather than having a matrix here, what we could stick here is a three-dimensional cube, which physicists and deep learning people now call a tensor — a tensor is just a higher multi-dimensional array, in computer science terms. So if we make that a tensor, it's like we have multiple layers of matrix here. And the end result of that is that we get one number here and one number here, so in total we get out a size-two vector, which is all we need in my baby example, where we only have two-component vectors for words. But in general, we have a tensor whose extra dimension is the size of our word vector, and therefore we will get a phrase vector out from the composition that's the same size as the input vectors, and that allows them to interact with each other in working out the meaning of the entire thing.
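Here is a compact sketch of that tensor layer: each output dimension k is a bilinear form a^T V[k] b, so the two child vectors interact multiplicatively, with the original W[a; b] term added alongside it. The scale of the random initialization is my guess.

    import torch
    import torch.nn as nn

    class TensorComposer(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.V = nn.Parameter(0.01 * torch.randn(dim, dim, dim))  # the tensor
            self.W = nn.Linear(2 * dim, dim)

        def compose(self, a, b):
            # One tensor slice V[k] per output dimension: a^T V[k] b.
            bilinear = torch.einsum("i,kij,j->k", a, self.V, b)
            return torch.tanh(bilinear + self.W(torch.cat([a, b])))

    dim = 8
    layer = TensorComposer(dim)
    parent = layer.compose(torch.randn(dim), torch.randn(dim))
    print(parent.shape)  # same size as the children: torch.Size([8])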
Okay. All right. So at that point, we used the resulting vectors — so we had our neural tensor network. We actually combined it together with the previous kind of layer we used to have, our first recursive neural network layer. Maybe you didn't need to do this, but we just decided to add that in as well, put things through a nonlinearity, and that was then giving us our new representation of phrases. We built that up the tree, and then at the end, we could classify the meaning of any phrase in the same kind of way with softmax regression, and we could train these weights with gradient descent to predict sentiment. And this actually worked pretty nicely. In particular, it didn't really work any better with just the sentence labels, but if we trained the model with our treebank, we could get, whatever that is, about another couple of percent in performance, and that seemed good. In particular, it seemed to do a much better job of actually understanding meaning composition. So here's the kind of sentence where you have "there are slow and repetitive parts, but it has just enough spice to keep it interesting". And the model seen here is pretty good at understanding: okay, this part of the sentence is negative, this part of the sentence is positive, and actually, when you stick the two halves together, the end result is a sentence that's positive in meaning. But focusing in a little bit more, what seemed especially good was that, for the first time, this actually did seem like it could do a better job of working out what happens when you do things like negation. So here we have "it's just incredibly dull" and "it's definitely not dull". If it's definitely not dull, that actually means it's good, right? Can we work out the meaning of "it's definitely not dull"? This is showing what happens when you have a negative sentence that's further negated. The negation of a negative thing should become moderately positive, right? Dull is negative, and if you say not dull, it doesn't mean it's fantastic, but it means it's moderately positive. And for either a Naive Bayes model or our preceding models, they weren't capable of capturing that: going from dull to not dull, the meaning computation did not come out any more positive. Whereas this neural tensor network was capturing the fact that not dull meant that it was reasonably good. So that was progress. Yes, so I think that's as much as I'll show you now about applying these tree-structured neural networks to natural language. I think the summary, as I said at the beginning, is that there are kind of interesting ideas and linguistic connections here. But for various reasons, these ideas haven't been pursued a ton in recent years of natural language processing. One reason is that, in all honesty, people have found that once you have high-dimensional vectors in things like the kind of sequence models we've looked at — whether it's the sort of LSTM models or any of the more recent contextual language models — those work incredibly well, and it's not clear that overall these tree models work better. The second reason is a computational reason, which is that GPUs work great when you're doing uniform computation.
And the beauty of having something like a sequence model is that there's just one deterministic computation you are doing along the sequence — or in a convolutional neural network, there's one deterministic computation you're doing up through your convolutional layers — and therefore things can be represented and computed efficiently on a GPU. The huge problem with these kinds of models was that which computations you were going to do depended on which structure you assigned to the sentence, and every sentence was going to have a different structure. So there was no way to batch the computations over a group of sentences and have the same computations being done for different sentences, and that undermined the ability to efficiently build these models in the large. The thing I thought I'd just say a moment about at the end: the funny thing is that, although these haven't been used much for language in the last few years, they've actually had some use and found different applications in different places, which seemed kind of cute. So this is actually an application from physics. And I think I'm just going to have to read this, since I have no idea what half the words mean. What it says is: "By far the most common structures seen in collisions at the Large Hadron Collider are collimated sprays of energetic hadrons referred to as jets. These jets are produced from the fragmentation and hadronization of quarks and gluons as described by quantum chromodynamics." Anyone know what that means? I hope you're following along here. "One compelling physics challenge is to search for highly boosted standard model particles decaying hadronically. Unfortunately, there is a large background from jets produced by more mundane QCD" — that's quantum chromodynamics — "processes. In this work, we propose instead a solution for jet classification based on an analogy between quantum chromodynamics and natural languages, as inspired by several works from natural language processing. Much like a sentence is composed of words following a syntactic structure organized as a parse tree, a jet is also composed of 4-momenta following a structure dictated by QCD and organized via the clustering history of a sequential combination jet algorithm." So anyway, with these jets, you see they're getting a tree structure over them, and they're using the tree recursive neural network to model it. Well, that's a little bit far afield, but to show you just one more example, another place where these models have actually been quite useful is for doing things in programming languages. And I think in part this is because the application is easier in programming languages: unlike in natural language, where we have this uncertainty as to what is the correct parse tree, because there's a lot of ambiguity in natural language, in programming languages the parse trees are actually pretty deterministic. So a group of people at Berkeley — Dawn Song and her students — have worked on doing programming language translation by building tree recursive neural network encoder-decoders. So you're building up a tree-structured neural network representation of a program in one language — this is a CoffeeScript program — and then you're wanting to build a tree-to-tree model which translates that to a program in a different language. And they've been able to do that and get good results.
I was too lazy to retype this table, so this is probably a bit hard to read. But what it's contrasting, for a number of programs — this is the CoffeeScript to JavaScript translation — is using tree-to-tree models versus using sequence-to-sequence models, and they also tried both other combinations, sequence-to-tree and tree-to-sequence. And what they find is that you get the best results with the tree-to-tree neural network models. In particular, these tree-to-tree models are augmented with attention, like we talked about for sequence-to-sequence models, where you're able to do attention back to nodes in the tree structure, which is a pretty natural way of doing translation. And indeed, what these results show is that if you don't have the attention operation, it doesn't work at all. It's too difficult to get things done if you're just trying to create a single tree representation and then generate the translation from that. But if you can do it with attention to the different nodes, that's great. If you know what CoffeeScript is, you might feel like, wait, that's cheating slightly, because CoffeeScript is a bit too similar to JavaScript. But they've also done it with other languages. This is going between Java and C#, and this is a handwritten Java-to-C# converter that you can download from GitHub if you want, but it doesn't actually work that well. And they're able to show that they can build a far better Java-to-C# translator doing this. So that's actually kind of cool, and it's good to know that tree-structured recursive neural networks are good for some things. I'm pleased to see work like this. Okay, I'm just about done, but before finishing, I thought I'd mention one other [NOISE] thing, which has nothing to do with natural language processing precisely, but it's about AI. I wanted to put in a little bit of an advertisement. Something that a number of us have been working on very hard for the last year or so is developing a new Stanford Institute for Human-Centered Artificial Intelligence. And actually, the launch of this institute is going to be on Monday of exam week, just when you're maximally concentrating on things such as this. But our hope is that we can have a lot of new activity around artificial intelligence, taking a much broader perspective on artificial intelligence, which centrally views it from the viewpoint of humans, and exploring a much broader range of issues that embrace a lot of the interests of the rest of the university, whether it's the social sciences and humanities, or professional schools like the law school and the business school. So let me just quickly say a minute about that. The motivating idea is that, for most of my life, AI seemed like a kind of fun intellectual quest as to whether you could write bits of software that did anything halfway intelligent. But that's clearly not what's happening for the next 25 years: we're now at this point at which artificial intelligence systems are being unleashed on society.
And well, hopefully they do some good things, but as we've increasingly been seeing, there are also lots of opportunities for them to do bad things. Even if we're not imagining Terminator scenarios, there are just lots of places where people are using machine learning and AI algorithms for making decisions. Some of the worst ones are things like sentencing guidelines in courts, where you have very biased algorithms making bad decisions. People are starting to become a lot more aware of these issues, and so effectively, we're wanting to have this institute embrace a lot of the work of social scientists, ethicists, and other people, to actually explore how to have AI that's really improving human lives rather than having the opposite effect. The three themes we're mainly emphasizing for this institute are these. The first one, in the top left, is developing AI technologies, but we're particularly interested in making linkages back to human intelligence — so, cognitive science and neuroscience. A lot of the early formative work in AI, including all of the early work in neural networks like the development of backpropagation, was actually largely done in the context of cognitive science, right? And that was a linkage that tended to get lost in the statistical machine learning emphasis of the '90s and 2000s, and I think it would be good to renew it. The top right is paying much more attention to the human and societal impact of AI, so this is looking at legal issues, economic issues, labor forces, ethics, power, politics, whatever you like. But then down at the bottom is something where it seems like there are just enormous opportunities to do more, which is: how can we build technology that actually augments human lives? To some extent, we've got technology with AI augmenting human lives — all of your cell phones have speech recognition in them now, so that's AI that can augment your human lives. But there's a sense in which not very much of artificial intelligence has actually been put into the service of augmenting human lives. Most of what a cell phone has on it is still clever and cute stuff done by HCI people and designers, which is very nice a lot of the time when you're using your map program or something, but we don't really have much AI inside these devices helping to make people's lives better. And so we're hoping, not only for individuals but in applications like health care, to be doing much more of putting artificial intelligence into human-centered applications. Anyway, that's my brief advertisement. Look out for this while you're not studying for your exams. And I think there will be lots of opportunities for students and others to get more involved in this in the coming months. Okay. Thank you very much, and I will see you later at the poster session. [APPLAUSE]
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_5_Dependency_Parsing.txt
Okay. Let's get started again. So welcome back to week three of CS224N. We've got a bit of a change of pace today after week two. This week, in week three, we're actually going to have some human language, and this lecture has no partial derivative signs in it. So we'll be moving away from working out the technicalities of neural networks and backpropagation in the math-heavy week two. In today's lecture, we want to look at what kind of structure human language sentences have, and how we can build models that build that kind of structure for sentences that we see. So first of all, I'm going to explain and motivate a bit about the structure of human language sentences. That's kind of like linguistics in 20 minutes or something. Then I'll focus particularly on dependency grammars, then present a method for doing dependency grammar parsing called transition-based dependency parsing, and then talk about how you can make neural dependency parsers. But first, a couple of announcements. Assignment two was due one minute ago, so I hope everyone has succeeded in getting assignment two out of the way. If you're still working on it, do make sure to make use of the office hours and get help with that. Coming out just today is assignment three. Assignment three is basically about this lecture. [LAUGHTER] In assignment three, what you're doing is building a neural dependency parser, and so we hope you can put together what you learned about neural networks last week and the content of today, and jump straight in to building a neural dependency parser. The other thing that happens in assignment three is that we start using a deep learning framework, PyTorch. So for doing assignment three, instruction zero — and this is in the PDF for the assignment — is to install PyTorch as a Python package and start using that. We've attempted to make assignment three a highly scaffolded tutorial, where you can start to learn how to do things in PyTorch by just writing a few lines of code at a time. Hopefully that works out for people. If you have any issues with that, you can obviously send Piazza messages or come to office hours. The one other thing you could think of doing is that there's a sort of one-hour introduction to PyTorch on the PyTorch site, where you're directed for installing PyTorch, and you could also look at that if it's helpful. Now the final mention: final projects. We're going to focus on those more in week five, but it's not bad to be thinking about things you could do if you're up for a custom final project. You're certainly encouraged to come and talk to me or the TAs. We have, under the office hours page on the website, a listing of the expertise of some of the different TAs. Since I missed my office hours yesterday, I'm going to have a shortened office hour tomorrow from 1:00 to 2:20. That's at the same time as the normal CS224N office hours, so you can come for any reason you want, but it might be especially good to come to me if you want to talk about final projects. Okay. So, let's leap in and start talking about the structure of sentences.
And so, I just want to explain something about human language sentence structure, how people think about that structure, and what kinds of goals people in natural language processing then have of building structure to understand the meaning of sentences. All of the examples I'm going to give today are in English, because that's the language you're all expected to have some competence in. But this really isn't meant to be facts about English; this is meant to be ideas of how you can think about the structure of human language sentences that apply to all sorts of languages. Okay. So in general, there are two different ways that linguists have thought about the structure of sentences, though there are some relations between them. One of them is called phrase structure, or phrase structure grammars. And if you vaguely remember from CS103, if you did that, where you spent about a lecture on context-free grammars, phrase structure grammars use the tools of context-free grammars to put structures over sentences. So I'm first going to briefly introduce that, so you've seen it, but actually the main tool we're going to use in this class and for assignment three is to put dependency structures over sentences, so I'll then go on to that. So the idea of phrase structure is to say that sentences are built out of units that progressively nest. We start off with words — the, cat, cuddly, et cetera — and then we put them into bigger units that we call phrases, like "the cuddly cat", and then you can keep on combining those up into even bigger phrases, like "the cuddly cat by the door". [NOISE] Okay, that's that. So, how does this work? Well, the idea of it — and this is the way linguists think — is to say: here's this language, which might not be English; it might be Oaxacan or some other language. What kind of structure does it have? Well, we can look at lots of sentences of the language. And so the linguist is going to think: well, I can see patterns, like the cat, a dog, the dog, a cat, et cetera. So it seems like there's one word class here, which linguists often refer to as determiners — they're also sometimes referred to as articles in English — and there's another word class here of nouns. And so, to capture this pattern here, it seems like we can make this unit that I see all over the place in the language, which is made of a determiner followed by a noun. So I can write a phrase structure grammar rule, a context-free grammar rule: a noun phrase goes to a determiner and a noun. Okay. But that's not the only thing I can see. I can also see other examples in my language of the large cat, or a barking dog, or the cuddly cat, the cuddly dog. So it seems like I need to put a bit more into my grammar. Maybe I can say in my grammar that a noun phrase goes to a determiner, then optionally an adjective, and then a noun. And then I poke around a little further and I find examples like the cat in a crate, or a barking dog by the door, and I can see lots of sentences like this. And so I want to put those into my grammar. But at that point, I notice something special, because look: here are some other things, and these things look a lot like the things I started off with.
So it seems like we have a phrase with the same expansion potential nested inside this bigger phrase, because these ones can also be expanded, right? I could have something like the green door in here. So I want to capture that in some way. Maybe I could say that a noun phrase goes to a determiner, optionally an adjective, a noun, and then something else, which I'll call a prepositional phrase. And then I'm going to write a second rule saying that a prepositional phrase goes to a preposition — that's going to be these words here — followed by a noun phrase. So I'm reusing [NOISE] my noun phrase that I defined up here. Then I can immediately generate other stuff. I can say the cat by the large door, or indeed the cat by the large crate, or the cat by the large crate on the table, or something like that. Because once the prepositional phrase includes a noun phrase, and a noun phrase includes a prepositional phrase, I've got something where I can recursively go back and forth between noun phrases and prepositional phrases, and I can make infinitely big sentences, right? So I could write something like the cat by the large crate on the large table by the door. I can keep on going and make big sentences. And I could say — I don't have space to fit it on this slide, but I've got an analysis of this according to my grammar — that's a noun phrase that goes to a determiner, noun, prepositional phrase; the prepositional phrase goes to a preposition and a noun phrase; this noun phrase goes to a determiner, adjective, noun, prepositional phrase; that goes to a preposition and another noun phrase; and I keep on going and can produce big sentences. Okay. That kind of thing then continues on, because I can start seeing more bits of grammar. So I could say, well, I can now talk to the cat. And if I want to capture this talking to a cat here, well, that now means I've got a verb, because words like talk and walk are verbs. And then for talk to the cat, it seems like what comes after the verb is a prepositional phrase. And so I could write another rule saying that a verb phrase goes to a verb followed by a prepositional phrase, and then I can make bigger sentences like that. And I could look at more sentences of the language and start building up these context-free grammar rules to describe the structure of the language. That's part of what linguists do, and different languages have different structures. So, for example, in the little grammar I've had, and in general in English, what you find is that prepositional phrases follow the verb. But if you go to a different language like Chinese, what you find is that prepositional phrases come before the verb. And so we could say, okay, there are different rules for Chinese, and I could start writing a context-free grammar for them.
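For the curious, the toy grammar sketched above can be written out as an executable context-free grammar. This uses NLTK purely as an illustration (the lecture itself doesn't), and the word lists are just the examples used so far.

    import nltk

    grammar = nltk.CFG.fromstring("""
      NP  -> Det N | Det Adj N | Det N PP | Det Adj N PP
      PP  -> P NP
      Det -> 'the' | 'a'
      Adj -> 'cuddly' | 'large' | 'barking'
      N   -> 'cat' | 'dog' | 'crate' | 'table' | 'door' | 'kitchen'
      P   -> 'by' | 'in' | 'on'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("the cuddly cat by the door".split()):
        tree.pretty_print()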
Okay, beauty. So that's the idea of context-free grammars, and actually, this is the dominant approach to linguistic structure that you'll see if you go and do a linguistics class in the linguistics department: people make these kinds of phrase structure grammar trees. But just to be contrary — no, it's not actually just to be contrary, it's because this alternative approach has been very dominant in computational linguistics — what I'm going to show you instead is the viewpoint of dependency structure. So the idea of dependency structure is that rather than having these phrasal categories, like noun phrases and prepositional phrases and things like that, we are going to directly represent the structure of sentences by saying how words are arguments or modifiers of other words, in a recursive fashion — which is another way of saying how they depend on other words. So we have a sentence, "Look in the large crate in the kitchen by the door". And if we want to, we can give these words word classes, so we can still say this is a verb, this is a preposition, this is a determiner, this is an adjective, and this is a noun. But to represent the structure, what we're going to say is: look here is the root of this whole sentence, so that's where things start. And then, well, where are we going to look? In the large crate — so that is a dependent of look. And the crate has got some modifiers: it's a large crate, so that's a dependent of crate; it's the large crate, so that's a dependent of crate. And in the system of dependencies I'm going to show you, we've also got in as kind of a modifier of crate in "in the large crate" — I'll come back to that. Well, this crate has its own modification, because it's a crate in the kitchen. So we have in the kitchen as a modifier of crate, and in and the kitchen are dependents under crate. And then we have this next bit, by the door. And as I'll discuss in a minute, well, what is by the door modifying? It's still modifying the crate: it's saying it's the crate by the door. Okay, so by the door is also a dependent of crate, and then we've got the structure of dependencies coming off of it. Okay. And so that's then the structure you get, maybe drawn a little bit more neatly, since I did that in advance, like this. And we call these things dependency structures.
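The analysis just walked through can be written down concretely as head-dependent triples. The relation labels below are loosely in the style of Universal Dependencies and are my choice; the attachment structure is the one from the lecture, with every phrase hanging off crate.

    # 0 stands for a fake ROOT token; indices are 1-based word positions.
    sentence = ["Look", "in", "the", "large", "crate", "in", "the",
                "kitchen", "by", "the", "door"]
    deps = [
        (0, "root", 1),    # ROOT  -> Look
        (5, "case", 2),    # in    <- crate
        (5, "det",  3),    # the   <- crate
        (5, "amod", 4),    # large <- crate
        (1, "obl",  5),    # crate <- Look  (look ... in the large crate)
        (8, "case", 6),    # in    <- kitchen
        (8, "det",  7),    # the   <- kitchen
        (5, "nmod", 8),    # kitchen <- crate (crate in the kitchen)
        (11, "case", 9),   # by    <- door
        (11, "det", 10),   # the   <- door
        (5, "nmod", 11),   # door  <- crate  (crate by the door)
    ]
    for head, rel, dep in deps:
        head_word = "ROOT" if head == 0 else sentence[head - 1]
        print(f"{rel}({head_word}, {sentence[dep - 1]})")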
Crucially, what we're doing here — sorry, I had two different examples [NOISE] [LAUGHTER] — what we're doing is saying what words modify other words, and that allows us to understand how the different parts of the sentence relate to each other. And so overall, you might ask: why do we need sentence structure? You know, the way language seems to work when you're talking to your friends is that you just blab something, and I understand what you're saying, and what goes on beyond that is not really accessible to consciousness. But to be able to have machines that interpret language correctly, we need to understand the structure of these sentences, because unless we know what words are arguments and modifiers of other words, we can't actually work out what sentences mean. And I'll show some examples of how things go wrong, because a lot of the time there are actually different possible interpretations. And so, in general, our goal is — you know, up until now we've looked at the meaning of words, right? We did word vectors, and we found words that were similar in meaning, and things like that. And you can get somewhere in human languages with just saying words — you can say "Hi" and be friendly and things like that — but you can't get very far with just words, right? The way human beings can express complex ideas, and explain and teach things to each other, is that you can put together words to express more complex meanings, and then you can do that over and over again, recursively, to build up more and more complex meanings — so that by the time you're reading the morning newspaper, most sentences are 20 to 30 words long, and they're saying some complex meaning, like, you know, "Overnight, Senate Republicans resolved that they would not do blah blah blah," and you understand that flawlessly by putting together the meanings of words. And so we need to be able to know what is connected to what in order to do that. And one way of showing why that's important is asking: what can go wrong? Okay. So here is a newspaper headline: "San Jose cop kills man with knife". Now, this has two meanings, and the two meanings depend on, well, what modifies what. So what are the two meanings? Meaning one? The cop stabs the guy. [LAUGHTER] The cop stabs the guy, right. So meaning one is the cop stabs the guy. What we've got here is that the cops are doing the killing — so the subject of kill is the cops; I'll just call them the San Jose cops here. And there's what they kill: the man is the object of killing. And then, on this reading, the cop is using the knife to kill the person, and so knife is a modifier here — if we're being fancy, we call it an instrumental modifier — saying the cops are killing with a knife. That's one possible analysis. Okay. Then there's the second meaning the sentence can have. [NOISE] The second meaning the sentence can have is that the man has a knife. In that case, what we want to say is, well, this word man has a noun modifier, which is saying something that the man possesses, and this dependency is the same: it's a man with a knife. Okay. And so the interpretations of these sentences that you can get depend on putting different structures over the sentences, in terms of what is modifying what. Here's another one just like that one: "Scientists count whales from space". [LAUGHTER] Okay. So again, this sentence has two possible structures, right? [LAUGHTER] We have: the scientists are the subject doing the counting, and the whales are the object. One possibility is that from space is how they're doing the counting, so they're counting the whales from space using something like a satellite. But the other possibility is that these parts are the same — this is the subject and this is the object — but these are whales from space, which we could have analyzed as a noun phrase going to a noun and a PP in a constituency grammar; in dependency grammar, we say this is now a modifier of the whales, and they are whales from space that are starting to turn up, as in the bottom example. Obviously, what you want is that this one is correct and this one is wrong. And so this choice is referred to as a prepositional phrase attachment ambiguity, and it's one of the most common ambiguities in the parsing of English, right? So here's our prepositional phrase, from space.
And so in general, when you have prepositional phrases, and before them you have verbs and noun phrases or nouns, the prepositional phrase can modify either of the things that come beforehand, right? And this is a crucial way in which human languages are different from programming languages. In programming languages, we have hard rules as to how you're meant to interpret things that dangle afterwards. So in programming languages, an else is always construed with the closest if. If that's not what you want, you have to use parentheses or indentation or something like that — I guess it's different in Python, because you have to use indentation — but if we think of something like C or a similar language, if you haven't used braces to indicate it, then deterministically, the else goes with the closest if. But that's not how human languages are. In human languages, this prepositional phrase can go with anything preceding, and the hearer is assumed to be smart enough to work out the right one. And, you know, that's actually a large part of why human communication is so efficient: we can do such a good job at communicating with each other because most of the time we don't have to say very much, and there's this really smart person on the other end who can interpret the words we say in the right way. So, if you want to have artificial intelligence and smart computers, we then start to need to build language understanding devices that can also work on that basis — that can just decide what would be the right thing for from space to modify. And if we had that working really well, we could apply it back to programming languages, and you could just not put in any braces in your programming languages and the compiler would work out what you meant. Okay. So, that's prepositional phrase attachment. It maybe seems not that hard there, but it gets worse. This isn't as fun an example, but it's a real sentence from The Wall Street Journal, actually: "The board approved its acquisition by Royal Trustco Limited of Toronto for $27 a share at its monthly meeting." Boring sentence, but what is the structure of this sentence? Well, we've got a verb here, and we've got exactly the same subject, and this noun object coming after it, as before. But then what happens after that? Here we've got a prepositional phrase, here we've got a prepositional phrase — you've got to see it: four prepositional phrases in a row. So what we want to do is say, for each of these prepositional phrases, what they modify. And starting off, there are only two choices, the verb and the noun preceding, as before. But it's going to get more complicated as we go in, because look: there's another noun here, and another noun here, and another noun here. So once we start getting further in, there will be more possibilities. Okay, so let's see if we can work it out. By Royal Trustco Limited — what's that modifying? [NOISE] Acquisition, right. So it's not the board approved by Royal Trustco Limited, it's an acquisition by Royal Trustco Limited. Okay, so this one is a dependent of the acquisition. Now we move on to of Toronto, and we have three choices: it could be this, this, or this. Okay. So, of Toronto is modifying? Acquisition. [NOISE] It's an acquisition of Toronto? [LAUGHTER] No, I think that's a wrong answer.
[LAUGHTER] Is there another guess for what of Toronto is modifying? Royal Trustco. Royal Trustco, right. So it's Royal Trustco Limited of Toronto. So this of Toronto is a dependent of Royal Trustco Limited. And Royal Trustco Limited — right, that's again a noun phrase, so it can also have modifiers that are prepositional phrases. Okay. For $27 a share is modifying? Acquisition, right. [NOISE] So now we leap right back — [NOISE] I'm drawing this wrong — now we leap right back, and it's now the acquisition that's being modified. And then finally, we have at its monthly meeting, which is modifying? [NOISE] Approved. Well, approved, right. It's approved at its monthly meeting. Okay. [NOISE] I drew that one the wrong way around with the arrow; sorry, it should have been done this way. I'm getting my arrows wrong. [NOISE] Okay. So we've got this pattern of how things are modifying. And actually, once you start having a lot of things that have choices like this, if I want to put an analysis onto this sentence and work out the right structure, I have to potentially consider an exponential number of possible structures. Because I've got this situation where, for the first prepositional phrase, there were two places it could have modified; for the second prepositional phrase, there were three places it could have modified; for the fourth one, there were five places it could have modified. That just sounds like a factorial. It's not quite as bad as a factorial, because normally, once you've leapt back, that kind of closes off the ones in the middle, and so further prepositional phrases have to be at least as far back in terms of what they modify. And so, if you get into the combinatorics of this, the number of analyses you get when you have multiple prepositional phrases is the sequence called the Catalan numbers. But that's still an exponential series, and it's one that turns up in a lot of places where there are tree-like contexts. So, if any of you are doing or have done CS228, where you see triangulation of probabilistic graphical models and you ask how many triangulations there are — that's sort of like making a tree over your variables, and that, again, gives you the Catalan series. Okay. But the point is, we end up with a lot of ambiguities. So, that's prepositional phrase attachment — a lot of those going on.
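Since the Catalan numbers came up, here is the two-line calculation, useful for convincing yourself how fast the number of attachment analyses grows.

    from math import comb

    def catalan(n):
        # n-th Catalan number: C(2n, n) / (n + 1) -- the number of binary
        # bracketings, and roughly the number of attachment analyses you
        # get as prepositional phrases pile up.
        return comb(2 * n, n) // (n + 1)

    print([catalan(n) for n in range(8)])
    # [1, 1, 2, 5, 14, 42, 132, 429]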
Or we can say: we're doing an appointment of a veteran and of the longtime NASA executive Fred Gregory. And so we can represent these two different structures by dependencies. Okay, that's one. That one's not very funny either, so here's a funnier example that illustrates effectively the same ambiguity. So, here's the president's first physical. "Doctor: No heart, cognitive issues." [LAUGHTER] Now, there isn't actually an explicit coordination word here, but in natural language, or certainly in English, you can use a comma, with a sort of list intonation, to act effectively as an "and" or an "or". So here again we have two possibilities. Either we have "issues" at the top, and the dependents of "issues" are: "no", which is actually a determiner, so it's "no issues"; and "heart" as another dependent, a noun-compound "heart issues"; and then "heart or cognitive" is a conjoined phrase inside this "no (heart or cognitive) issues" reading. But there's another possibility, where the coordination is at the top level, and we have "no heart" and "cognitive issues": at that point, "cognitive" is an adjectival modifier of "issues", the determiner "no" is a modifier of "heart", and the two are conjoined together, so "heart" has a coordinated dependency with "issues". Okay, that's one. I've got more funny ones. The next one... [LAUGHTER] Okay. So what the person who wrote this intended: here we've got an adjective-modifier ambiguity. The intended reading is that "first" is an adjectival modifier of "hand", it's firsthand experience; so "first hand" is a modifier of "experience", "job" is also a modifier of "experience", and then we have the same kind of subject and object readings on top of that. But unfortunately, this sentence has a different reading, where you change the modification relationships, and you have: it's the first experience, and it goes like this. [LAUGHTER] Okay. One more example: "Mutilated body washes up on Rio beach to be used for Olympics beach volleyball." [LAUGHTER] What are the two readings you can get for this one? We've got this big phrase that I want to put a structure over, "to be used for Olympic beach volleyball", and this is like a prepositional phrase attachment ambiguity, except that instead of a prepositional phrase being attached, we've got this big verb phrase, as we call it: when you've got most of a sentence but without any subject, that's a verb phrase. "To be used for Olympic beach volleyball" is in infinitive form; sometimes it's in participial form, like "being used for beach volleyball". And those kinds of verb phrases behave just like prepositional phrases: whenever they appear towards the right end of a sentence, they can modify various things, like verbs or nouns. So here we have two possibilities for "to be used for Olympics beach volleyball".
What the right answer is meant to be is that it's a dependent of the Rio beach, a modifier of the Rio beach. But the funny reading is that instead, there's another noun phrase here, "mutilated body", and it's the mutilated body that's going to be used, so the verb phrase would be a modifier of that noun phrase. Okay. So knowing the right structure of sentences is important for getting the interpretations you're meant to get, and not the interpretations you're not meant to get. Okay. I was using funny examples for the obvious reason, but this is essential to most of the things we'd like to get out of language most of the time. So, back to the kind of boring stuff we often work with: reading through biomedical research articles and trying to extract facts about protein-protein interactions from them. For example: "The results demonstrated that KaiC interacts rhythmically with SasA, KaiA, and KaiB." If we want to get out protein-protein interaction facts, well, we have this KaiC interacting with these other proteins over there, and the way we can do that is by looking at patterns in our dependency analysis. We can see this repeated pattern where you have the noun subject here, "interacts", with a noun modifier, and then the things beneath that, SasA and its conjoined elements KaiA and KaiB, are the things it interacts with. So we can think of these two things as essentially patterns. (I actually mis-edited this slide, sorry; this one should also be nmod:with.) We can think of these as patterns over dependencies that we could search for to find examples of protein-protein interactions that appear in biomedical text. Okay. So that's the general idea of what we want to do, and the tool we want to do it with is these dependency grammars. I've shown you some dependency analyses already; now I want to motivate dependency grammar a bit more formally and fully. So, what dependency grammar postulates syntactic structure to be is relations between lexical items: binary, asymmetric relations, which we draw as arrows (because they are binary and asymmetric) and call dependencies. There are two common ways of writing them, and I've shown both now. One way is to put the words in a line, the whole sentence, and draw these loopy arrows above them. The other way is to represent it more as a tree, where you put the head of the whole sentence at the top, "submitted", and then give the dependents of "submitted": "bills", "were", and "Brownback"; and then the dependents of each of those, so for "bills" it's "on ports and immigration"; and you get this kind of tree structure. Okay. In addition to the arrows, what we commonly do is put a type on each arrow, which says what grammatical relation holds between the two words. Is this the subject of the sentence? Is it the object of the verb? Is it a conjunct? Things like that.
We have a system of dependency labels. For the assignment, what we're going to use is Universal Dependencies, which I'll show you a little more of in a minute. And if you think, "Man, this stuff is fascinating, I want to learn all about these linguistic structures," there's a Universal Dependencies website that you can go off and look at and learn all about them. But if you don't think that's fascinating: for what we're doing in this class, we're never going to make use of the labels. All we're using is the arrows. And from the arrows, you should be able to interpret things like what a prepositional phrase is modifying, just in terms of where the prepositional phrase is connected and whether that's right or wrong. Okay. So formally, when we have this kind of dependency grammar and draw these arrows, we refer to the thing at one end as the head of the dependency, and the thing at the other end as the dependent of the dependency. And as in these examples, our normal expectation, and what our parses are going to do, is that the dependencies form a tree: a connected, acyclic, singly rooted graph at the end of the day. Okay. Dependency grammar has an enormously long history. Basically, the first famous linguist that human beings know about is Panini, who wrote in the fifth century before the Common Era and tried to describe the structure of Sanskrit. A lot of what Panini did was work out the morphology of Sanskrit, which I'm not going to touch at the moment, but beyond that, he started trying to describe the structure of Sanskrit sentences. The notation was different, but essentially the mechanism he used for describing the structure of Sanskrit was dependencies: working out what is an argument or modifier of what, the kinds of relationships we've been looking at. And indeed, if you look at the history of humankind, most attempts to understand the structure of human languages have essentially been dependency grammars. In the later parts of the first millennium, there was a ton of work by Arabic grammarians, and essentially what they used was also basically a dependency grammar. Compared to that, the idea of context-free grammars and phrase structure grammars is incredibly, incredibly new; you can basically date it exactly. There was a guy, Wells, who in 1947 first proposed this idea of constituents and phrase structure grammars, and where it then became really famous is through the work of Chomsky, who, love him or hate him, is by far the most famous linguist, and who also variously contributed to computer science. Who's heard of the Chomsky hierarchy? Do people remember that from CS103? Yeah. Okay, the Chomsky hierarchy was not invented to torture beginning computer science students. It was invented because Chomsky wanted to make arguments about what the complexity of human languages was. Okay. Yeah. So, in modern work, there's Lucien Tesnière, and he formalized the kind of dependency grammar that I've been showing you, so we often talk about his work. And it has long been influential in computational linguistics; some of the earliest parsing work in US computational linguistics used dependency grammars. But I won't go on about that more now. Okay.
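To make that "connected, acyclic, singly rooted" condition concrete, here's a minimal sketch, not from the lecture itself, of one common way to store a dependency parse: an array where heads[i] gives the index of the head of word i, with index 0 reserved for a fake ROOT (a convention I'll come back to in a moment). The example encodes one plausible UD-style analysis of "Bills on ports and immigration were submitted"; treat both the storage format and the analysis as illustrative.

```python
# A dependency parse for an n-word sentence stored as a head array:
# heads[i] is the head of word i+1 (words are 1-indexed, 0 = fake ROOT).
# Illustrative sketch only, not the assignment's data format.

def is_valid_tree(heads):
    n = len(heads)
    # exactly one word should attach directly to ROOT
    if sum(1 for h in heads if h == 0) != 1:
        return False
    # every head index must be in range, and no word may head itself
    for i, h in enumerate(heads, start=1):
        if not (0 <= h <= n) or h == i:
            return False
    # acyclicity: following head pointers from any word must reach ROOT
    for i in range(1, n + 1):
        seen, node = set(), i
        while node != 0:
            if node in seen:        # revisited a node, so there is a cycle
                return False
            seen.add(node)
            node = heads[node - 1]
    return True

# "Bills on ports and immigration were submitted"
# 1 Bills  2 on  3 ports  4 and  5 immigration  6 were  7 submitted
# One plausible UD-style analysis: Bills->submitted, on->ports,
# ports->Bills, and->immigration, immigration->ports, were->submitted,
# submitted->ROOT.
print(is_valid_tree([7, 3, 1, 5, 3, 7, 0]))  # True
```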
Just one or two little things to note. If you start looking at other papers with dependency grammars, people aren't consistent about which way to have the arrows point. There are two ways of thinking about it: you can either start at the head and point to the dependent, or start at the dependent and point at its head, and you find both. The way we're going to do it in this class is the way Tesnière did it: he started at the head and pointed to the dependent. (Whoops, sorry, I drew that one the wrong way: in "discussion of the outstanding issues", the arrow should run from the head, "discussion", to its dependent. We go from heads to dependents.) And usually it's convenient, in addition to the sentence itself, to have a fake ROOT node that points to the head of the whole sentence, so we use that as well. Okay. So, to build dependency parsers, or indeed to build any kind of human-language structure finder, including constituency grammar parsers, the central tool in recent work, where "recent" means roughly the last 25 years, has been this idea of treebanks. And the idea of treebanks is to say: we're going to get human beings to sit down and put grammatical structures over sentences. So here are some examples I'm showing you from Universal Dependencies; here are some English sentences (I think "Miramar" was a famous goat trainer or something), and some human being has sat and put a dependency structure over each sentence, and all the rest. And with the name Universal Dependencies, just as an aside: Universal Dependencies is actually a project I've been strongly involved with, and precisely its goal was to have a uniform, parallel system of dependency description which could be used for any human language. So if you go to the Universal Dependencies website, it's not only about English; you can find Universal Dependencies analyses of French, or German, or Finnish, or Kazakh, or Indonesian, lots of languages. Of course, there are even more languages for which there aren't Universal Dependencies analyses, so if you feel a big calling to say, "I'm going to build a Swahili Universal Dependencies treebank," you can get in touch. But anyway, that's the idea of a treebank. Historically, treebanks weren't something that people thought of immediately; it's an idea that took quite a long time to develop. People started thinking about grammars of languages, even in modern times, in the fifties, and people started building parsers for languages in the early 1960s. So there were decades of work in the 60s, 70s, and 80s, and no one had treebanks. The way people did this work is that they wrote grammars, either grammars like the one I showed for constituency (noun phrase goes to determiner, optional adjective, noun; noun goes to "goat"), or the equivalent kind of grammar in a dependency format; they hand-built these grammars and then had parsers that could parse sentences with them. Going into it, having a human being write a grammar feels more efficient, because if you write a rule like "noun phrase goes to determiner, optional adjective, noun," that describes a huge number of phrases, actually an infinite number of phrases.
So it's really efficient: you're capturing lots of stuff with one rule. This is the structure of "the cat", "the dog", "a large dog", all those things we saw at the beginning. But it turned out in practice that that wasn't such a good idea, and it turned out to be much better to have these treebanks of structures over sentences. It's a bit subtle as to why, because building treebanks sounds like pretty menial work, and in some sense it is, but it turns out to be much more useful. One huge benefit is that treebanks are very reusable. What effectively happened in the 60s, 70s, and 80s was that everyone who set about building a parser invented their own notation for grammar rules, which got more and more complex, and it was only used by their parser and nobody else's. So there was no sharing and reuse of the work done by human beings. Once you have a treebank, it's reusable for all sorts of purposes: lots of people build parsers from it, but other people use it as well; linguists now often use treebanks to find examples of different constructions. But beyond that, treebanks simply became necessary once we wanted to do machine learning. If we want to do machine learning, we want data we can build models on. In particular, a lot of what our machine learning models exploit is how common different structures are, so we want to know about the frequency of things. And then treebanks gave us another big thing: lots of sentences are ambiguous, and what we want to do is build models that find the right structure for a sentence. If all you have is a grammar, you have no way of telling what the right structure of an ambiguous sentence is. All you can do is say: hey, that sentence with four prepositional phrases that I showed you earlier has 14 different parses, let me show you all of them. But once you have treebank examples, you can say: this is the right structure for this sentence in context, so you should be building a machine learning model that recovers that structure, and if it doesn't, it's wrong. Okay, so that's treebanks. So how are we going to build dependency parsers? Somehow we want models that can capture what the right parse is. Thinking about it abstractly, there are different things we can pay attention to. One is the actual words: "discussion of issues" is a reasonable thing, so it's reasonable to have "issues" as a dependent of "discussion"; whereas "discussion of outstanding" sounds weird, so you probably don't want that dependency. There's the question of how far apart words are: most dependencies are fairly short-distance, though not all of them. There's the question of what's in between: if there's a semicolon in between, there probably isn't a dependency across it. And the other issue is how many arguments things take. Here we have "was completed": if you see the words "was completed", you expect there to be a subject before it, the something that was completed, and it would be wrong if there wasn't, so you're expecting an argument on that side. But on the other side, it won't have an object after it. You won't say "the discussion was completed the goat."
That's not a good sentence, right? So you won't have an object after it. So there's information of that sort, and we want our dependency parsers to be able to make use of it. Okay. So effectively, what we do when we build a dependency parse is say: each word is going to be the dependent of exactly one other word, or of the root. So "give" here is actually the head of the sentence, so it's a dependent of ROOT; "talk" is a dependent of "give"; "'ll" is a dependent of "give". For each word, we choose what it's a dependent of, and we want to do it in such a way that the dependencies form a tree. That means it would be a bad idea to make a cycle: if we said "bootstrapping" was a dependent of "talk", but then things moved around so that "talk" was also a dependent of "bootstrapping", we'd have a cycle, and that's bad news; we don't want cycles, we want a tree. And there's one final issue, which is whether we want to allow dependencies to cross or not, and this is an example of that. Most of the time, dependencies don't cross each other, but sometimes they do, and this example is an instance: "I'll give a talk tomorrow on bootstrapping." We're giving a talk, that's the object, and when it's being given is tomorrow; but the talk has a modifier, "on bootstrapping", so we actually have another dependency here that crosses the "tomorrow" dependency. That's sort of rare; it doesn't happen a ton in English, but it happens sometimes in structures like that. So the terminology is: the parse of a sentence is projective if there are no crossing dependencies, and non-projective if there are crossing dependencies. Most of the time, English parses are projective, but occasionally not, and when they're not is when you have these constituents that are delayed to the end of the sentence. You could have said, "I'll give a talk on bootstrapping tomorrow," and then you'd have a projective parse; but if you want to, you can delay that extra modifier and say, "I'll give a talk tomorrow on bootstrapping," and then the parse becomes non-projective. Okay. So, there are various ways of doing dependency parsing, but what I'm going to tell you about today is the one called transition-based, or deterministic, dependency parsing, and this is the one that's been enormously influential in practical deployments of parsing: when Google goes off and parses every web page, what they're using is a transition-based parser. This was a notion of parsing mainly popularized by Joakim Nivre, a Swedish computational linguist, and it's inspired by shift-reduce parsing. Probably in CS103 or a compilers class or something, you saw a little bit of shift-reduce parsing, and this is like a shift-reduce parser, except that when we reduce, we build dependencies instead of constituents. There's a very technical description of this that doesn't help you at all in understanding what a shift-reduce parser does, and here's a formal description of a transition-based shift-reduce parser, which also doesn't help you at all.
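Before walking through an example, here is a minimal Python sketch, an illustration rather than the assignment's starter code, of the arc-standard style of configuration (a stack, a buffer, and the set of arcs built so far) and the three actions; you can trace it against the "I ate fish" walkthrough that follows.

```python
# Minimal sketch of an arc-standard transition system (illustration only).

class Configuration:
    def __init__(self, sentence):
        self.stack = ["ROOT"]          # start with just ROOT on the stack
        self.buffer = list(sentence)   # words not yet processed
        self.arcs = []                 # (head, dependent) pairs built so far

    def shift(self):
        # move the next buffer word onto the stack
        self.stack.append(self.buffer.pop(0))

    def left_arc(self):
        # second-from-top of the stack becomes a dependent of the top;
        # the dependent is removed, the head stays on the stack
        dependent = self.stack.pop(-2)
        self.arcs.append((self.stack[-1], dependent))

    def right_arc(self):
        # top of the stack becomes a dependent of the second-from-top
        dependent = self.stack.pop()
        self.arcs.append((self.stack[-1], dependent))

    def is_done(self):
        return len(self.buffer) == 0 and self.stack == ["ROOT"]

# The "I ate fish" derivation from the walkthrough below:
c = Configuration(["I", "ate", "fish"])
for action in ["shift", "shift", "left_arc", "shift", "right_arc", "right_arc"]:
    getattr(c, action)()
print(c.arcs)       # [('ate', 'I'), ('ate', 'fish'), ('ROOT', 'ate')]
print(c.is_done())  # True
```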
So instead, let's look at this example, [LAUGHTER] because that will hopefully help you. What I want to do is parse the sentence "I ate fish". Formally, I have a start condition, three actions I can take, and a finish condition for the parse. So here's what I do. I have a stack on this side, and I have a buffer: the stack is what I'm building with, and the buffer is all the words of the sentence I haven't dealt with yet. I start the parse, and that's the instruction here, by putting ROOT, the root of my whole sentence, onto my stack; my buffer is the whole sentence; and I haven't found any dependencies yet. Okay. And the actions I can take are to shift things onto the stack, or to do the equivalent of a reduce, where I build dependencies. Starting off, I can't build a dependency, because I only have ROOT on the stack, so the only thing I can do is shift: I shift "I" onto the stack. Now, I could at this point say, let's build a dependency, "I" is a dependent of ROOT; but that would be the wrong analysis, because really the head of this sentence is "ate". So I'm a clever boy and I shift again, and now I have ROOT, I, ate on the stack. At this point, I'm in a position to do reductions that build structure, because look: I have "I ate" here, and I want to be able to say that "I" is the subject dependent of "ate", and I'll do that by doing a reduction. So I do the left-arc reduction, which says: treat the second-from-top thing on the stack as a dependent of the thing on top of the stack. When I do that, I make the second-from-top thing a subject dependent of "ate", I leave the head, "ate", on the stack, and I add this dependency to the dependencies I've built. Okay. Now, I could immediately reduce again and say "ate" is a dependent of ROOT, but my sentence is actually "I ate fish", and there's still "fish" on the buffer. So what I should do first is shift again, giving ROOT, ate, fish on the stack. And then I can say: I now want to build the thing on top of the stack as a right dependent of the thing second from the top, and that's referred to as a right-arc move. So I do a right arc: I do a reduction where I generate a new dependency, take the two things on top of the stack, say "fish" is a dependent of "ate", and keep just the head. I always keep the head on the stack, and I generate this new arc. At this point, I'm in the position where I want to say that "ate" is a right dependent of my ROOT, so I again do a right arc and make this extra dependency here. Okay. So then my finish condition for having successfully parsed the sentence is: my buffer is empty and I just have ROOT left on my stack, because that's what I said back at the start was the finish condition. Okay, so I've parsed the sentence. That worked well, but notice that I actually had different choices of when to shift and when to reduce, and I just miraculously made the right choice at each point. And one thing you could do at this point is say: well, you could have explored every choice, seen what happened, and gotten different parses.
And I could have, but if that's what I'd done, I would have explored an exponential-size tree of different possible parses, and I wouldn't be able to parse efficiently. And indeed, that's not what people did in the 60s, 70s, and 80s. Clever people in the 60s said: rather than doing a crummy search here, we can come up with clever dynamic programming algorithms and relatively efficiently explore the space of all possible parses. And that was the mainstay of parsing in those decades. But when Joakim Nivre came along, he said: "Yeah, that's true, but hey, I've got a clever idea, because now it's the 2000s and I know machine learning." What I can do instead is say: I'm at a particular position in the parse, and I'm going to build a machine learning classifier, and that classifier is going to tell me the next thing to do, whether to shift, left-arc, or right-arc. If we're only talking about how to build the arrows, there are just three actions: shift, left-arc, and right-arc. If we also want to put labels on the dependencies, and we have R different labels, there are then 2R + 1 actions, because there's left-arc-subject, left-arc-object, right-arc-subject, and so on. But anyway, there's a set of actions, and you build a classifier with machine learning that somehow predicts the right action. And Joakim Nivre showed the slightly surprising fact that you can actually predict the correct action to take with high accuracy. So in the simplest version of this, there's absolutely no search. You just run the classifier at each step: it says "what you should do next is shift" and you shift, then it says "left-arc" and you left-arc, and you run that all the way through. And he showed empirically that even doing that, you could parse sentences with high accuracy. Now, if you want to do some searching around, you can do a bit better, but it's not necessary, and we're not going to do it for our assignment. But if you're just doing run-classifier, predict-action, run-classifier, predict-action, we get this wonderful result, which you're meant to explain a bit on assignment 3: what we've built is a linear-time parser. As we chug through a sentence, we're only doing a constant amount of work for each word, and that was an enormous breakthrough. Because although people in the 60s had come up with those dynamic programming algorithms, dynamic programming algorithms for sentences were always cubic time or worse. And that's not very good if you want to parse the whole web, whereas if you have something that's linear time, that's really getting you places. Okay. So this is the conventional way in which this was done: we have a stack, we might have already built some structure, having worked out that something's a dependent of something, we have a buffer of words we haven't dealt with yet, and we want to predict the next action. And the conventional way to do that was to say: well, we want to have features.
And the kind of features you wanted were usually conjunctions of multiple things: if the top word of the stack is "good" and something else is true, say the second-from-top word of the stack has part of speech verb, then maybe that's an indicator of doing some action. So you had these very complex binary indicator features, literally millions of them, and you'd feed them into some big logistic regression or support vector machine or something like that, and you'd build parsers. And these parsers worked pretty well, but you had these very complex, hand-engineered binary features. So in the last bit of the lecture, I want to show you what people have done in the neural dependency parsing world. But before I do that, let me explain how you evaluate dependency parsers, which is actually very simple. What you do is assume, because a human wrote it down, that there is a correct dependency parse for the sentence, say "She saw the video lecture." So these are the correct arcs, the gold arcs: there's a gold arc from 2 to 1, "she saw", the subject; there's a gold arc from 0 to 2, the root of the sentence; these are the gold arcs. When we generate a parse, we propose some arcs as to what the head of each word is, and we simply count up how many of them are correct, treating each arc individually. And there are two ways we can do that. We can either, as we're going to do, ignore the labels, and that's referred to as the unlabeled attachment score: in my example here, my dependency parse got most of the arcs right but got this one wrong, so I say my unlabeled attachment score is 80 percent. Or we can also look at the labels, and then my parser wasn't very good at getting the labels right, so it's only getting 40 percent. So we just count up the number of dependencies and how many we got correct, and that's our accuracy. And in the assignment, you're meant to build a dependency parser with a certain accuracy; I forget the number now, 80-something or other, that you're meant to get to. Okay. Maybe I'll skip that. Okay. So now I want to explain a bit about neural dependency parsers and why they're motivated. I mentioned already that the conventional model had these indicator features: on the top of the stack is the word "good" and the second thing on the stack is the verb "has"; or on the top of the stack is some other word, and the second-from-top has some part of speech, and that part of speech has already been joined by a dependency to another part of speech. People hand-engineered these features, and the problems with that were, first, that the features are very sparse: each feature matches very few things. They match some configurations but not others, so the features tend to be incomplete. And there are a lot of them, commonly millions of features. And it turned out that actually computing these features was just expensive: you had some configuration of your stack and buffer, and you wanted to know which of these features were active for that configuration, so you had to compute the features for it.
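To make "complex binary indicator features" a bit more concrete, here's a rough sketch of what that feature extraction looked like. The templates here are invented for illustration; real parsers of that era used dozens of templates of this general flavor, producing millions of distinct indicator strings, each carrying its own learned weight.

```python
from collections import namedtuple

# Illustrative sketch of conventional indicator-feature templates
# (made-up templates, not any particular parser's feature set).

Token = namedtuple("Token", ["word", "tag"])

def extract_features(stack, buffer):
    s1 = stack[-1] if stack else None                 # top of stack
    s2 = stack[-2] if len(stack) > 1 else None        # second from top
    b1 = buffer[0] if buffer else None                # front of buffer
    feats = []
    if s1:
        feats.append(f"s1.w={s1.word}")               # single-atom features
        feats.append(f"s1.t={s1.tag}")
    if s1 and s2:                                     # conjunction features
        feats.append(f"s1.w={s1.word}+s2.t={s2.tag}")
        feats.append(f"s1.t={s1.tag}+s2.t={s2.tag}")
    if s1 and b1:
        feats.append(f"s1.t={s1.tag}+b1.w={b1.word}")
    # each string names one binary indicator; the classifier keeps a
    # weight for every string it has ever seen in training
    return feats

stack = [Token("ROOT", "ROOT"), Token("has", "VBZ"), Token("good", "JJ")]
buffer = [Token("control", "NN")]
print(extract_features(stack, buffer))
```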
And it turned out that conventional dependency parsers spent most of their time computing the features that went into the machine learning model, rather than doing the shifting and reducing, the pure parser operations. So that seemed to leave open a possibility: what if we could get rid of all of this stuff and run a neural network directly on the stack and buffer configuration? Then maybe we could build a dependency parser that was faster and suffered less from issues of sparseness than the conventional ones. And that was the project that Danqi Chen and I did in 2014: we built a neural dependency parser. And effectively, what we found is that that's exactly what you can do. So here are a few stats, these same UAS and LAS measures. MaltParser was Joakim Nivre's parser, the one I was showing before, and it got a UAS on this data of 89.8. And everybody loved it, and the reason they loved it is that it could parse 469 sentences a second. Other people had worked out different, more complex ways of doing parsing, with so-called graph-based dependency parsers. Here's another famous dependency parser: it was actually a bit more accurate, but at the cost of being two orders of magnitude slower. And people worked on top of that; here's an even more complex graph-based parser from the 2000s, and it's a little bit more accurate again, but it's gotten even slower. Okay. So what we were able to show is that by instead using a neural network to make the decisions of a Joakim Nivre-style shift-reduce parser, we could produce something that was almost as accurate as the very best parsers available at that time (strictly, we won over here and were a fraction behind on UAS). But not only was it as fast as Nivre's parser, it was actually faster than Nivre's parser, because we didn't have to spend as much time on feature computation. And that's almost a surprising result, right? It's not that we didn't have to do anything; we had to do matrix multiplies in our neural network. But it turned out you could do the matrix multiplies more quickly than the feature computation he was doing, even though at the end of the day, his features were just looking up weights that went into a support vector machine. So that was kind of cool. And the secret was that we made use of distributed representations, like we've already seen for words. So each word is represented as a word embedding, as we've already seen, and in particular, we use word vectors as the starting representations of words in our parser. But if we're interested in distributed representations, it seemed to us that maybe you shouldn't only have distributed representations of words; maybe it would also be good to have distributed representations of other things. We have parts of speech, nouns and verbs and adjectives and so on, and some of those parts of speech have more to do with each other than others. In particular, most NLP work uses fine-grained parts of speech.
So you don't only have a part of speech like noun or verb; you have parts of speech like singular noun versus plural noun, and different parts of speech for "work", "works", "working", the different forms of verbs. So there are sets of part-of-speech labels that cluster, and maybe we could have distributed representations of parts of speech that represent their similarity. Why not? And if we're going to do that, why not keep going and give the dependency labels a distributed representation too? So we built a representation that did that. The idea is that we take the top positions of the stack and the first positions of the buffer, and for each of those positions we have a word, a part of speech, and, if we've already built structure there, a dependency label that's already been assigned. So we've got a triple for each position, and we convert all of those into distributed representations, which we learn, and we use those distributed representations to build our parser. Okay. Now, starting from the next lecture, we're going to start using more complex forms of neural models, but for this model, we did it in a very simple, straightforward way. We said: we can use exactly the same model, exactly the same parser structure that Nivre used, doing those shifts and left-arcs and right-arcs; the only part we're going to turn into a neural network is the decision of what to do next. So our neural network is just a very simple classifier, of the kind we were talking about last week. Based on the configuration, we create an input layer: we take the stuff in these boxes, look up a vector representation for each one, and concatenate them together to produce an input representation, similar to when we were making those window classifiers and concatenated a bunch of stuff together. So that gives us our input layer. From there, we put things through a hidden layer, just like last week: we compute Wx plus b and put it through a ReLU non-linearity to get a hidden layer. And on top of that, we simply stick a softmax output layer: we multiply by another matrix, add another bias term, and that goes into a softmax, which gives a probability over our actions, whether it's shift, left-arc, or right-arc, or the corresponding ones with labels. And then we use the same kind of cross-entropy loss to say how good a job we did at predicting the action we should have taken according to the treebank parse of the sentence. So at each step of the shift-reduce parser, we're making a decision about what to do next, we're making it with this classifier, and we get a loss to the extent that we don't give probability one to the right action. And that's what we did: using the treebank, we trained up our parser, and it was then able to parse sentences. And the cool thing was that this had all the good properties of Nivre's parser, but by having it use these dense representations, we could get greater accuracy and speed than Nivre's parser at the same time.
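Here's a rough PyTorch sketch of the shape of that classifier: embedding lookups for the extracted word, part-of-speech, and label features, concatenation, one hidden layer, and a softmax over transitions trained with cross-entropy. All the sizes here are made up, and the 2014 paper differs in details (for one thing, it used a cube activation rather than a ReLU).

```python
import torch
import torch.nn as nn

# Rough sketch of a Chen & Manning (2014)-style transition classifier.
# Illustrative sizes only; the real paper differs in details.

class TransitionClassifier(nn.Module):
    def __init__(self, n_words, n_tags, n_labels, n_feats=48,
                 emb_dim=50, hidden_dim=200, n_actions=3):
        super().__init__()
        # one shared embedding table over words + POS tags + dep labels
        self.embed = nn.Embedding(n_words + n_tags + n_labels, emb_dim)
        self.hidden = nn.Linear(n_feats * emb_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, n_actions)

    def forward(self, feat_ids):
        # feat_ids: (batch, n_feats) indices extracted from stack/buffer
        x = self.embed(feat_ids).flatten(start_dim=1)  # concatenate embeddings
        h = torch.relu(self.hidden(x))
        return self.out(h)      # logits over {shift, left-arc, right-arc}

model = TransitionClassifier(n_words=50_000, n_tags=45, n_labels=40)
loss_fn = nn.CrossEntropyLoss()             # softmax + cross-entropy
feats = torch.randint(0, 50_000, (32, 48))  # a fake mini-batch of configs
gold_actions = torch.randint(0, 3, (32,))   # treebank-derived gold actions
loss = loss_fn(model(feats), gold_actions)
loss.backward()
```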
So here are some results on that. I already showed you some earlier results, showing that we outperform these earlier parsers, basically. But subsequent to us doing this work, people at Google, in these papers here by Weiss and by Andor, said: "This is pretty cool. Maybe we can get the numbers even better if we make our neural network bigger and deeper and spend a lot more time tuning our hyper-parameters." Sad but true: all of these things help when you're building neural networks, and when you're doing your final project, sometimes the answer to making the results better is to make it bigger and deeper and spend more time choosing the hyper-parameters. They also put in beam search, as I mentioned, and beam search can really help: rather than just saying "work out the best next action, do it, and repeat," you allow yourself to do a little bit of search; you consider a couple of actions and explore what happens. Quick question: "Do humans always agree on how to build these trees, and if they don't, what will be the [inaudible] agreement of humans relative to [inaudible]?" So that's a good question, which I haven't addressed. Humans don't always agree. There are fundamentally two reasons they can disagree. One is that humans mess up, because the human work that goes into this isn't perfect. The other is that they genuinely think there should be different structures. It varies depending on the circumstances, but if you just get humans to parse sentences and ask what the agreement is on what they produced, maybe you're only getting something like 92 percent. But if you then do an adjudication phase and say, "Look at these differences; is one of them right or wrong?", there are a lot of cases where one of the people effectively says, "Oh yeah, I goofed, I wasn't paying attention." So then the residual rate at which people genuinely disagree about possible parses is more like three percent. But there certainly are such cases, and that includes some of the prepositional phrase attachment ambiguities: sometimes there are multiple attachments of about the same plausibility where it's not really clear which one is right, even though in lots of other circumstances one of them is very clearly wrong. [inaudible] There's still room to do better. At the unlabeled attachment score level, it's actually starting to get pretty good, but there's still room to do better. So, beam search; and the final thing they did, which we're not going to talk about here, is a sort of more global inference to make sure the whole sequence is sensible. And so that led to Google developing these models that they gave silly names to, especially the Parsey McParseFace parsing model. That pushed the numbers up even further, so they were getting close to 95 percent unlabeled attachment score from these models. And this work has continued, since deep learning people like to optimize; [LAUGHTER] it has continued in the intervening two years, and the numbers are getting a bit higher again.
So this actually led to a new era of better parsers. Effectively, the 90s era of parsers was at around 90 percent, and going into this new generation of neural transition-based dependency parsers, we've halved that error rate; we're now down to about a five percent error rate. I'm basically out of time now, but there is further work, including at Stanford: another student, Tim Dozat, has more recent work that's more accurate than 95 percent. So we're still going on, but I think I'd better stop here today, and that's neural dependency parsing.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_2_Word_Vectors_and_Word_Senses.txt
Okay. Hello everyone. Welcome back to the second class of CS224N. Right at the end of last time, I was showing you a little of this IPython Notebook of things you can do with word vectors, but I ran out of time a bit, so I'll just spend a couple more minutes first showing the end of it. I've put this IPython Notebook up on the course page, so under lecture one you can find a copy of it and download it. I put up both an HTML version and a zip file. The HTML file is only good to look at; you can't do anything with it. So if you want to play with it yourself, download the zip file and get the IPython Notebook out of that. Okay. So we were looking at these GloVe word vectors, which I'll talk about a bit more today, and there were these basic results: similarity in this vector space works very nicely for discovering similar words. And then, going on from that, there was this idea, which we'll spend more time on today, that maybe this vector space is not only a similarity space, where close-together things have similar meaning, but actually captures meaning in a considerably deeper and more profound way: there are actually directions in the space that you can point in which have a certain meaning. Pointing in one direction means this is more so the case; pointing in a different direction in the meaning space might mean this is the capital of this country; all sorts of different meanings could be encoded in the space. And a way of testing that is to use these analogy problems. I showed this quickly at the end, but just to make sure you got it, since it's sort of a clever thing: the idea is that we start with a pair of words like "king" and "man". There's a vector for "king" in the space and a vector for "man", and what we're going to do is subtract, just good old vector subtraction that you hopefully learned in your linear algebra class: we subtract the man vector from the king vector, and the idea we have in our heads is that if we do that, we'll be left with the meaning of kingship without the man-ness. And then there's also a vector for "woman", so we can add the woman vector to that resulting vector, and we end up at some point in the vector space, and then we say: what's the closest word here? And it prints out the closest word. And as we saw last time, lo and behold, if you do that, you get the answer. I'm saying you get... king, man, woman. No? All right. [LAUGHTER] "You've got to reverse king and man." I have to reverse king and... sure, sure, sure, I'm sorry. Oops. Yeah, okay, I need to do it as man, king. [LAUGHTER] Okay, yeah, that's right. Sorry, because it should be "man is to king as woman is to something"; I was getting my order of components wrong. Okay. And, in a way that is sort of surprising, even shocking, this actually works for all kinds of things; you really can get meaning out of this space. So, I can ask various kinds of analogies.
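As a rough numpy sketch of what that analogy lookup is doing under the hood: assume a vocabulary list `vocab` and a matrix `vecs` of unit-normalized word vectors (both assumptions here, not things defined in the notebook excerpt). Gensim's most_similar with positive and negative word lists computes something very similar, with the query words excluded from the candidates.

```python
import numpy as np

# Sketch of the analogy computation "a is to b as c is to ?".
# Assumes vocab (list of words) and vecs (|V| x d matrix) exist, with
# rows unit-normalized so a dot product is a cosine similarity.

def analogy(a, b, c, vocab, vecs):
    idx = {w: i for i, w in enumerate(vocab)}
    target = vecs[idx[b]] - vecs[idx[a]] + vecs[idx[c]]
    target /= np.linalg.norm(target)
    sims = vecs @ target                  # cosine with every vocabulary word
    for w in (a, b, c):                   # exclude the three query words
        sims[idx[w]] = -np.inf
    return vocab[int(np.argmax(sims))]

# e.g. analogy('man', 'king', 'woman', vocab, vecs) would hopefully return
# 'queen' with good vectors; this mirrors what gensim's
# most_similar(positive=['king', 'woman'], negative=['man']) computes.
```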
So I can say Australia is to beer as France is to...? Wine. You might think wine; what it gives back is champagne, which seems a pretty good answer. [LAUGHTER] I'll go with that. You can do more syntactic facts: I can say tall is to tallest as long is to longest, and it gets that. If I say good is to fantastic as bad is to terrible, it seems to work out that there's some kind of "make more extreme" direction, and it gets that direction out. I skipped over one: Obama is to Clinton as Reagan is to...? You may or may not like the answer it gives for this one: as Reagan is to Nixon. Now, one thing you might notice at this point, and this is something I actually want to come back to at the end: well, there's a problem here, because "Clinton" is ambiguous, right? There's Bill and there's Hillary. And this data, as I said, is a few years old; it was built in 2014, so it definitely doesn't have Trump in it as a politician, and it would have both Clintons in various ways. But it sort of makes sense that, for 2014 data, Bill Clinton probably dominated. So I think what we're getting out of this is that Clinton and Nixon are similar as people in danger of being impeached, and that it was thinking primarily of Bill Clinton. But this brings up something I'll come back to right at the end: it looks like we've got a problem here, because we just have the literal string "Clinton", and that string covers every possible sense and referent of "Clinton". Minimally, we have Bill Clinton and Hillary Clinton; maybe you have some friends called Clinton as well; and they're all mixed together in this one "Clinton". That seems kind of problematic, and it's an issue that's been discussed a fair amount for these word vectors, and I'll come back to it. Another thing you can do is give a set of words and ask which is the odd one out; maybe you used to do puzzles like that in middle school or something. You can do that, and it decides that "cereal" is the odd one out of this set, which seems okay. And then one other thing I'll show you: it would be nice to look at these words the way I've drawn them in some of the slide pictures. So this code puts together a PCA, or Principal Components Analysis, scatter plot. I can give it a set of words and say, "draw me these as a scatter plot," and, hopefully, if I can just about fit it in, here's my scatter plot. And it works pretty well, right? I've got wine, champagne, and beer up here, then coffee and tea. Here are the countries. Here are school, college, institute, university. The animals are down here, foodstuffs there. So this two-dimensional display really does work; it basically shows you similarity. Now, to some extent you want to hold on to your wallet with these PCA displays, because, as I've discussed before, we're taking something that was 100-dimensional and doing a 2D projection of it, which captures some of the major geometry of the space but just has to be losing a huge amount of the information.
So when things end up close together, they might really be close together in the original space, or they might just be words that lost out in the 2D projection, because other patterns were more dominant and were chosen as the first two principal components. So you don't want to over-trust these things, and something to think about, if you like InfoVis, is whether there are other ways to represent the distances more accurately. But anyway, this is very simple to do: I'm just getting a PCA to reduce the dimensionality of the matrix, then transforming these word vectors with it and plotting them. That part is easy. The bit that wasn't easy for me (and if someone's got some clever Python plotting tips, I'd like one; send me a message after class): I would have thought there'd be some default way to label points in a scatter plot, but I wasn't able to find one. So what I did was plot the text, offsetting it a little bit from the points. That works kind of crappily, because the labels just collide with each other, as you can see. It would be better if there were a better way to do point labeling in Python plots, so if anyone knows the answer to that one, you can send it to me. Okay, so that's that. Ah, and if you haven't used IPython Notebooks before and don't want your computer to run really slowly, it's a good idea to halt your IPython Notebooks when you're not going to use them anymore, especially if they're computing something. Okay. So now, lecture two. For today, we're going to keep talking about things you can do with word vectors, and say a little bit at the end about word senses. In more detail: I'm going to say a bit more about Word2Vec; I'm going to take a very brief excursion into optimization; and then I want to explain a bit more of the space of what people have done and can do with dense word representations. So I'm going to say something about count-based approaches to capturing meaning and how they work; I'm going to talk for a bit about a different model of word vectors, the GloVe model, which Jeffrey Pennington, a post-doc of mine, and I worked on a couple of years ago; I'll talk some about evaluation, since a really dominant theme in a lot of what we do in natural language processing is how we evaluate things and how much we trust our evaluations; and then I'll say a little bit about word senses. I have a goal here, which is that by the end of the class, you should understand enough of the lay of the land that you could read papers about word vectors, such as the ones on the syllabus, and actually understand them: where they're coming from and roughly how they work. And, you know, if you really want to minimize work for this class, you could think, "I know everything I need to know after the first week, and I'm going to do a final project on word vectors and I'll be okay." You could actually do that; I mentioned during class a couple of recent pieces of work on word vectors. On the other hand, doing things with word vectors is a fairly mined-out area, so you're probably better off also listening to some of the later parts of the class. Okay.
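For reference, here's a minimal sketch of that PCA scatter plot, assuming `words` is a list of words and `vecs` the matching array of word vectors (both assumptions); it uses matplotlib's annotate for the labels, which, as I said, still lets labels collide, so treat it as the simple version rather than a solution to the labeling problem.

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Minimal sketch of the 2D word-vector scatter plot.
# Assumes: words is a list of strings, vecs the matching (|V| x d) array.

def plot_words(words, vecs):
    xy = PCA(n_components=2).fit_transform(vecs)  # project d dims -> 2
    plt.scatter(xy[:, 0], xy[:, 1])
    for (x, y), w in zip(xy, words):
        # offset the text slightly from each point; labels can still collide
        plt.annotate(w, (x, y), xytext=(x + 0.02, y + 0.02))
    plt.show()
```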
So, remember we had this idea of Word2Vec: an iterative updating algorithm that learns vector representations of words that in some sense capture their meaning. The way it works is that we move position by position through a corpus; at each point in time we have a center word, here "into", and we're trying to predict the words around it by having a probability distribution over the words that occur around it. And that probability distribution is defined simply in terms of the dot product of the word vectors, via the softmax function. So what we want to do is change those vectors in a way that gives good probability predictions, that gives as high a probability as possible to the words we tend to see in the context. And just to drill that in a little more: what we actually have is two matrices. For center words, we have a matrix where, for each word in our vocabulary, we have a vector. And this is probably as good a point as any to say that all the major deep learning packages, TensorFlow, PyTorch, etc., represent word vectors as rows. If you've done a bunch of math classes, that might not be what you'd expect; you might have expected the other way around; but they all put them in rows. So we have rows for our words; here, six words and a five-dimensional vector each. Okay. And then we have this outside matrix U, where we also have a second vector for each word, which is its representation in context. So when we have a particular center word, here word four, what we're computing is a dot product between v_4 and each row of U, and that gives us a vector of dot-product scores. And then we run a softmax over those numbers, element-wise, and that gives us a probability distribution over words in the context. And the thing to notice there, which hopefully you noticed last time, is that we've just got one probability distribution: in terms of what words we predict, we're predicting exactly the same probability distribution at every position. We're saying the most likely word one position to the left is, whatever it is, "house"; the most likely word two to the left is "house"; three to the left is "house"; the one to the right should be "house" too. There's no position-specific prediction; it's just an overall probability distribution over words that are likely to occur in my context. So all we're asking for is a model that gives reasonably high probability estimates to all the words that occur in the context of this word relatively often; there's nothing more to it than that. And that's part of why it's sort of surprising that, when you've got such a simplistic thing, it ends up capturing so much about the meanings of words and aspects of the meanings of words, as in the examples I was just showing you in the IPython Notebook. And there's one other thing I was going to say: the other thing that might occur to you from this is, wait a minute, there are words like "that" and "and" and "of" that occur all the time. So that means every word must have a high dot product with words like "that" and "of" to get their probabilities right. And the first answer to that is: yup, that's true.
And it turns out that all word vectors have a very strong word frequency component that reflects that. In the readings, there are two papers from Sanjeev Arora's group at Princeton, and one of those papers discusses this high-frequency effect. A crude way of actually fixing it is this: normally the first, biggest component in your word vectors is actually a frequency effect, and if you just lop it off, you can make your semantic similarities better. But there are other things that we do to deal with high frequencies. Okay, so we get these lovely spaces, some of which I've shown. But I'll make one more remark; did I say this last time? My remark anyway is that we show all these two-dimensional pictures, and they're exceedingly, exceedingly misleading. In these two-dimensional pictures, you have these effects that if, say, Samsung is close to Nokia, it has to be over here, and then it has to be far away from words that are over here, whereas you might also want the effect that Nokia is close to Finland for a different reason, and you can't do that in two-dimensional vector spaces. Most of the properties of high-dimensional vector spaces are very unintuitive, and one of the ways they're unintuitive is that in a high-dimensional vector space, a word can be close to lots of other words in different directions. Okay. So we started to talk about how we go about learning these word vectors, and I'm going to take about a five-minute detour into optimization. This isn't really an optimization class: if you want to learn a lot about optimization, you can do CS229, and if you do something like Stephen Boyd's optimization class, you can learn a lot of optimization. This is really baby optimization, but just to make sure everyone's on the same page, here are three slides. Right, so what we did at the end over there, where I apologized that my writing was too small: you'll get the chance, when doing homework two, to write that out and work it out for yourselves and learn more in the process. What we had was a cost function that we wanted to minimize, so we did our bit of calculus to calculate the gradient of the cost function with respect to our word vectors, which were our variables theta. Then what we want to say is: if we take a small step in the direction of the negative of the gradient, that will take us downhill in this space, and we want to keep doing that and head to the minimum of our space. Of course, in our high, multi-dimensional space, it might not be a nice smooth curve like this; it might be a horrible non-convex surface. But that's the idea. So essentially we're saying: we've got the old parameters, and we work out the gradient of the objective function using those old parameters. We multiply that by a small alpha, which is our step size or learning rate, because we only want to move a little bit each time. If, back here, we said downhill is this way and then said, "Great, let's go a long way that way," you could completely overshoot. So we only want to go a little bit each time.
So we normally have a small learning rate alpha, and we subtract a small multiple of the gradient from the old parameters to get our new parameters. That's effectively worked out component-wise, as shown below; we're just doing that to each of the partial derivatives, and our hope is that this will let us gradually walk down the surface. Now, if you actually did this, it would be unbelievably bad for the kind of systems that we build. There's a lot of work on clever optimization, but the most basic thing, which you definitely need to know, is this: our objective function J of theta was a function of our entire corpus, right? And to get this to work well, the first thing you want to do is collect a few billion words of your favorite language and then say, "Go and build a Word2Vec model for me." So if you have to evaluate a billion center words, and then maybe 10 billion context words if you have a window size of five, you'd have to do those 10 billion softmax calculations before you work out what your gradient is. Your computer would be computing for quite a long time before you make one little step in the gradient, and things would go so, so slowly. So no one does that in deep learning systems. What everyone does is use stochastic gradient descent. In stochastic gradient descent, in the simplest case, we sample one window, work out an estimate of the gradient from just that window, and use it as a parameter update. This is an amazingly, amazingly noisy estimate of the gradient, but it sort of doesn't matter too much, because as soon as we've done it, we choose a different center word and do it again and again, so that gradually we approach what we would have gotten if we'd looked at all of the center words before taking any steps. But because we take steps as we go, we get to the minimum of the function orders of magnitude more quickly. So this shows the simplest case, where we're just sampling one window. In practice, that's not what we normally do; we normally sample a small batch, on the order of 32 or 64. If we have a sample that's bigger than one, that's generally referred to as a mini-batch, and we calculate a gradient estimate from the mini-batch. That has two advantages. One is that you get less noisy estimates of the gradient, because you've averaged over a bunch of examples rather than using just one. But the second advantage, which is the one we really care about, is that if we want our computations to go fast on a GPU, we need to parallelize doing the same operation a whole bunch of times, and you gain a lot by using a mini-batch of 64 examples or something like that. And it turns out, from the details of the guts of the hardware, that whatever GPUs have inside them comes in powers of two, so you get better speedups if you use batch sizes like 32 or 64, rather than deciding that 42 is still your favorite number from high school [LAUGHTER] and using that as the size of your mini-batch. Okay.
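Here's a runnable toy sketch of the update rule on a deliberately simple objective (not Word2Vec itself), showing how noisy mini-batch gradient steps still walk toward the minimum; the data and learning rate are made up:

```python
# Stochastic gradient descent on J(theta) = mean of (theta - x_i)^2,
# whose minimizer is the mean of the data (here, near 3.0).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, size=1000)    # the "corpus" for this toy objective
theta = 0.0
alpha = 0.1                              # small step size / learning rate

for step in range(100):
    batch = rng.choice(data, size=32)    # a mini-batch: a noisy but cheap sample
    grad = 2.0 * (theta - batch.mean())  # gradient estimate from the batch
    theta = theta - alpha * grad         # new = old - alpha * gradient
# theta ends up close to 3.0, the minimizer of the full objective
```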
Um, here's one other interesting thing, which turns out to have some optimization details in it. Doing stochastic gradients with word vectors is actually very different from some other deep learning problems, like vision problems, because for either a single window or even a reasonably sized mini-batch, that mini-batch only has, relatively speaking, a handful of words in it, right? If you have a mini-batch of size 32 and a window size of ten, there are probably only about 100 to 150 different words in it. But we're building this model over a vocabulary of a quarter of a million words or something like that, so just about all of the elements of this gradient are zero. We really have a very sparse parameter update, and that suggests we probably want to update only the word vectors that appear. Then the question is whether you can achieve that, right? The dumb way to do it is to have this matrix that's nearly all zeros and say: add those two matrices together, and there you go. The question is whether you can instead have a sparse matrix update which only updates the particular rows of the matrix containing the words that occurred, and do things much faster. And if you're doing something even cleverer, like distributed computation over multiple computers with shared parameters, then you definitely only want to update the word vectors that you've actually gotten a gradient estimate for. So there are some details there, but I'm going to skip past most of them.
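For the curious, the row-only update looks roughly like this. This is a sketch with made-up sizes and stand-in gradient values; in a real system, the row indices and gradients come out of the batch's gradient computation:

```python
# Rather than adding a mostly-zero gradient matrix to U, touch only the
# rows for words that actually appeared in the mini-batch.
import numpy as np

V, d = 100_000, 50
U = np.zeros((V, d))          # the word vector matrix
alpha = 0.025

rows = np.array([3, 17, 42])  # the handful of word indices in this mini-batch
grad_rows = np.ones((3, d))   # their gradient rows (stand-in values)
U[rows] -= alpha * grad_rows  # NumPy fancy indexing: updates only those rows
```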
Right. So, a couple of people asked afterwards: why are there these two word vectors, the center one and the outside one? And the answer is that it makes the math I showed you easy. If you do it as I showed you, then working out the partial derivatives for the center word is easy, just as I showed you. But if you use only one set of word vectors, then the same word that's the center word will be one of the choices for the context word when you're working out the softmax for the context words, and then you get terms that are squared, because the same word is referenced twice, and that makes your math more difficult. So it's just a practical thing in the end. It doesn't make very much difference, because if you think about it, since you're going along through all the positions, what was a center word at one point is immediately afterwards a context word of what used to be a context word, which is now the center word. So you're doing much the same computations all over again, because the dot product is symmetric, and the two vectors get pretty similar representations. It seems like in general you can get the best results by averaging your two vectors, and you end up with just one vector per word. Okay, more substantively: if you go to the word2vec paper, you'll discover that there's more to word2vec; they define a sort of family of word2vec models, and there are two main choices in that family. Firstly, there's a choice between the Continuous Bag of Words model and the skip-gram model, and what I presented was the skip-gram model. In the skip-gram model, you've got one center word and you're trying to predict all the words in the context, one at a time. The Continuous Bag of Words model is the opposite: you've got all of the outside words, and you're trying to use all of them, though considered independently, like a Naive Bayes model, to predict the center word. And then the second choice: the way I presented learning was the method using the so-called naive softmax. When we wanted probability estimates for the context words, we just summed over the whole vocabulary and came up with those probability estimates. In practice, that turns out to be a bad idea, because it makes things mega slow. So in homework two, coming up next week, you will get to implement a much more practical way of doing this, which they present in the word2vec papers. The problem is that in this equation we used to do the calculus, down in the denominator we're summing over the entire vocabulary. So if you have a vocabulary of a quarter million words, we're doing a quarter of a million dot products and exponentials and adding them all up to work out that denominator, and that seems like a really bad idea if you want things to be fast. So Tomas Mikolov and colleagues came up with the idea that negative sampling would be near enough. The idea of negative sampling is that we're going to train binary logistic regressions instead. We train one binary logistic regression for the actual word observed, what's in the numerator, and we want to give high probability to the word that was actually observed. Then we randomly sample a bunch of other words; they're the negative samples, and we say they weren't the ones actually seen, so you should be trying to give them as low a probability as possible. The notation they use in the paper is slightly different from the one I've used: they do maximization, not minimization, and that's the equation, which I'll come back to. Before we do that, here's the sigmoid function, normally written as one over one plus e to the minus x. Essentially, the sigmoid function is like a binary case of the softmax function: there are two possible outcomes, yes or no, and you've got an input that is any real number, and it's mapped onto a probability between zero and one representing those two binary outcomes. To the extent that the number is positive, it heads up toward one, and negative goes down toward zero. Okay, so this time, for the good word, we take the dot product of the two vectors, shove it through our sigmoid function, and then we want that probability estimate to be as high as possible. If I show you this version, written slightly differently to look as much as possible like the notation we used last time, here is our new objective function for negative sampling. We've got two terms: the first one is the log of the sigmoid of the observed context word, the outside word, dot-producted with the center word, and we're going to want that to be big.
And then, on the other hand, we've got the randomly chosen K words, which are just other words, and we work out dot products between them and the center word, and we want those to be as small as possible. Note the extra minus sign in there, which flips the sign between the two terms; those are our negative samples. K can be a reasonably modest number: you can take around 10 or 15 negative samples and that works pretty fine. I said we sample some words to be the negative samples; in particular, they propose a sampling distribution that helps them along a little in partly dealing with this problem of very frequent words. The starting point for sampling words is what we call the unigram distribution: you take a large corpus and count up how often each word occurs, just as counts of independent words; those are called unigram counts. You start off with unigram counts, but then you raise them to the three-quarters power. Raising to the three-quarters power has the effect of decreasing how often you sample very common words and increasing how often you sample rarer words. Okay, and that's that; that's everything about word2vec I'm going to say. Anyone have any last questions? Yes. Ah, sorry, the Z. That capital Z is often used as a normalization term, so this is saying: to get a probability distribution over words, you work out this three-quarters power of the count for every word in the vocabulary, you sum those numbers up over the vocabulary to get a total, and we divide by that total so we get a probability distribution. Good question, because I hadn't explained that. In this class, when you see the letter Z with no explanation, it normally means "I am a normalization term to turn things into probabilities": you iterate the numerator term over the vocabulary, sum them, and divide through. Any other questions about things I haven't explained, or otherwise? Yes. "So the window [inaudible], that's a [inaudible]?" Yeah, so: what size window do you use? I'll come back to that in a bit and show a little bit of data on it, but we haven't done anything about that. At the moment we're guessing a window size like five, which isn't a bad one, but there hasn't really been any science behind it. People treat it as what's called a hyperparameter, which means that you try a few different numbers, see which one seems best, and that's the one you use in your future work. Yeah. "Was the three-quarters power chosen for any theoretical reason, or just because it seems to work in practice?" Um, no; that was also chosen as a hyperparameter, and it improved performance. Actually, for this Word2Vec paper, it turns out that in the actual paper the model looks fairly clean, but what people discovered when they started digging through the code (which, to their credit, they did make available: reproducible research) is that there are actually a whole bunch of tricks, different things like these hyperparameters of how you sample and how you weight windows, to make the numbers better.
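As a concrete sketch of what was just described, here is the negative-sampling loss for one center/outside pair plus the three-quarters-power sampling distribution; all the vectors and counts below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 50, 10
v_c = rng.normal(size=d)          # center word vector
u_o = rng.normal(size=d)          # outside vector of the observed context word
U_neg = rng.normal(size=(K, d))   # outside vectors of K sampled negative words

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Maximize log sigmoid(u_o . v_c) and log sigmoid(-u_k . v_c) for the
# negatives; written as a loss to minimize, we negate the whole thing.
loss = -(np.log(sigmoid(u_o @ v_c)) + np.log(sigmoid(-(U_neg @ v_c))).sum())

# Sampling distribution for the negatives: unigram counts raised to the 3/4
# power, normalized by their sum (the Z term) to give probabilities.
counts = np.array([120.0, 40.0, 8.0, 2.0])   # made-up unigram counts
P = counts ** 0.75 / (counts ** 0.75).sum()
negatives = rng.choice(len(counts), size=K, p=P)
```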
So, as I was saying, people play quite a few tricks to make the numbers go up which aren't particularly theoretical. Are we good? Yeah. [A partly inaudible question about reusing the data.] Ah, sometimes. In general, for a lot of these sampling things, if you're going to be doing multiple passes, it's a bad idea to just go bloom, bloom, bloom and then bloom, bloom, bloom again in the same order. But a common technique a lot of the packages use is a shuffling operation at the beginning: for each epoch, they shuffle the data randomly and then go through it in sequence. That has the benefit of faster computation from locality, et cetera, while meaning that when you do a different epoch, it works out differently. Yeah. That last question, I think, was about taking the mini-batches from the corpus: contrasting whether you sample, say, 20 windows randomly from the whole corpus versus just working from left to right. Yes, do you have a question? [inaudible] Yeah. So, you could argue whether or not this was written in the clearest way, but right: we're taking this dot product and then we're negating it, which flips which side of the space we're on, because the sigmoid is symmetric around zero. So if we've got some dot product and then we negate it, we're working out a one-minus probability, and that's the way in which, for the first term, we want the probability to be high, and for the negative samples, we want their probability to be low. Okay, I'd better rush ahead now. So this was an algorithm where you go through the corpus position by position, do this prediction of words, update some parameters, and learn something, and, by Jove, it seemed to work, based on what we saw in the examples. But you might have thought that was kind of weird, right? Look, we have this whole big pile of data. In traditional statistics thinking, you have a big pile of data and you aggregate it, and it seems like there are obvious things you could do here. You could say: well, there's a word, whatever word we're using, banana. Let's just see what words occur in the context of the word banana and count them all up, and then we'll be able to use those to predict somehow. And those kinds of methods were traditionally used, including even with distributed representation techniques. So I want to say a bit about that, so you're fully educated and don't sound like one of those people who are aware of no work that happened before 2013, when neural networks took off. Okay. So what we could do is essentially the same thing as Word2Vec: we could say there's a five-word window around each word instance, which is often referred to as a word token. In NLP, we often want to distinguish between a particular type, like banana or apple, and particular instances of it in the text; that's referred to as the type-token distinction. So we could look at each token of a word and the five words around it, and then start counting up which words occur with it, and we can then have a matrix of co-occurrence counts. Okay. I'm going to give an example of this.
So, normally you'd again use a window of five to ten, but I'm going to use a window of one to keep my counts very simple and small, and I ignore left versus right, just like Word2Vec did. So if I have a teeny baby corpus like this, what I can do is just say: here is the matrix of word co-occurrence counts. Within my window size of one, "I" occurs next to "like" twice, and that means "like" occurs next to "I" twice (it's symmetric), and all my other counts here are singletons. So this gives me a big, huge, sparse matrix of word co-occurrence counts. One thing you could do is just use this matrix directly. I haven't really got enough data here, but if you decided that, say, the word "like" is like the word "enjoy", what you'd expect is that those two row vectors would end up kind of similar to each other. And they do. So you can measure similarity of the vectors directly in terms of these co-occurrence counts. But it's a little bit unappealing doing things this way, right? If you have a quarter-million-word vocabulary, then (my mental math may be off, but) that's tens of billions of cells in this matrix, which might require a lot of storage, though if you're clever and notice that most of the cells are zero, a clever sparse matrix representation might take a little bit less. And your classification models might have sparsity issues, because a lot of those cells aren't present, so they might not be very robust. So the traditional answer to all of these things is: maybe we could take that big co-occurrence count matrix and somehow reduce its dimensionality, just find a corresponding low-dimensional matrix which preserves most of the information in the original matrix, and maybe we'll reduce things to a dimensionality somewhere around 25 to 1,000, as is done with Word2Vec. There's a standard, most common way of doing this dimensionality reduction, and you don't really have to understand all the math, but you get to play with it in homework one. For any matrix, you can do what's called the singular value decomposition, which is a way to take an arbitrary matrix and decompose it into three matrices, where the center one is diagonal and has in it what are called singular values, which are weightings of the different dimensions; they decrease in size as you go down the diagonal. The other two, U and V, are orthogonal bases corresponding to the rows and columns. In our case it's even simpler, because we have this word-word matrix, which is square, so the two bases are effectively the same. In the general case, although you get these full orthogonal bases, there are bits that don't really matter, because they end up being used for nothing when you work out the product. And then, if you want to reduce the dimensionality, what you say is: throw away the smallest singular values (remember, they're in decreasing size), and that means you're effectively throwing away rows and columns of the other matrices.
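Here's a runnable sketch of the whole pipeline just described: build a symmetric window-1 co-occurrence matrix from a teeny corpus, then factor it with an SVD. The three-sentence corpus is an assumed stand-in for the one on the slide:

```python
import numpy as np

corpus = ["I like deep learning .".split(),
          "I like NLP .".split(),
          "I enjoy flying .".split()]        # a teeny, made-up corpus

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

X = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i in range(len(sent) - 1):           # window of 1: only adjacent pairs
        a, b = idx[sent[i]], idx[sent[i + 1]]
        X[a, b] += 1
        X[b, a] += 1                          # symmetric: ignore left vs. right

U, S, Vt = np.linalg.svd(X)                   # singular values in S, decreasing
k = 2
word_vectors = U[:, :k] * S[:k]               # keep only the top-k components
```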
And then you say: behold, I've now reduced these things to a two-dimensional representation from the original three-dimensional representation. That's referred to as the reduced SVD, and the classic result, in terms of least squares error in estimation, is that the product of these three things gives you X_k, the best rank-k approximation to the original X under a least squares criterion. So we could do this, and we could build word vectors. I can make use of NumPy's SVD function, throw matrices into it, and make word vectors. And these ones look really bad, but hey, I gave it a dataset of three sentences [LAUGHTER], so it's not exactly a fair comparison. This technique was popularized around the turn of the millennium. For word applications, it generally went under the name of latent semantic analysis or latent semantic indexing, and the idea was that you could have these semantic directions that you're finding in this low-dimensional space that had meaning. People worked with it quite a bit for things like trying to do information retrieval using these LSA approximations, and it sort of worked a bit; it kind of never really worked very well, I think, and so it never hugely caught on. But the methods continued to be explored, actually mainly in the cognitive psychology community, where people were doing things with word meaning. And there's a kind of interesting thread to the literature here. There was this guy Doug Rohde, who did a PhD at CMU in 2005, and basically what he discovered was: look, if rather than just using raw counts, I start doing quite a bit more in terms of fiddling with the counts, I can start to produce results that are much better. So, rather than using raw counts, you have to do something to deal with those very high-frequency words. One idea is to log-scale them, which is also commonly used in information retrieval. Another idea is to use something like a ceiling function: you take min(X, t), with t set at some number like around 100. He also used an idea which was another of the hacks put into Word2Vec: rather than treating the whole window the same, you should count words that are closer more. (In Word2Vec, they sample closer words more often than further-away words; in his system, you have a differential count for closer words, et cetera.) And then, going beyond that, rather than using counts at all, he started using Pearson correlations, which helped, except that they're sometimes negative, and he decided it helped if you then got rid of the negative values. In some sense, this sounds like a bag of hacks, but on the other hand, he was able to show that these transformed counts could actually give you very useful word vectors, as I'm about to show. And we should realize that, in slightly different forms, several of these exact same tricks are actually used in Word2Vec as well. [A partly inaudible student question.] Yeah, I'm about to show exactly that; that's actually a really interesting little bit of the data. So if you do that, you not only get pretty good word similarities.
Let me show you this example, which is cleaner. The precise idea of evaluating with analogies had not really been developed yet; that was actually something Tomas Mikolov later suggested. But Doug Rohde made this really interesting observation. He said: look, once I do these kinds of transformations to improve the semantic representation of my word vectors, a really interesting property emerges, which is that there are semantic vectors that are basically linear components in my carefully constructed space. So here we have the sort of verb-to-doer-of-the-verb direction: drive, driver; clean, janitor; swim, swimmer; teach, teacher; treat, doctor; pray, priest. It's not exactly perfect, there's a little bit of wiggle there, but roughly it's completely clear that there's a direction in the space that corresponds to going from a verb to the doer of the verb. Now, no one had yet thought of this idea of doing the analogy tests, but the thing that's obvious in retrospect is that if you can construct a vector space that has this linearity property, then you're definitely going to do well at analogies. So effectively, he had invented a vector space that would do well at analogies, because this means you've got this direction which is the doer direction, and you can immediately get the doer vector by subtracting clean from, right, it's clean from janitor. Then we can add it on to swim, and we'll get somewhere close to swimmer. And his space actually did do that. So the moral, in some sense, is that if you do carefully controlled counts and so on, conventional methods can also give you good word vector spaces. And that was actually the starting-off point for our work on GloVe. Essentially, there had been these two schools of work. There was the school, explored more in cognitive psychology than anywhere else, which was based on counting and transforming counts. It had, or seemed to have, some advantages: you're making efficient use of statistics, since you're using the global statistics of the whole matrix directly to estimate things. But up until then, it had really only been used to capture word similarity, and a lot of it had suffered from disproportionate importance given to large counts, though Doug Rohde had started to show how to solve both of those problems. On the other hand, there were these neural network methods, which are direct prediction methods, defining that probability distribution and trying to predict the words that occur. They had some advantages too: the fact that you're sampling means you're hopefully not going to run out of memory (I know we've had some memory problems with homework one, but in principle you're not in as bad a memory position as when you have to construct a huge matrix, because you're going through the corpus linearly). But since you're doing it sample by sample, it's an inefficient use of statistics. And, on the other hand, Mikolov's work performed, not perfectly, but really well.
So this led into the work that Jeffrey Pennington, Richard Socher, and I did: can we combine these ideas and get some of the goodness of the neural net methods while doing things with some kind of count matrix? In particular, we wanted to get, in a slightly less hacky way, the result that components of meaning are linear operations in the vector space, so that they're just something like vector additions. The crucial observation of this model was that we could use ratios of co-occurrence probabilities to encode meaning components. The idea here is: if you have a word like ice and you ask how often things co-occur with it, well, solid should co-occur a lot and gas shouldn't; but water is also going to co-occur a lot, and some random word won't occur much. If you have steam, you get the opposite pattern for solid and gas. The thing to notice is that it's not enough to just look at "large" by itself, because large appears both here and here, and small appears there and there; the interesting thing is the difference between these components, and that's what indicates a meaning component. And we can get at that by looking at the ratio of co-occurrence probabilities: for solid versus gas, the ratio picks out a dimension of meaning, whereas for other words, like water and the random word, the ratio cancels out to about one. In this slide, I've moved on from my "small" and "large": these are actual counts from a corpus, and we roughly get what we hoped for, with solid and gas picking out the dimension of meaning and the others coming out at about one, because they're not on that dimension of meaning. So it seems like what we want is for ratios of co-occurrence probabilities to become linear in our space; then we're in good business. And that's what we set about doing. Well, how can you do that? If you can make the dot products equal to the log of the co-occurrence probability, then immediately you get the fact that a vector difference turns into a ratio of the co-occurrence probabilities. So, essentially, the whole of the model is that we want dot products to equal logs of co-occurrence probabilities, and that's what we do. Here is our objective function, and it's made to look a little bit more complicated than that, but essentially we've got this squared loss, where we want the dot product to be as similar as possible to the log of the co-occurrence count; there'll be loss to the extent that they're not the same. We complexify it a little by putting in bias terms for both of the two words, because maybe a word is just overall common and likes to co-occur with things, or uncommon and doesn't. And then we do one more little trick (because everyone does tricks to make the performance better): we also put this f function in front, so that we're capping the effect that very common word pairs can have on the performance of the system. Okay. And so that gave us the GloVe model of word vectors.
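Here's a sketch of that objective in code: a squared loss between the dot product (plus biases) and the log co-occurrence count, weighted by f. The weighting function shape follows the GloVe paper; the matrices and counts below are made-up stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 8, 4
W, W_tilde = rng.normal(size=(V, d)), rng.normal(size=(V, d))  # two vector sets
b, b_tilde = np.zeros(V), np.zeros(V)                          # per-word biases
X = rng.integers(0, 5, size=(V, V)).astype(float)              # stand-in counts

def f(x, x_max=100.0, alpha=0.75):
    # weighting function: grows with the count, capped at 1 above x_max
    return np.minimum((x / x_max) ** alpha, 1.0)

loss = 0.0
for i, j in zip(*X.nonzero()):                 # only pairs that actually co-occur
    inner = W[i] @ W_tilde[j] + b[i] + b_tilde[j] - np.log(X[i, j])
    loss += f(X[i, j]) * inner ** 2            # squared loss, frequency-weighted
```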
Theoretically, the interest of this was that a lot of the preceding literature had these count methods and these prediction methods, and the hope was that this could unify the two, by showing how you could have a method that is estimated simply off a count matrix, but trained with the same kind of iterative, loss-based estimation that's used for the neural methods, to get good word vectors. And this also worked to give good word vectors. Here are GloVe results for the word frog: frogs and toad are obvious, but then there are these rarer words, various kinds of pretty tree frogs and things like that. Okay. So I'll go on from here and say a little bit more about some of the work on evaluating word vectors, and this is maybe also a chance to talk a little about evaluation altogether. Normally in NLP, when we do evaluation, the first distinction that comes up is intrinsic versus extrinsic evaluation. Normally, if there's something we're trying to do, like model word similarity with word vectors, or put parts of speech on words, we can just have an intrinsic evaluation of how good a job we did. Are you guessing the right part of speech? Are you putting synonyms close together? That's normally very easy and fast to compute, and it's useful to do because it helps us understand the system. On the other hand, a lot of the time with those intrinsic evaluations, it's not very clear whether having done well on that task is really going to help us build the amazing natural language understanding robots that we so ardently desire. So people are also very interested in extrinsic evaluations. Extrinsic means saying: well, suppose you use this new stuff in a real system; does performance go up? It's then sort of definitional what counts as a real system; normally that means some application that human beings actually care about and like to use, something like web search, or question answering, or a phone dialogue system, such that you can put your stuff into that system and the numbers go up. That seems like what you want: stuff that works in real tasks. Of course, on the other hand, a lot of things are a lot harder then. It's much more work to do such an evaluation and run different variants of a system, and whether the results are poor or great, sometimes it's hard to diagnose why. If your great new word vectors don't work better in the system, it might be for some extraneous reason about how the system was built that's hiding all your magic, and if you just changed the rest of the system, it would suddenly show its good effects. So it's kind of hard to do an apportionment of goodness and badness. Okay. So today I'm mainly going to say a little more about these intrinsic word vector evaluations. We've talked quite a bit about these analogies. If we're actually working out the analogies, it turns out that normally what people do is work out a cosine distance, an angle, between different word candidates to find the word that solves the analogy, which is a sort of tiny wrinkle of difference there. And there's one other trick that people commonly use: they forbid the system from returning one of the three words you put into the analogy. Okay. But nevertheless, this is something you can evaluate.
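As a sketch of that standard evaluation recipe ("a is to b as c is to ?"), here it is with the two tricks just mentioned: cosine similarity to b - a + c, and the input words forbidden. The `vectors` argument is an assumed word-to-array mapping:

```python
import numpy as np

def analogy(a, b, c, vectors):
    """Solve 'a is to b as c is to ?' by cosine similarity to b - a + c,
    excluding the three input words from being returned."""
    target = vectors[b] - vectors[a] + vectors[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word, vec in vectors.items():
        if word in (a, b, c):                      # the common exclusion trick
            continue
        sim = (vec @ target) / np.linalg.norm(vec)  # cosine similarity
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# With good vectors, analogy("man", "king", "woman", vectors) lands near "queen".
```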
Here now are some GloVe visualizations, and they show exactly the same kind of linearity property that Doug Rohde discovered, which means that analogies work, sort of by construction, because our vector space wanted to make meaning components linear. So this one is showing a gender dimension; this one shows companies and their CEOs, kind of cool; and you can also do more syntactic facts, so this one shows positive, comparative, and superlative forms of adjectives. Yeah. So, Tomas Mikolov came up with this idea of doing these analogy tasks, and he built a dataset with a lot of analogies in it. It's a bit of a weirdo dataset, because it tests a few random different things, which may have been things his system worked well on, but it tests countries and capitals, countries, cities and states, countries and currencies; so there are a bunch of semantic things it tests. And then there are some syntactic things it tests, like bad, worst and fast, fastest for superlatives. But, you know, even some of the ones I was showing before, like the Obama-is-to-Clinton kind, aren't actually in this evaluation set. Here's a big table of results that comes from our GloVe paper. Not surprisingly, the GloVe model performs best in this evaluation, because it was our paper. [LAUGHTER] But perhaps the things to start to notice are these. If you just do a plain SVD on counts, it works abominably badly for these analogy tasks. But, kind of as Doug Rohde showed, if you start doing manipulations of the count matrix before you do an SVD, you can actually start to produce an SVD-based system that performs quite well on these tasks, not badly against the other things. Other things you will discover: right at the top there are 100-dimensional models, at the bottom there are some 1,000-dimensional ones, and others are 300-dimensional. At least when you're training on a big amount of text, bigger dimensionality definitely works better, and I'll come back to that in a minute. The amount of text makes a difference as well: we go from 1 to 1.5 billion words at the beginning, while these ones down here are trained over 42 billion words of text, and perhaps unsurprisingly, the 42-billion-word ones work better. So it's big data. Here are a couple more graphs from this paper. This one plots dimensionality against performance: of the three lines, the green one is the semantic analogies, the blue one is the syntactic analogies, and red is the overall score. What you see is that up to dimensionality 300, things are clearly increasing quite a bit, and then it gets fairly flat, which is precisely why you find a lot of word vectors out there of dimensionality 300. This one shows window size, the symmetric both-sides window size we talked about, as it goes through 2, 4, 6, 8, 10. What you see is that if you use a very small window like two, the syntactic prediction is relatively stronger, because syntactic effects are very local, whereas as you go out, the semantic prediction gets better and better. Actually, the syntactic gets a bit better as well, but it's especially the semantic that gains. The right graph shows that if you only use context on one side, your numbers aren't as good.
Okay, so I just wanted to sneak in little cameos of a couple of recent bits of work, as a first taste of the things people are doing with word vectors. This one was actually by two Stanford people. Now, the best story would be if I could say that this was a final project in this class last year, but unfortunately that's not true; this paper has nothing to do with this class. [LAUGHTER] Zi Yin and Yuanyuan Shen had some clever and very mathy ideas using matrix perturbation theory, showing how dimensionality in word vectors feeds into the bias-variance trade-off, if you've seen that in other parts of machine learning. I'm not even going to attempt to explain their paper, but here it is; they did really well with this paper, and they got an oral talk at NeurIPS from it. There's an interesting result you see with these word vectors, which is in a way kind of surprising. This shows word vector dimensions from zero up to 10,000, way higher than we talked about before. What you discover, which people have known for ages, is that there's a little blip somewhere around 200 or 300 which seems to optimize performance, so people use those sizes. But the thing they were doing a lot of their theory about, and it's kind of surprising, is this: surely, if you use a humongous, humongous number, like 10,000-dimensional vectors, you're trying to estimate another two orders of magnitude more numbers for every word, and surely things should just fall apart, because you've got hopelessly many parameters relative to the amount of training data you're estimating these numbers from. The interesting result they show is that things don't fall apart: you can essentially go out to these huge, huge dimensionalities, and the performance stays flat. And they've got a lot of theory predicting why that's actually going to end up being the case. Yeah. So, on training these models iteratively: orange is showing GloVe training, and the models keep on getting better for a while. So, you know, go to sleep and see in the morning how it's doing; if you were running it for 24 hours, your numbers are better than if you only ran it for six hours. That's true for a lot of deep learning models, and it's a key reason why you don't want to start your assignment the night before it's due: even if you program it perfectly, you might just not have enough time for it to run, so that you produce good numbers at the end of it. Okay. A couple more things on this. What are we showing here? These are again semantic, syntactic, and overall numbers, and there are two things being mixed together here. One is that if we just look at the overall numbers, they're highest over here, with this 42-billion-token Common Crawl web-pages corpus; that gives us the highest overall number. But there's something else interesting in this graph, which is that using Wikipedia works really well. You actually find that 1.6 billion tokens of Wikipedia work better than 4.3 billion tokens of newswire newspaper article data.
And I think that actually makes sense: the job of an encyclopedia is to explain concepts and how they relate to each other, right? Encyclopedias are much more expository text, showing all the connections between things, whereas newspapers in general aren't trying to lay out how things fit together; they're just telling you who got shot dead last night or something like that. So this is an interesting fact: this Wikipedia data is differentially useful for making word vectors. And in fact, we did very well with our GloVe word vectors, and lots of people use them; I think one of the reasons they work so well is that the original word2vec vectors that Google distributes are built only on Google News data, whereas ours have this Wikipedia data inside them. Okay, rushing ahead. So, there's all the work on analogies, but the other, more basic evaluation is this one of capturing similarity judgments. I haven't said much about this, but there is a large sub-literature in the psychology community where people have wanted to model human judgments of similarity. So, like a good psych person, what you do is find your classroom of Psych 1 undergrads, show them pairs of words, and say: rate these for similarity on a scale of one to ten. Lots of that data has been collected; you work out the mean over human beings, and you get numbers like these: tiger and cat, 7.35; tiger is similar to tiger, 10; then book and paper, plane and car, stock and phone, stock and CD, and you get numbers. So then what we want to do is use distance in the space to map directly onto these similarity judgments, and ask how well it maps. Similarity judgments have therefore also been used for evaluating these systems. So again, here are a lot of models; this is again from our GloVe paper. There are various similarity datasets; one of the best known, which I had on the slide before, is WordSim-353. It has 353 word pairs in it, and you're modeling the correlation between your judgments of similarity and the ones that came from the human beings. Okay.
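Here's a sketch of that evaluation recipe: correlate model cosine similarities with the human scores for a set of word pairs. The pair list is a made-up WordSim-style stand-in, and `vectors` is an assumed word-to-array mapping:

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def evaluate_similarity(pairs, vectors):
    """pairs: list of (word1, word2, human_score); vectors: word -> np.array."""
    model_scores = [cosine(vectors[w1], vectors[w2]) for w1, w2, _ in pairs]
    human_scores = [score for _, _, score in pairs]
    rho, _ = spearmanr(model_scores, human_scores)  # the reported correlation
    return rho

# e.g. pairs like [("tiger", "cat", 7.35), ...] in the WordSim-353 style
```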
Two more things I want to say. So, we had that problem right at the beginning with Clinton, and how that could be various people. That's perhaps, in some sense, the simplest case of words being ambiguous: when you have names which refer to different people. But it's not only true of names. By and large, words in human languages are ambiguous and have lots of meanings. That's especially true of common words, which always have lots of meanings, and especially true of words that have existed for a long time. It's not true of new, very technical words: carcinoma, I think that only has one meaning. But if you think of any relatively common word and start scratching your head for a moment, you'll find it has lots of meanings. Maybe this isn't even such a common word, but the random word I've got here is pike. Pike has lots of meanings; it has meanings like? "Fish." Fish, yes, there's a fish that's a pike. What else is a pike? "A large spear." Yes, a large spear is a pike. Other kinds of pikes? "A gymnastics move." A gymnastics move, or a diving move. "It's a road." And it's a road. So there are lots of meanings, and there are other meanings too. In Australian English, pike is also used as a verb to mean to pull out of doing something, like, "We were all going to go out to a nightclub later, but Joe piked." [LAUGHTER] I don't think that usage is common in this country, but you can try it. [LAUGHTER] Right. But lots of meanings, and this isn't only true of the word pike. Pick any other simple word, like shell or field or house or make: they have lots of meanings when it comes down to it. So how can this work if we just have one vector per word? That's the interesting question, and it was something we were actually interested in early on. Even before the Word2Vec paper came out, back in 2012, we were playing around with neural word vectors, and we thought: boy, this is so broken, having only one sense for a word. Why don't we come up with a model that has multiple senses for a word? So we did that, and we did it in a pretty crude way, I guess. The way we did it was to say: for each common word, let's cluster all the contexts in which it occurs, and then we'll see if there seem to be multiple clear clusters, by some criterion, for that word. If so, we just split the word into pseudo-words. So if it seems like there are five clusters for the word (the example I meant to use here is jaguar), five clusters for the word jaguar, I just call them jaguar_1, jaguar_2, jaguar_3, four, five; we literally change the word in our corpus according to its cluster number. Then we run our word vector algorithm, and we get a representation for each of those senses of the word. And basically, that works: right up the top, jaguar_1 is next to luxury and convertible. Here, I guess, is a very old version of Mac OS called Jaguar (anyone remember that one?): jaguar right next to software and Microsoft up there, so that's hopeful. Here's the jaguar that's right next to hunter. And I'm confused by this one: jaguar near solo, musical, keyboard, and string. Is there a brand of keyboard called jaguar? I'm not quite sure about that one, but anyway, it basically works. But that was crude, and it's also perhaps problematic, since a lot of the time the divisions between senses aren't very clear. A lot of senses are actually related to each other and overlapping, because the way senses normally arise is that people stretch the meanings of words. It's not that they just randomly wake up the next morning and say, "I know, carpet. I could also refer to that as stone," giving a new sense to the word stone. You take something that you know about, like a web, and you extend it metaphorically to other uses of webs. So here's a perhaps more interesting approach: this is the other Sanjeev Arora paper that I was going to mention. What happens if you don't have more than one sense for each word? Well, effectively, the word vector you learn is what's referred to by physicists and fancy people as a superposition of the word vectors of the different senses, where superposition just means a weighted average.
[LAUGHTER] So effectively, my vector for pike is a weighted average of the vectors for the different senses of pike, with the components weighted by their frequency. That part is perhaps not too surprising, but the part that's really surprising is this: if we're just averaging these word vectors, you'd think you couldn't get anything back out of the average, right? Like, if I tell you I'm thinking of two numbers and their weighted sum is 54, what are my two numbers? You're really short of information to answer my question. But for these word vectors, we have these high-dimensional spaces, and even though there are a lot of words, the space is so vast in those dimensions that actual words, or senses, are very sparse in the space. And it turns out there's this whole literature on sparse coding and compressed sensing, some of which was actually done by people in the stats department here, which shows that in these cases, where you have these sparse codes in high-dimensional spaces, you can actually commonly reconstruct the components of a superposition, even though all you've got is the weighted average. So this paper looks at how you can do that: they have these underlying meaning components, and they separate them out. So tie has one meaning component in this space of trousers, blouse, waistcoat; that makes sense. There's another one in this meaning component of season, teams, winning, league; makes sense. Scoreline, goal, equalizer, clinching, scorers; this one seems to overlap with that one a bit. And here is tie in the sense of cable ties and wire ties and things like that. So they are actually able to pull the different sense meanings out of the single vector for the word, and that is kind of a cool thing. I just want to say one more thing. Okay. All the evaluation so far was intrinsic, but you also might want to do extrinsic evaluation. Why did word vectors excite people in NLP so much? It turned out that having this representation of meaning was just very useful and improved essentially all of your tasks after that. This example is named entity recognition, which is labeling persons and locations and organizations, but it's typical of what people found on many tasks: if you started with a model without word representations and you threw in your word vectors, regardless of whether they were Word2Vec or GloVe ones, your numbers just went up a couple of percent or more. So word vectors were this useful resource you could throw into any NLP system you built, and your numbers went up. It was just a very effective technology, which actually did work in basically any extrinsic task you tried it on. Okay. Thanks a lot.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_Course_Winter_2019
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_6_Language_Models_and_RNNs.txt
Hi, everyone. I'm Abby, I'm the head TA for this class, and I'm also a PhD student in the Stanford NLP group. Today I'm going to be telling you about language models and recurrent neural networks. So, here's an overview of what we're going to do today. First, we're going to introduce a new NLP task, language modeling, and that's going to motivate us to learn about a new family of neural networks, recurrent neural networks, or RNNs. I'd say that these are two of the most important ideas you're going to learn for the rest of the course, so we're going to be covering some fairly cool material today. So, let's start off with language modeling. Language modeling is the task of predicting what word comes next. So, given this piece of text, "the students opened their ___", could anyone shout out a word which you think might be coming next? "Purpose." [OVERLAPPING] "Minds." What else? I didn't quite hear them all, but, yeah, these are all likely things, right? Here are some things which I thought students might be opening: "students opened their books" seems likely; "students opened their laptops"; "students opened their exams"; "students opened their minds" (incredibly, someone came up with that one just now); it's kind of a metaphorical meaning of opening. So, you are all performing language modeling right now: in thinking about what word comes next, you are being a language model. Here's a more formal definition of what a language model is. Given a sequence of words x_1 up to x_t, a language model is something that computes the probability distribution of the next word, x_t+1. So a language model comes up with the conditional probability of what x_t+1 is, given the words so far. And here we're assuming that x_t+1 can be any word w from a fixed vocabulary V; we're assuming there's a predefined list of words that we're considering. In this way, you can view language modeling as a type of classification task, because there's a predefined number of possibilities. We call a system that does this a language model. There's an alternative way of thinking about a language model as well: you can think of a language model as a system which assigns probability to a piece of text. For example, if we have some piece of text, x_1 up to x_T, then the probability of this text according to the language model can be broken down. Just by definition, the probability is equal to the product of all of these conditional probabilities, and the form inside the product is exactly what a language model provides. So you can think of these two views as somewhat equivalent: predicting next words gives you a system that can give the probability of a given piece of text. In fact, you use language models every day. For example, when you're texting on your phone and writing a message, then most likely, if you have a smartphone, it will be predicting what word you might be about to say. So if you say, "I'll meet you at the," your phone might suggest perhaps you mean "airport," or "cafe," or "office," for example. Another situation in which you use language models every day is when you search for something on the internet, for example on Google, and you start typing your query: Google tries to complete your query for you, and that's language modeling; it's predicting what word or words might come next.
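To make those two equivalent views concrete, here is a tiny sketch of scoring a whole text by multiplying next-word probabilities; `lm_prob(word, context)`, the model that returns P(next word | words so far), is an assumed interface:

```python
def text_probability(words, lm_prob):
    """Score a text by the chain rule: the product over t of
    P(x_t | x_1 .. x_{t-1}), each term supplied by the language model."""
    prob = 1.0
    for t in range(len(words)):
        prob *= lm_prob(words[t], words[:t])
    return prob
```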
So, that's what a language model is, and the question is, how would you learn a language model? If I were to ask that question in the pre-deep-learning era, which was really only a few years ago, the answer would be: you would learn an n-gram language model. So today, first, we're going to learn about n-gram language models. Before I can tell you what an n-gram language model is, you need to know what an n-gram is. By definition, an n-gram is a chunk of n consecutive words. For example, the 1-grams or unigrams are just the individual words in the sequence: "the", "students", "opened", "their". The 2-grams or bigrams are the consecutive pairs of words: "the students", "students opened", "opened their", and so on for trigrams, 4-grams, et cetera. The core idea of an n-gram language model is that in order to predict what word comes next, you collect a bunch of statistics about how frequent different n-grams are from some kind of training data, and then you use those statistics to predict which next words are likely. Here's some more detail. To make an n-gram language model, first you need to make a simplifying assumption, and this is the assumption: you say that the next word xt+1 depends only on the preceding n-1 words. That is, the conditional probability of xt+1 given all the words that came before is assumed to depend only on the last n-1 words. Then, by the definition of conditional probability, this probability is just the ratio of two probabilities: on top, the probability of a particular n-gram, and on the bottom, the probability of a particular (n-1)-gram. This is a little hard to read because of all the superscripts, so I'm going to give an example with words on the next slide. Okay. So that's the definition of the probability of the next word, but the question remains: how do we get all of these n-gram and (n-1)-gram probabilities? The answer is, we get them by counting them in some large corpus of text. We approximate these probabilities by the counts of the number of times these particular n-grams and (n-1)-grams appeared in our training corpus. Okay, so here's an example with words. Suppose we're trying to learn a 4-gram language model, and suppose we have a piece of text that says, "As the proctor started the clock, the students opened their ___", and we're trying to predict what word comes next. Because we're learning a 4-gram language model, the simplifying assumption is that the next word depends only on the last three words, the last n-1 words. So we discard all of the context so far except for the last few words, which are "students opened their". As a reminder, the n-gram language model says that the probability of the next word being some particular word w in the vocabulary is equal to the number of times we saw "students opened their w" divided by the number of times we saw "students opened their" in the training corpus. So, let's suppose that in our training corpus we saw the phrase "students opened their" 1,000 times, and suppose we saw "students opened their books" 400 times. This means that the probability of the next word being "books" is 0.4.
And similarly, let's suppose we saw "students opened their exams" 100 times; this means that the probability of "exams" given "students opened their" is 0.1. Is there a question? [inaudible] The question is, does the order of the words matter? And the answer is yes, the order of "students opened their" does matter; it's different from "their students opened". So, the question I want to raise now is: was it a good idea for us to discard the "proctor" context? If you look at the actual example we had, it was "as the proctor started the clock, the students opened their ___". Do we think "books" or "exams" is more likely given the actual, full context? Yep. Exams. Right, "exams" is more likely, because "the proctor" and "the clock" heavily imply that it's an exam scenario, so the students are more likely to be opening their exams than their books, unless it's an open-book exam. But overall, it should be exams. So the problem we're seeing here is that in the training corpus, the fact that students are opening something means it's more likely to be books than exams, because overall books are more common than exams. But if we know that the context mentions the proctor and the clock, then it should be exams. What I'm highlighting here is a problem with our simplifying assumption: if we throw away too much context, then we are not as good at predicting the words as we would be if we kept the context. Okay. So that's one problem with n-gram language models, and there are some other problems as well. Here again is the equation you saw before. One problem, which we're going to call the sparsity problem, is: what happens if the count on top, in the numerator, is equal to zero? What if, for some particular word w, the phrase "students opened their w" never occurred in the data? For example, suppose "students opened their petri dishes" is fairly uncommon and never appears in the data; then our probability of the next word being "petri dishes" will be zero. And this is bad, because it might be uncommon, but it is a valid scenario, right, if you're a biology student, for example. So this is a problem, and we call it the sparsity problem: if we've never seen an event happen in the training data, then our model assigns zero probability to that event. One partial solution to this problem is to add a small number delta to the count for every word in the vocabulary. That way, every possible word that comes next has at least some small probability. So "petri dishes" will have some small probability, but then so will all of the other words, including possibly bad choices. This technique is called smoothing, because the idea is that you go from a very sparse probability distribution, which is zero almost everywhere with a few spikes for the n-grams we have seen, to a smoother distribution where everything has at least a small probability. The second sparsity problem, which is possibly worse than the first, is: what happens if the count in the denominator is zero? In our example, that would mean we never even saw the trigram "students opened their" in the training data. If that happens, then we can't calculate this probability distribution at all, for any word w, because we never saw this context before.
So, a possible solution to this is: if you can't find "students opened their" in the corpus, then you back off to conditioning on just the last two words rather than the last three. So now you'd be looking at the times you've seen "opened their" and seeing what came next. This is called back-off, because in this failure case, when you have no data for your 4-gram language model, you back off to a trigram language model. Are there any questions at this point? Okay. Another thing to note is that these sparsity problems get worse as you increase n. You might want to make n larger in your n-gram language model; for example, you might think, "I want a larger context, so I can pay attention to words that happened longer ago, and that's going to make it a better predictor." So you might think making n bigger is a good idea. The problem is that if you do that, the sparsity problems get worse. Suppose you say you want a 10-gram language model. Then you'd be counting how often particular 9-grams and 10-grams occur, but there are so many possible 9-grams and 10-grams that the one you're interested in probably never occurred in your training data, which means the whole thing becomes dysfunctional. So in practice, we usually can't have n much bigger than five. Okay, so those were two sparsity problems with n-gram language models. Here is a problem with storage. If we look at this equation, think about what you need to store in order to use your n-gram language model: you need to store the count for all of the n-grams you observed when you went through the training corpus counting them. The problem is that as you increase n, the number of n-grams you have to store and count increases, so another problem with increasing n is that the size of your n-gram model gets bigger. Okay, so, n-gram language models in practice. Let's look at an example. You can actually build a simple trigram language model over a 1.7-million-word corpus in a few seconds on your laptop. In fact, the corpus I used to do this is the same one you met in assignment one: the Reuters corpus, which is business and financial news. If you want to try this yourself, you can follow the link at the bottom of the slide later. So, this is something I ran on my laptop in a few seconds. I gave it the context, the bigram "today the", and asked the trigram language model what word is likely to come next. The language model said that the most likely next words are "company", "bank", "price", "Italian", "emirate", et cetera. Just looking at the probabilities assigned to these different words, you can already see a sparsity problem. For example, the top two most likely words have exactly the same probability, and the reason for that is that this number is 4 over 26: these are quite small integers, meaning we only saw "today the company" and "today the bank" four times each. This is an example of the sparsity problem: because these counts are quite low, we haven't seen many different versions of this event, so we don't have a very granular probability distribution. But in any case, ignoring the sparsity problem, I would say that overall these top suggestions look pretty reasonable.
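As a concrete sketch of the count-based estimate plus the two fixes described above (smoothing and back-off), here is a minimal Python illustration of my own; it is not the lecture's code, and real back-off schemes also down-weight the lower-order estimates, which is omitted here. `counts` is assumed to map word tuples of any length to their corpus frequencies.

    # Count-based n-gram probability with add-delta smoothing, backing off
    # to a shorter context whenever the full context was never seen.

    def ngram_prob(word, context, counts, vocab_size, delta=0.01):
        while len(context) > 0 and counts.get(context, 0) == 0:
            context = context[1:]               # back off: drop earliest word
        numer = counts.get(context + (word,), 0) + delta
        denom = counts.get(context, 0) + delta * vocab_size
        return numer / denom                    # add-delta smoothed estimate

For example, with counts from the lecture's corpus, ngram_prob("books", ("students", "opened", "their"), counts, vocab_size) would come out near 400/1000 = 0.4 for a small delta.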
So, you can actually use a language model to generate text, and this is how you would do it. Suppose you have your first two words already; you condition on them and ask your language model what's likely to come next. Then, given this probability distribution over the words, you can sample from it, that is, select some word with its associated probability. Let's suppose that gives us the word "price". Then "price" is your next word, and you condition on the last two words, which in this example is now "the price". Now you get a new probability distribution, and you can continue this process: sampling, then conditioning again, then sampling. If you do this long enough, you get a piece of text. This is the actual text I got when I ran this generation process with the trigram language model: "Today the price of gold per ton, while production of shoe lasts and shoe industry, the bank intervened just after it considered and rejected an IMF demand to rebuild depleted European stocks, September 30th end primary 76 counts a share." Okay. So, what do we think about this text? Do we think it's good? Are we surprised? I'd say that in some ways it is good: it's kind of surprisingly grammatical; it mostly parses. But you would definitely say that it doesn't really make any sense; it's pretty incoherent. And we shouldn't be surprised that it's incoherent, because remember, this is a trigram language model: it has a memory of just the last three or two words, depending on how you look at it. So clearly we need to consider more than three words at a time if we want to model language well. But as we already know, increasing n makes the sparsity problem worse for n-gram language models, and it also increases model size. Is that a question? How does it [inaudible]? The question is, how does the n-gram language model know when to put commas? You can decide that commas and other punctuation are just another kind of word, that is, another token, and then to the language model it doesn't make much difference; punctuation is just another possible word that can be predicted. That's why we've got the weird spacing around the commas: each comma was essentially viewed as a separate word. Okay. So, this course is called NLP with Deep Learning, so you're probably thinking: how do we build a neural language model? Let's just recap, in case you forgot. Remember that a language model is something that takes as input a sequence of words x1 up to xt, and outputs a probability distribution over what the next word xt+1 might be. So, what kinds of neural models have we met in this course so far? We've already met window-based neural models, and in lecture three we saw how you could apply a window-based neural model to named entity recognition. In that scenario, you take some kind of window around the word you care about, which in this example is "Paris"; you get the word embeddings for those words, concatenate them, put them through some layers, and then you get your decision, which is that Paris is a location, not a person or organization. That's a recap of what we saw in lecture three. How would we apply a model like this to language modeling? Here's how you would do it. Here's an example of a fixed-window neural language model.
So, again, we have some kind of context, "as the proctor started the clock, the students opened their", and we're trying to guess what word might come next. We have to make a similar simplifying assumption to before: because it's a fixed-size window, we have to discard all the context except for the window we're conditioning on. Let's suppose our fixed window is of size four. What we do, similarly to the NER model, is represent these words with one-hot vectors, and then use those to look up the word embeddings for these words using the embedding lookup matrix. So then we get our word embeddings e1, e2, e3, e4, and we concatenate them together to get e. We put this through a linear layer and a nonlinearity function f to get a hidden layer, and then we put that through another linear layer and the softmax function, and now we have an output probability distribution y-hat. In our case, because we're trying to predict what word comes next, the vector y-hat will be of length |V|, where V is the vocabulary, and it will contain the probabilities of all the different words in the vocabulary. Here I've represented that as a bar chart, where you suppose you've got all the words listed alphabetically from a to z, each with its probability. If everything goes well, this language model should tell us that some likely next words are "books" and "laptops", for example. None of this should be unfamiliar to you, because you saw it all last week; we're just applying a window-based model to a different task, namely language modeling. Okay, so what are some good things about this model compared to n-gram language models? One advantage, I'd say, is that there's no sparsity problem. If you remember, an n-gram language model has a sparsity problem: if you've never seen a particular n-gram in training, you can't assign any probability to it, because you have no data on it. Whereas here, you can take any 4-gram you want and feed it into the neural net, and it will give you an output distribution over what it thinks the next word would be. It might not be a good prediction, but at least it will run. Another advantage is that you don't need to store all of the observed n-grams you ever saw; by comparison, you just have to store the word vectors for all the words in your vocabulary. But there are quite a lot of problems with this fixed-window language model. Here are some remaining problems. One is that your fixed window is probably too small: no matter how big you make it, you'll probably be losing some useful context that you would want to use sometimes. And in fact, if you try to enlarge the window size, you also have to enlarge your weight matrix W: the width of W grows as you increase the size of your window, because W is multiplied by e, the concatenation of your word embeddings. So, in conclusion, your window can never be large enough. Another problem with this model, which is a more subtle point, is that x1 and x2, and really all the words in the window, are multiplied by completely different weights in W.
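Before turning to the picture of that weight-sharing problem, here is a minimal numpy sketch (my own illustration, not the lecture's code) of the fixed-window forward pass just described. All sizes are illustrative assumptions.

    import numpy as np

    # Fixed-window neural LM forward pass: concatenate embeddings, hidden
    # layer, then a softmax over the vocabulary. Sizes are illustrative.
    d, h, V, window = 50, 100, 10000, 4
    E = np.random.randn(V, d) * 0.01          # embedding matrix (lookup by id)
    W = np.random.randn(h, window * d) * 0.01 # note: width grows with window
    b1 = np.zeros(h)
    U = np.random.randn(V, h) * 0.01
    b2 = np.zeros(V)

    def forward(word_ids):
        e = np.concatenate([E[i] for i in word_ids])  # concatenated embeddings
        hidden = np.tanh(W @ e + b1)                  # hidden layer
        logits = U @ hidden + b2
        exp = np.exp(logits - logits.max())           # softmax over vocab
        return exp / exp.sum()                        # y-hat: P(next word)

    y_hat = forward([0, 1, 2, 3])  # a probability for every word in V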
To demonstrate this, you can draw a picture. You have your weight matrix W, and you have your concatenation of embeddings e, with four embeddings e1, e2, e3, e4, and you multiply the concatenated embeddings by the weight matrix. You can see that there are essentially four sections of the weight matrix, and the first word embedding e1 is only ever multiplied by the weights in its section, which are completely separate from the weights that multiply e2, and so forth. The problem with this is that what you learn in one section of the weight matrix is not shared with the others: you're learning a lot of similar functions four times. The reason we think this is a problem is that there should be a lot of commonalities in how you process the incoming word embeddings; some of what you learn about how to process, say, the third embedding should be shared across all of the embeddings. So what I'm saying is that it's inefficient to learn all of these separate weights for the different positions when there's so much in common between them. Is there a question? [inaudible] Yeah, hopefully the verbal description covers it. So, in conclusion, I'd say the biggest problem with this fixed-window neural model is that we clearly need some kind of neural architecture that can process input of any length, because most of the problems here come from the simplifying assumption that there is a fixed window. Okay. So this motivates us to introduce a new family of neural architectures called recurrent neural networks, or RNNs. Here is a simplified diagram that shows you the most important features of an RNN. Again we have an input sequence x1, x2, et cetera, but you can assume this sequence is of any arbitrary length you like. The idea is that you have a sequence of hidden states, instead of just one hidden state as in the previous model; we have as many hidden states as we have inputs. The important thing is that each hidden state ht is computed based on the previous hidden state and also the input on that step. The reason they're called hidden states is that you can think of them as a single state that's mutating over time, like several versions of the same thing; and for this reason we often call these steps, which go left to right, time-steps. The really important thing is that the same weight matrix W is applied on every time-step of the RNN. That's what lets us process any length of input we want: we don't need different weights on every step, because we apply the exact same transformation on every step. Additionally, you can also have some outputs from the RNN. These y-hats are the outputs on each step, and they're optional: you don't have to compute them, or you can compute them on some steps and not others; it depends on what you want to use your RNN to do. Okay. So that's a simple diagram of an RNN; here I'm going to give you a bit more detail. Here's how you would apply an RNN to do language modeling. Again, let's suppose we have some kind of text so far. My text is only four words long, but you can assume it could be any length.
It's just short because we can't fit more on the slide. So you have some sequence of text, which could be quite long. Again, we're going to represent these words with one-hot vectors and use those to look up the word embeddings from our embedding matrix. Then, to compute the first hidden state h1, we need to compute it based on the previous hidden state and the current input. We already have the current input, that's e1, but the question is, where do we get the first hidden state from? What comes before h1? We call the initial hidden state h0, and it can either be something you learn, a parameter of the network where you learn how to initialize it, or you can assume something like the zero vector. The formula we use to compute the new hidden state, based on the previous one and the current input, is written on the left: you apply a linear transformation to the previous hidden state and to the current input, add some kind of bias, and put the result through a nonlinearity, for example the sigmoid function. That gives you the new hidden state. Okay. Once you've done that, you can compute the next hidden state, and you can keep unrolling the network like this; it's called unrolling because you're computing each step given the previous one. All right. Finally, remember we're trying to do language modeling, so we're trying to predict which word comes next after "the students opened their". On this fourth step over here, we can take the current hidden state h4, put it through a linear layer and a softmax function, and we get our output distribution y-hat 4, which is a distribution over the vocabulary. And again, hopefully we get some sensible estimates for what the next word might be. Any questions at this point? Yep? Is the number of hidden states going to be the number of words in your input? The question is, is the number of hidden states the number of words in your input? Yes, in this setting here; or you could say, more generally, the number of hidden states is the number of inputs. Yep. Just as with the n-gram model, could we use the output as the input on the next step? Yeah, so the question is, as with the n-gram language model, could we use the output as the input on the next step? The answer is yes, and I'll show you that in a minute. Any other questions? Yeah. Are you learning the embeddings? The question is, are you learning the embeddings? That's a choice. The embeddings could be, for example, pre-trained embeddings that you download and use frozen; or you could download them but then fine-tune them, that is, allow them to be changed as parameters of the network; or you could initialize them to small random values and learn them from scratch. Any other questions? Yeah. You said you use the same weight matrix; when you do backpropagation, do you only update We, or do you update both Wh and We? So the question is: we reuse the matrices, so do we update both We and Wh, or just one? You certainly learn both We and Wh. I suppose I was emphasizing Wh more, but they're both matrices that are applied repeatedly. There was also a question about backprop, but we're going to cover that later in this lecture.
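To make the update formula above concrete, here is a minimal numpy sketch (my own illustration, not the lecture's code) of one RNN time-step, h_t = f(W_h h_{t-1} + W_e e_t + b), using tanh as the nonlinearity. Sizes are illustrative assumptions.

    import numpy as np

    # One RNN time-step; the same W_h and W_e are reused on every step.
    n, d = 100, 50                      # hidden size, embedding size
    W_h = np.random.randn(n, n) * 0.01  # recurrent weights
    W_e = np.random.randn(n, d) * 0.01  # input weights
    b = np.zeros(n)

    def rnn_step(h_prev, e_t):
        """New hidden state from the previous state and the current input."""
        return np.tanh(W_h @ h_prev + W_e @ e_t + b)

    h = np.zeros(n)                     # h0: here, just the zero vector
    for e_t in np.random.randn(4, d):   # four word embeddings, one per step
        h = rnn_step(h, e_t)            # unrolling: same weights every step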
Okay, moving on for now. So, what are some advantages and disadvantages of this RNN language model? Here are some advantages compared to the fixed-window one. An obvious advantage is that this RNN can process input of any length. Another advantage is that the computation for step t can, in theory, use information from many steps back. In our motivating example, "as the proctor started the clock, the students opened their", we think "proctor" and maybe "clock" are both pretty important hints for what might come next; at least in theory, the hidden state at the end can have access to information from inputs many steps ago. Another advantage is that the model size doesn't increase for longer input. The size of the model is fixed: it's just Wh and We, plus the biases, and also the embedding matrix if you're counting that. None of these get bigger when you apply the model to longer inputs, because you just apply the same weights repeatedly. And another advantage is that the same weights are applied on every time-step. I said before that the fixed-window neural model was less efficient because it applied different parts of the weight matrix to the different words in the window. The advantage of the RNN is that it applies the exact same transformation to each of the inputs, which means that if it learns a good way to process one input, that applies to every input in the sequence; you can see it as more efficient in that way. Okay, so what are the disadvantages of this model? One is that recurrent computation is pretty slow. As you saw before, you have to compute each hidden state based on the previous hidden state, which means you can't compute all the hidden states in parallel; you have to compute them in sequence. So, especially if you're running an RNN over a long sequence of inputs, the RNN can be pretty slow to compute. Another disadvantage of RNNs is that, it turns out, in practice it's quite difficult to access information from many steps back. Even though I said we should be able to remember the proctor and the clock and use that to predict "exams" rather than "books", it turns out that RNNs, at least the ones presented in this lecture, are not as good at that as you would think. We're going to learn more about both of these disadvantages later in the course, and we're going to learn something about how you can try to fix them. Do we have any questions at this point? Yep. Why do we assume that the Wh are the same? Sorry, can you speak up? Why do we assume that the Wh should be the same? So the question is, why should you assume the Wh are the same? I suppose it's not exactly an assumption; it's more a deliberate decision in the design of an RNN. An RNN is, by definition, a network where you apply the exact same weights on every step. So perhaps the question "why do you assume" should be "why is that a good idea?" I spoke a little about why it's a good idea, and this list of advantages gives the reasons you'd want to do that. Does that answer your question? [Student, partly inaudible:] If you assume the Wh are the same, it's like a Markov chain, where the transition probabilities are the same for every step, but that's just an approximation.
[Student, partly inaudible:] The model assumes the transition probability is the same on every step, so it's just an approximation. If you assume the Wh are the same, it's good because you reduce the number of parameters, but this is just an approximation; the underlying transition probabilities shouldn't really be the same. Okay. So I think the question is saying that, given that the words "the students opened their" are all different and they're happening in different contexts, why should we be applying the same transformation each time? That's a good question. I think the idea is that you're learning a general function: not just how to deal with the one word "students" in this one context, but how to deal with a word given the words so far. You're trying to learn a general representation of language and context, which is indeed a very difficult problem. You also mentioned something about an approximation. Another thing to note is that all of the hidden states are vectors, not just single numbers; they're vectors of length, I don't know, 500 or something. So they have quite a large capacity to hold lots of information about different things in their different positions. The idea is that you can store a lot of different information about different contexts in different parts of the hidden state, but it is indeed an approximation, and there is some limit to how much information you can store. Okay, any other questions? Yes. Since you can process input of any length, what length do you use during training? And does the length you use for training affect Wh? Okay, so the question is, given that you can have any length of input, what length is the input during training? I suppose in practice you choose how long the inputs are in training either based on what your data is, or maybe based on efficiency concerns, so maybe you make the inputs artificially shorter by chopping the data up. What was the other question? Does Wh depend on that? Okay, so the question was, does Wh depend on the length you used? No, and that's one of the good things in the advantages list: the model size doesn't increase for longer input, because we just unroll the RNN, applying the same weights again and again for as long as we like. There's no need for more weights just because you have a longer input. Yeah. How do you choose the dimension of the [inaudible] vectors? Are you asking about capital E or the lowercase e? Lowercase e. Okay, so the question is, how do we choose the dimension of the lowercase e's? You could, for example, assume that those are just pre-trained word vectors, like the ones you used in assignment one. Like word2vec? Yeah, for example word2vec, and you just download them and use them; or maybe you learn them from scratch, in which case you decide at the beginning of training how big you want those vectors to be. Okay, I'm going to move on for now. So, we've learned what an RNN language model is and how you would run one forward, but the question remains: how would you train an RNN language model? How would you learn it?
So, as always in machine learning, our answer starts with: you get a big corpus of text, and we'll call that just a sequence of words x1 up to x capital T. You feed this sequence of words into the RNN language model, and then the idea is that you compute the output distribution y-hat t for every step t. I know that the picture on the previous slide only showed us doing this on the last step, but the idea is that you actually compute it on every step. This means you're predicting the probability of the next word on every step. Okay. Once you've done that, you can define the loss function, and this should be familiar to you by now. It's the cross-entropy between our predicted probability distribution y-hat t and the true distribution yt, which is a one-hot vector representing the true next word, xt+1. As you've seen before, this cross-entropy between the two vectors can also be written as a negative log probability. And lastly, if you average this cross-entropy loss across every time-step t in the corpus, this gives you your overall loss for the entire training set. Okay. Just to make that clearer with a picture: suppose our corpus is "the students opened their exams", et cetera, and it goes on for a long time. What we'd do is run our RNN over this text, and on every step we'd predict the probability distribution y-hat. From each of those, we can calculate the loss Jt; on the first step, the loss would be the negative log probability of the next word, which in this example is "students", and so on: each loss is the negative log probability of the true next word. Once you've computed all of those, you add them up and average them, and this gives you your final loss. Okay, so there's a caveat here. Computing the loss and gradients across the entire corpus, all of the words x1 up to x capital T, is too expensive, because your corpus is probably really big. So, as a student asked earlier, what do you actually regard as your sequence in practice? In practice, you might regard your sequence as something like a sentence or a document, some shorter unit of text. Another thing you'll do, if you remember, is use stochastic gradient descent, which allows you to compute gradients for small chunks of data rather than the whole corpus at a time. So in practice, if you're training a language model, what you're likely to be doing is computing the loss for a sentence, but actually for a batch of sentences, then computing the gradients with respect to that batch, updating your weights, and repeating. Any questions at this point?
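To make the loss concrete, here is a minimal numpy sketch (my own illustration, not the lecture's code). Because the true distribution is one-hot, the cross-entropy on each step reduces to the negative log probability of the true next word, and the overall loss is the average over steps.

    import numpy as np

    # Language modeling loss over a sequence.
    # pred_dists: array of shape (T, |V|), one predicted distribution per step.
    # target_ids: index of the true next word at each step.

    def lm_loss(pred_dists, target_ids):
        T = len(target_ids)
        losses = [-np.log(pred_dists[t, target_ids[t]]) for t in range(T)]
        return sum(losses) / T     # average cross-entropy J(theta)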
Okay. So, moving on to backprop. Don't worry, there won't be as much backprop as there was last week, but there's an interesting question here. The characteristic thing about RNNs is that they apply the same weight matrix repeatedly. So the question is: what's the derivative of our loss function on step t with respect to the repeated weight matrix Wh? The answer is that the gradient with respect to the repeated weight is the sum of the gradients with respect to each time it appears, and that's what this equation says. On the right, the notation with the vertical line and the i means the derivative of the loss with respect to Wh when it appears on the i-th step. Okay. So why is that true? To sketch why, I'm going to remind you of the multivariable chain rule. This is a screenshot from a Khan Academy article on the multivariable chain rule, and I advise you to check it out if you want to learn more, because it's very easy to understand. What it says is: given a function f which depends on x and y, which are both themselves functions of some variable t, then if you want the derivative of f with respect to t, you apply the chain rule across x and y separately and add the results up. That's the multivariable chain rule. If we apply this to our scenario, taking the derivative of the loss Jt with respect to our weight matrix Wh, you can view it as a diagram where Wh has a relationship with each of its individual appearances, but it's a simple relationship, just equality; and each of those appearances of Wh affects the loss in a different way. So if we apply the multivariable chain rule, it says that the derivative of the loss with respect to Wh is the sum of those chain-rule terms, and the second factor in each term is just one, because it's an equality relation. That gives us the equation I wrote on the previous slide. So this is a proof sketch for why the derivative of the loss with respect to the recurrent matrix is the sum of the derivatives from each time it appears. Okay. Suppose you believe me on that; that's how you compute the gradient with respect to the recurrent weight. A remaining question is: how do we actually calculate this in practice? The answer is that you calculate this sum by doing backprop backwards, right to left, through the RNN, accumulating the sum as you go. The important thing is that you shouldn't compute each of those terms separately; you should compute each one in terms of the previous one, accumulating as you go. This algorithm of computing each gradient with respect to the previous one is called backpropagation through time. I always think that sounds way more sci-fi than it is; it sounds like time travel or something, but it's actually pretty simple. It's just the name you give to applying the backprop algorithm to a recurrent neural network.
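Written out in the lecture's notation, the identity being described is the following (a standard statement of the multivariable chain rule applied to the repeated weight):

    % Gradient of the step-t loss w.r.t. the repeated recurrent matrix W_h:
    % sum the gradients from each appearance i of W_h.
    \frac{\partial J^{(t)}}{\partial W_h}
      = \sum_{i=1}^{t} \left. \frac{\partial J^{(t)}}{\partial W_h} \right|_{(i)}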
Any questions at this point? Yep. It seems that how you break up the batches matters to your end result; if you break them up differently [inaudible]. Okay, so the question is: surely, how you decide to break up your batches affects what you learn, right? Because if you choose one set of data to be your batch, you make your update based on that, and then you make the next update based on where you go from there; if you had put different data in the batch, you would have made a different step. So, that's true, and that is why stochastic gradient descent is only an approximation of true gradient descent: the gradient you compute with respect to one batch is just an approximation of the true gradient of the loss over the whole corpus. So yes, it's true that it's an approximation, and how you choose to batch up your data can matter; that's why, for example, shuffling your data is a good idea, and shuffling it differently each epoch is a good idea. But the core idea of SGD is that it should be a good enough approximation that over many steps you minimize your loss. Any other questions? Yeah. Is the question: as you compute forward prop, do you start computing backprop before you've even got to the loss? Is that the question? Yes. I don't think so, right? Because you need to know what the loss is in order to compute the derivative of the loss with respect to something, so I think you need to get to the end. If we assume, for simplicity, that there is only one loss, which you get at the end of several steps, then you need to get to the end and compute the loss before you can compute the derivatives. Though I suppose you could compute the derivative of two adjacent things, one with respect to the other. As you're going forward, do you need to keep track of what you computed to eventually get the loss? Yes, when you forward prop, you certainly have to hang on to all of the intermediate values. Okay, I'm going to move on for now. So, that was a maths-heavy bit, but now we're getting on to text generation, which someone asked about earlier. Just as we used the n-gram language model to generate text, you can also use an RNN language model to generate text, via the same repeated sampling technique. Here's a picture of how that would work. You start off with your initial hidden state h0, which we either have as a parameter of the model or initialize to zero, or something like that. Let's suppose we have the first word, "my", and that I supply it to the model. Then, using the input and the initial hidden state, we get our first hidden state h1. From there, we can compute the probability distribution y-hat 1 over what's coming next, and we can use that distribution to sample some word. Let's suppose we sampled the word "favorite". The idea is that we use the outputted word as the input on the next step: we feed "favorite" into the second step of the RNN, we get a new hidden state, and again we get a new probability distribution, from which we can sample a new word. We can just continue this process again and again, and in this way we can generate some text. Here we've generated the text "my favorite season is spring", and we can keep going for as long as we'd like.
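Here is a minimal numpy sketch (my own illustration, not the lecture's code) of that sampling loop, reusing the pieces from the earlier RNN sketch: rnn_step, the embedding matrix E, and the output layer U, b2 are assumed to already be trained; vocab is assumed to be a list mapping word ids to words.

    import numpy as np

    # Generate text from an RNN LM: sample a word from y-hat on each step
    # and feed it back in as the next input.

    def generate(first_word_id, rnn_step, E, U, b2, vocab, n_steps=10):
        h = np.zeros(U.shape[1])            # h0: the zero vector
        word_id = first_word_id
        out = [vocab[word_id]]
        for _ in range(n_steps):
            h = rnn_step(h, E[word_id])     # condition on the sampled word
            logits = U @ h + b2
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()            # y-hat: distribution over vocab
            word_id = np.random.choice(len(vocab), p=probs)  # sample
            out.append(vocab[word_id])
        return " ".join(out)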
Okay, so let's have some fun with this. You can generate text using an RNN language model: if you train the RNN language model on any kind of text, you can then use it to generate text in that style. In fact, this has become a whole genre of internet humor that you might have seen. For example, here is an RNN language model trained on Obama speeches, which I found in a blog post online. Here's the text that the RNN language model generated: "The United States will step up to the cost of a new challenges of the American people that will share the fact that we created the problem. They were attacked and so that they have to say that all the task of the final days of war that I will not be able to get this done." [LAUGHTER] Okay. If we look at this, and especially compare it to the text we got from the n-gram language model, the one about the price of gold, I'd say this is recognizably better. It seems more fluent overall; it has more of a sustained context, in that it makes sense for longer stretches at a time; and I'd say it does sound kind of like Obama as well. All of that is pretty good, but you can see it's still pretty incoherent overall. It was quite difficult to read because it didn't really make sense, right? You had to read the words carefully. So I think this shows some of the progress you can get from using RNNs to generate text, but it's still very far from human level. Here are some more examples. Here's an RNN language model that was trained on the Harry Potter books, and here's what it said: ""Sorry," Harry shouted, panicking. "I'll leave those brooms in London." Are they? "No idea," said Nearly Headless Nick, casting low close by Cedric, carrying the last bit of treacle Charms from Harry's shoulder. And to answer him the common room perched upon it, four arms held a shining knob from when the Spider hadn't felt it seemed. He reached the teams too." Again, I'd say this is fairly fluent; it sounds a lot like the Harry Potter books. In fact, I'm pretty impressed by how much it captures the voice of the Harry Potter books. You even get some character attributes: I'd say that Harry the character does often panic in the books, so that seems right. [LAUGHTER] But some bad things: we have, for example, a pretty long run-on sentence in the second paragraph that's hard to read, and we have some nonsensical things that really make no sense. I don't know what a treacle charm is; it sounds delicious, but I don't think it's real. And overall it's just pretty nonsensical. Here's another example: an RNN language model trained on recipes. [LAUGHTER] This one's pretty bizarre. The title is "chocolate ranch barbecue"; it contains Parmesan cheese, coconut milk, and eggs, and the recipe says: "place each pasta over layers of lumps, shape mixture into the moderate oven and simmer until firm. Serve hot in bodied fresh, mustard orange and cheese. Combine the cheese and salt together the dough in a large skillet; add the ingredients and stir in the chocolate and pepper." [LAUGHTER] One thing that I think is even clearer in this recipes example than in the prose examples is the inability to remember what's happening overall. A recipe, you could say, is pretty challenging, because you need to remember the title of what you're trying to make, which in this case is chocolate ranch barbecue, and you need to actually make that thing by the end. You also need to remember what the ingredients were at the beginning and whether you used them. And in a recipe, if you make something and put it in the oven, you need to take it out later, and so on, right?
So, clearly it's not really remembering what's happening overall or what it's trying to do; it seems to be just generating generic recipe sentences and putting them in a random order. But again, we can see that it's fairly fluent and grammatical, and it kind of sounds like a recipe; the problem is that it's just nonsensical. For example, "shape mixture into the moderate oven" is grammatical, but it doesn't make any sense. Okay, last example. Here's an RNN language model that was trained on paint-color names. This is an example of a character-level language model, because it's predicting what character comes next, not what word comes next, and that's why it's able to come up with new words. Another thing to note is that this language model was trained to be conditioned on some kind of input: here, the input is the color itself, I think represented by the three numbers, probably RGB values, and it generated names for the colors. I think these are pretty funny. My favorite one is Stanky Bean, in the bottom right. [LAUGHTER] So it's pretty creative, and I think these do sound kind of like paint colors, though often they're quite bizarre. [LAUGHTER] Light of Blast is pretty good too. You're going to learn more about character-level language models in a future lecture, and you're also going to learn more about how to condition a language model on some kind of input, such as the color code. So, these are pretty funny, but I do want to give a warning. You'll find a lot of these kinds of articles online, often with headlines like "We forced a bot to watch 1,000 hours of sci-fi movies and it wrote a script", something like that. My advice is that you have to take these with a big pinch of salt, because often the examples people put online were hand-selected by humans to be the funniest examples. I think all of the examples I've shown today were definitely hand-selected by humans as the funniest examples the RNN came up with, and in some cases they might even have been edited by a human. So you do need to be a little bit skeptical when you look at these examples. Yep. In the Harry Potter one, there was an opening quote and then there was a closing quote. Once the RNN puts that opening quote and keeps generating more words, do you expect the probability of a closing quote to increase or decrease as you go? That's a great question. So, the question was: we noticed that in the Harry Potter example there were some open quotes and some close quotes, and it looks like the model didn't screw up; all of the open and close quotes are in the correct places. So, does the model put a higher probability on closing the quote given that it's inside a quoted passage? I should say definitely yes, and that's mostly the explanation for why this works. There's been some really interesting work on looking inside the hidden states of language models to see whether they're tracking things like whether we're inside an open quote or a closed quote, and there has been some limited evidence that there may be a certain neuron or neurons inside the hidden state tracking things like whether we're currently inside a quote or not. Yeah.
So, do you think the probability would increase as the quote goes on longer? The question is: as the quoted passage goes on for longer, do you think the probability of outputting a closing quote should increase? I don't know; maybe. That would be good, I suppose, because you don't want an infinite quote, but I wouldn't be surprised if that didn't happen. I wouldn't be surprised if some worse-trained language models just opened quotes and never closed them. Any other questions? Yeah. What are the dimensions of the W matrix? Okay, so the question is, what are the dimensions of the W matrix; we're going back to the RNN material. Are you asking about Wh or We or something else? Yeah. So, if we say that the hidden state has size n, then Wh will be n by n. And if we suppose the embeddings have size d, then We will be n by d. Does that answer your question? Any other questions about generating, or anything else? Yep. You said that there was a long sentence in the Harry Potter-related text. Is it ever practical to combine RNNs with hand-written rules? Sorry? Is it ever practical to combine RNNs with a list of hand-written rules? Okay, yeah, that's a great question. The question was: is it ever practical to combine RNNs with a list of hand-written rules, for example, don't let your sentence be longer than this many words. So yeah, I'd say it probably is practical, maybe especially if you're interested in making sure that certain bad things don't happen; you might apply some hacky rules, like forcing a sentence to end early. There's this thing called beam search, which we're going to learn about in a later lecture, which essentially doesn't just choose one word at each step and continue; it explores many different options for words you could generate. And you can apply some kinds of rules on top of that: if you have lots of different options to choose from, you can get rid of some options if they break your rules. But it can be difficult to do. Any other questions? Okay. So, we've talked about generating from language models. Unfortunately, you can't just use generation as your evaluation metric for language models; you do need some kind of measurable metric. The standard evaluation metric for language models is called perplexity. Perplexity is defined as the inverse probability of the corpus according to the language model, normalized by the number of words. If you look at the formula, you can see that that's what it's saying: for every word xt in the corpus, we compute the probability of that word given everything that came before, but inverted, one over that; and lastly, we normalize this big product by the number of words, capital T. The reason we do that is that if we didn't, the inverse probability would just keep growing as the corpus got bigger, so we need to normalize by that factor. You can actually show that perplexity is equal to the exponential of the cross-entropy loss J of theta. Remember, the cross-entropy loss J of theta is the training objective we're using to train the language model.
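Written out in the lecture's notation, the definition and the rearrangement are the following (standard identities):

    % Perplexity as the normalized inverse probability of the corpus,
    % rearranged into the exponential of the average cross-entropy loss:
    \text{perplexity}
      = \prod_{t=1}^{T} \left( \frac{1}{P_{\mathrm{LM}}\!\left(x^{(t+1)} \mid x^{(t)}, \dots, x^{(1)}\right)} \right)^{1/T}
      = \exp\!\left( \frac{1}{T} \sum_{t=1}^{T} -\log P_{\mathrm{LM}}\!\left(x^{(t+1)} \mid x^{(t)}, \dots, x^{(1)}\right) \right)
      = \exp\big(J(\theta)\big)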
And, by rearranging things a little bit, as sketched above, you can see that perplexity is actually the exponential of the cross-entropy. This is a good thing, because if we're training the language model to minimize the cross-entropy loss, then we're training it to optimize perplexity as well. You should remember that lower perplexity is better, because perplexity is the inverse probability of the corpus: if you want your language model to assign high probability to the corpus, that means you want low perplexity. Any questions? Okay. So, RNNs have been pretty successful in recent years at improving perplexity. This is a results table from a recent Facebook research paper about RNN language models. You don't have to understand all the details of this table, but what it's telling you is that at the top we have an n-gram language model, and then in the subsequent rows we have some increasingly complex and large RNNs, and you can see that the perplexity numbers are decreasing, because lower is better. So RNNs have been really great for making more effective language models in the last few years. Okay. To zoom out a little, you might be thinking: why should I care about language modeling? Why is it important? I'd say there are two main reasons. The first is that language modeling is a benchmark task that helps us measure our progress on understanding language. You can view language modeling as a pretty general language-understanding task, because predicting what word comes next, given any kind of generic text, is quite a difficult and general problem. In order to be good at language modeling, you have to understand a lot of things: you have to understand grammar, you have to understand syntax, you have to understand logic and reasoning, and you have to understand something about real-world knowledge. So the reason we care about it as a benchmark task is that if you're able to build a model which is a better language model than the ones that came before it, then you must have made some kind of progress on at least some of those sub-components of natural language understanding. The second, more tangible reason you might care about language modeling is that it's a sub-component of many, many NLP tasks, especially those which involve generating text or estimating the probability of text. Here's a bunch of examples. One is predictive typing; that's the example we showed at the beginning of the lecture, with typing on your phone or searching on Google. This is also very useful for people who have movement disabilities, because there are systems that help people communicate using fewer movements. Another example is speech recognition. In speech recognition, you have some kind of audio recording of a person saying something, and often it's kind of noisy and hard to make out what they're saying, and you need to figure out what words they said. This is an example where you have to estimate the probability of the different options of what it is they could have said. And in the same way, handwriting recognition is an example where there's a lot of noise and you have to figure out what the person intended to say.
Spelling and grammar correction is yet another example where it's all about trying to figure out what someone meant, and that means estimating how likely it is that they were saying different things. An interesting application is authorship identification. Suppose you have a piece of text and you're trying to figure out who likely wrote it, and maybe you have several different candidate authors and text written by each of those authors. You could, for example, train a separate language model on each author's text, and then, because a language model can tell you the probability of a given piece of text, you could ask each of the different language models how likely the text in question is. If a certain author's language model says the text is likely, then the text is more likely to have been written by that author. Other examples include machine translation; this is a huge application of language models, because it's all about generating text. Similarly, summarization is a task where we need to generate some text given some input text. Dialogue as well: not all dialogue agents are necessarily RNN language models, but you can build a dialogue agent that generates its text using an RNN language model. And there are more examples as well. Any questions on this? [LAUGHTER] Yep. [inaudible] Great question. The question was: for some of these examples, such as speech recognition or maybe image captioning, the input is audio or an image or something that is not text, so you can't represent it in the way we've talked about so far. In those examples, you will have some way of representing the input, some way of encoding the audio or the image or whatever it is. The reason I bring it up now, in terms of language models, is that that's the input, but you use the language model to get the output: the language model generates the output in the way we saw earlier. We're going to learn more about those conditional language models later. Anyone else? Okay. So, here's a recap. If I've lost you somewhere in this lecture, or you got tired, now's a great time to jump back in, because things are going to get a little bit more accessible. Here's a recap of what we've done today. A language model is a system that predicts the next word. A recurrent neural network is a family of neural networks, new to us, that takes sequential input of any length and applies the same weights on every step, and it can optionally produce some kind of output on each step, or on some of the steps, or on none of them. So, don't be confused: a recurrent neural network is not the same thing as a language model. We've seen today that an RNN is a great way to build a language model, but it turns out you can use RNNs for a lot of other different things that are not language modeling. Here are a few examples of that. You can use an RNN to do a tagging task; some examples of tagging tasks are part-of-speech tagging and named entity recognition. Pictured here is part-of-speech tagging, and this is the task.
We have some kind of input text such as, uh, "the startled cat knocked over the vase", and your job is to, uh, label or tag each word with its part of speech. So, for example, "cat" is a noun and "knocked" is a verb. So, you can use an RNN to do this task in the way that we've pictured, which is that you, uh, feed the text into the RNN, [NOISE] and then, on each step of the RNN, you, uh, have an output, probably a distribution over what, uh, tag you think it is, and then, uh, you can tag it in that way. And then, also for named entity recognition, that's all about, um, tagging each of the words with what named entity type they are. So, you do it in the same way. [NOISE] Okay. Here's another thing you can use RNNs for: uh, you can use them for sentence classification. So, sentence classification is just a general term to mean any kind of task where you want to take a sentence or other piece of text, and then you want to classify it into one of several classes. So, an example of that is sentiment classification. Uh, sentiment classification is when you have some kind of input text such as, let's say, "overall, I enjoyed the movie a lot", and then you're trying to classify that as being positive or negative or [NOISE] neutral sentiment. So, in this example, this is positive sentiment. [NOISE] So, one way you might use an RNN to tackle this task is, uh, you might encode the text using the RNN, and then really what you want is some kind of sentence encoding so that you can output your label for the sentence, right? And it'll be useful if you would have a single vector to represent the sentence rather than all of these separate vectors. So, how would you do this? How would you get the sentence encoding from the RNN? [NOISE] Uh, one thing you could do [NOISE] is, you could use the final hidden state as your sentence encoding. So, um, the reason why you might think this is a good idea is because, for example, in the RNN, we regard the final hidden state as, um, the thing you use to predict what's coming next, right? So, we're assuming that the final hidden state contains information about all of the text that has come so far, right? So, for that reason, you might suppose that this is a good sentence encoding, and we could use that [NOISE] to predict, you know, what, uh, what sentiment is this sentence. And it turns out that usually a better way to do this, usually a more effective way, is to do something like maybe take an element-wise max or an element-wise mean of all these hidden states to get your sentence encoding, um, [NOISE] and, uh, this tends to work better than just using the final hidden state. [NOISE] Uh, there are some other more advanced things you can do as well. Okay. [NOISE] Another thing that you can use RNNs for is as a general purpose encoder module. Uh, so, here's an example that's question answering, but really this idea of RNNs as a general purpose encoder module is very common [NOISE] and is used in lots of different, um, deep learning [NOISE] architectures for NLP. [NOISE] So, here's an example which is question answering. Uh, so, let's suppose that the, the task is, you've got some kind of context, which, in this, uh, situation, is the Wikipedia article on Beethoven, and then you have a question which is asking, what nationality was Beethoven? Uh, and this is actually taken from the SQuAD Challenge, which is the subject of the Default Final Project.
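Going back to the sentence-encoding options for a moment, here is a minimal PyTorch sketch of the pooling choices just described; all sizes here are illustrative assumptions.

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=50, hidden_size=64, batch_first=True)
    x = torch.randn(8, 20, 50)            # (batch, seq_len, embedding_dim)
    hiddens, h_final = rnn(x)             # hiddens: (batch, seq_len, 64)

    # Option 1: use the final hidden state as the sentence encoding.
    enc_final = h_final.squeeze(0)        # (batch, 64)

    # Option 2 (usually more effective): element-wise mean or max
    # over all of the hidden states.
    enc_mean = hiddens.mean(dim=1)        # (batch, 64)
    enc_max = hiddens.max(dim=1).values   # (batch, 64)

    # Either encoding can feed a linear layer to classify,
    # e.g., three sentiment classes.
    logits = nn.Linear(64, 3)(enc_max)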
So, um, if you choose to do- to do the Default Final Project, you're going to be building systems that solve this problem. So, what you might do is, you might use an RNN to process the question, what nationality was [NOISE] Beethoven? And then, you might use those hidden states that you get from this, uh, RNN of the question as a representation of the question. And I'm being intentionally vague here [NOISE] about what might happen next, uh, but the idea is that you have [NOISE] both the context and the question are going to be fed some way, and maybe you'll use an RNN on context as well, and you're going to have lots more neural architecture in order to get your answer, which is, uh, German. So, the point here is that the RNN is acting as an encoder for the question, that is, the hidden states that you get from running the RNN over the question, represent the question. [NOISE] Uh, so, the encoder is part of a larger neural system, [NOISE] and it's the, the hidden states themselves that you're interested in because they contain the information. So, you could have, um, taken, uh, element-wise max or mean, like we showed in the previous slide, to get a single vector for the question, but often, you don't do that. Often, you'll, uh, do something else which uses the hidden states directly. So, the general point here is that RNNs are quite powerful as a way to represent, uh, a sequence of text, uh, for further computation. Okay. Last example. So, going back to RNN language models again, [NOISE] uh, they can be used to generate text, and there are lots of different, uh, applications for this. So, for example, speech recognition, uh, you will have your input, which is the audio, and as a student asked earlier, this will be, uh, represented in some way, and then, uh, maybe you'll do a neural encoding of that, [NOISE] and then, you use your RNN language model to generate the output, which, in this case, is going to be a transcription of what the audio recording is saying. So, you will have some way of conditioning, and we're gonna talk more about how this works, uh, in a later lecture, but you have some way of conditioning your RNN language model on the input. So, you'll use that to generate your text, [NOISE] and in this case, the utterance might be something like, what's the weather, question mark. [OVERLAPPING] [NOISE] Yeah. [NOISE] In speech recognition, [inaudible]. Okay. So, the question is, in speech recognition, we often use word error rates to evaluate, but would you use perplexity to evaluate? [NOISE] Um, I don't actually know much about that. Do you know, Chris, what they use in, uh, speech recognition as an eval metric? [NOISE] [inaudible] word error rate [inaudible]. The answer is, you often use WER, uh, for eval, but you might also use perplexity. Yeah. Any other questions? [NOISE] Okay. So, um, this is an example of a conditional language model, and it's called a conditional language model because we have the language model component, but crucially, we're conditioning it on some kind of input. So, unlike the, uh, fun examples like with the Harry Potter text where we were just, uh, generating text basically unconditionally, you know, we trained it on the training data, and then, we just started [NOISE] with some kind of random seed, and then, it generates unconditionally. This is called a conditional language model because there's some kind of input that we need to condition on. 
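Here is a minimal sketch of that conditioning idea: a decoder RNN whose initial hidden state comes from some encoder of the input (audio, image, whatever). The sizes, the zero "encoder state", and the greedy decoding are all illustrative assumptions, not the method of any particular system.

    import torch
    import torch.nn as nn

    vocab_size, hidden_size = 1000, 64
    embed = nn.Embedding(vocab_size, hidden_size)
    cell = nn.RNNCell(hidden_size, hidden_size)
    out_proj = nn.Linear(hidden_size, vocab_size)

    def generate(h_encoder, start_token=0, max_len=10):
        # h_encoder: (1, hidden_size) vector summarizing the input we condition on.
        h, token, output = h_encoder, torch.tensor([start_token]), []
        for _ in range(max_len):
            h = cell(embed(token), h)           # one RNN step, same weights every step
            token = out_proj(h).argmax(dim=-1)  # greedy: take the most likely next word
            output.append(token.item())
        return output

    print(generate(torch.zeros(1, hidden_size)))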
Uh, machine translation is an example [NOISE] also of a conditional language model, and we're going to see that in much more detail in the lecture next week on machine translation. [NOISE] All right. Are there any more questions? You have a bit of extra time, I think. [NOISE] Yeah. I have a question about RNNs in general. [NOISE] Do people ever combine the RNN, uh, patterns of architecture, um, with other neural networks? Say, [NOISE] you have, um, you know, N previous layers that could be doing anything, and at the end of your network, you wanna run them through, uh, five recurrent layers. Do people mix and match like that, or these, uh, [inaudible]. [NOISE] Uh, the question is, do you ever combine RNN for the other types of architecture? So, I think the answer is yes. [NOISE] Uh, you might, [NOISE] you know, uh, have- you might have other types of architectures, uh, to produce the vectors that are going to be the input to RNN, or you might use the output of your RNN [NOISE] and feed that into a different type of neural network. So, yes. [NOISE] Any other questions? [NOISE] Okay. Uh, so, before we finish, uh, I have a note on terminology. Uh, when you're reading papers, you might find often this phrase vanilla RNN, and when you see the phrase vanilla RNN, that usually means, uh, the RNNs that are described in this lecture. So, the reason why those are called vanilla RNNs is because there are actually other more complex kinds of RNN flavors. So, for example, there's GRU and LSTM, and we're gonna learn about both of those next week. And another thing we're going to learn about next week [NOISE] is that you can actually get some multi-layer RNNs, which is when you stack multiple RNNs on top of each other. [NOISE] So, uh, you're gonna learn about those, but we hope that by the time you reach the end of this course, you're going to be able to read a research paper and see a phrase like stacked bidirectional LSTM with residual connections and self-attention, and you'll know exactly what that is. [NOISE] That's just an RNN with all of the toppings. [LAUGHTER] All right. Thank you. That's it for today. [NOISE] Uh, next time- [APPLAUSE] next time, we're learning about problems [NOISE] and fancy RNNs. [NOISE]
Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 9: Practical Tips for Projects
[NOISE] Okay everyone, let's get started for today. Okay. So, we're into week five of CS224n. And so, this is the plan for today. Um, in some sense a lot of this class is gonna be an easy class, because I'm gonna talk about things like, um, final projects and tips for what you're meant to do, and finding a topic, and writing up your work, and things like that. Um, so for, um, so, two-thirds of the class there isn't a lot of, um, deep technical content. But I hope there's actually just some useful stuff, and stuff that would be good to know about. One way you can think about this is until, until this year we had a midterm in this class. So, you know, if we weren't doing this, the class should instead be doing the mid-term based on all the material that we've covered, um, so far. So, this should be really pleasant by comparison. Um, but that isn't gonna be quite the entire class. So, for this piece here in the middle I'm gonna spend a while back on some of the topics of last week. So, I wanted to have one more look at some of these gated recurrent models, um, that Abby introduced last week. And I guess my hope is that now that you've had a bit more time to look and read about things, and hopefully even have started working on the homework for that, maybe it starts to make a bit more sense. Or else, even if it's more confusing than before, you've got some idea of what your confusions are and questions. And so, hopefully it's, um, good to think about those one more time, because I think they are quite a complex notion, and it's not so obvious what they're doing and why they're doing anything useful, or whether they're just this big complex blob of mystery. And then also to touch on a couple of machine translation topics that have, um, come up in the final project that we didn't really get time to say much about last week. [NOISE] Okay. So, let's get started. Um, so, this is our coursework and grading that we showed at the beginning. And so, the main thing I wanna do today is talk about this final project. Um, but before I do that, let's just save one minute on participation. Um, so, I guess we started into one aspect of the participation policy, um, last Thursday when we took attendance, and that makes it sound draconian, but I wanted to say, um, the positive viewpoint of, um, the participation points. I mean, obviously this is a big class. There are lots of people. Um, our hope is just that people will variously be sort of engaged and involved in the class, and the participation points, ah, are our way of doing that. I mean, basically the way this is set up, I mean, if you do much of anything you should just get three percent for the participation points. It shouldn't be hard. I mean, I will bet you that there will be some people who at the end will have gotten seven points in the participation category. And unfortunately we cap you; we'll only give you three percent for the participation category. But you know, providing you usually come to class, or usually write the, um, reaction paragraphs to [NOISE] the invited speakers if you are an SCPD student, or sometimes, um, write a helpful answer on Piazza, right, you're already gonna be at three percent. Um, yeah. And so, one, but one other thing, um, that's a way to get some parti- participation points, that's out today. So, um, today we're putting up our Mid-quarter feedback survey. And we'd love to have you fill that in. I mean, we'd like to get your thoughts on the course so far.
And, you know, for you guys, there are two ways that you can win. First, if you give us some feedback, that can help the rest of your quarter be better, but we've also got a simple bribe built into this, um, which is you get half a participation point simply for filling in, um, the, um, Mid-quarter survey. But it'd be really good to get your feedback on that. Okay. So, then the main thing I want to get to today is to talk about [NOISE] the final project. Okay. And so, I'll jump right ahead, um, into that. So, for the final project there are two choices. Um, you, you can either do our default final project, which I'll say a little bit about (it's doing SQuAD question answering), or you can propose a custom final project, which we then have to approve. And in the course of that, um, if you have some outside mentor, um, you can say who they are in your project proposal, but otherwise, um, we'll attempt to assign you a mentor somewhere out of the course staff. Um, so, for all the assignments, through assignments one through five, you have to do them by yourself. Um, for the final project, in either form of that, you can do it as a team. So, you can do it as one, two, or three people. And how does that work? Um, well, it works like this, um: if you're a bigger team, we do expect you to do more, and there are actually two ways you can be a bigger team that I'll point out. One way is having more people, being two or three people. And the other thing that comes up is, um, sometimes people wanna do a final project for more than one class at the same time. In particular for this quarter, I know there are at least a couple of people who are hoping to do, um, a joint project with Emma's reinforcement learning class. And we allow that as well. But we sort of do multiplication, because if you're two people using it for two classes, that means it should be four times as great as what one person is doing for one class, right? So, how it works with larger teams, you know, in all honesty, is a little bit subtle, because, you know, the truth is, if something is just bad, um, your model was broken, um, or your experiment failed, um, and you don't know why, you know, if there are just obvious ways in which what you've done is bad, it's sort of bad whether you're one person or four people. Um, and if you've written it up beautifully, you've written it up beautifully regardless of whether you're one person or four per- people. But, you know, nevertheless, the expectation is that if you're one person, we'll be pleased if you've put together one model and gotten it to work well, um, but if you're three people, we'll say, "Well, that wasn't such a big effort, um, running this one model against this task." Surely if there are three people, they could have investigated some other model classes and seen whether they perform better or worse on this task. And we'll feel a sense that it's lightweight. So, we are expecting both more ambitious projects and more thorough exploration of them if you're a bigger team or you're using it for multiple classes. Um, for the final project, you are allowed to use any language or deep learning, um, framework that you choose to. We don't insist on what you use, though in practice, in past years, basically everyone keeps on using what they've learned, um, in the assignments. I expect that will be true, um, this time as well. [NOISE] Okay. So, um, let me just mention quickly the default final project, so that you've got, um, some sense of context.
So, the materials of that will be released this Thursday. And so, the task for it is a textual question-answering task, which is done over the, the Stanford Question Answering Dataset, SQuAD, which was a dataset put together, um, by Percy Liang and students here in the department. Um, so, we've used this as a default final project, um, before, but we're mixing up a couple of things this year. I mean, firstly, the starter code we're providing this year is in PyTorch, to fit in with what we've done in the rest of the class. But secondly, the SQuAD team released a new version of SQuAD, SQuAD 2.0, and we're going to use that for the class this year. And the essential difference in SQuAD 2.0 is that in SQuAD 1.1 or 1.0, every question had an answer in the passage of text, whereas in SQuAD 2.0, a lot of questions don't have answers. So, there's this extra significant thing that you need to do, which is working out, um, whether a question has an answer. So, th- this is just one example, um, which just gives you a sense of what SQuAD is like. So, there's a paragraph of text. I've just put a subset of it here, um: Bill Aken, adopted by Mexican movie actress Lupe Mayorga, um, grew up in the neighborhood town, neighboring, sorry, neighboring town of Madeira, and his song chronicled the hardships faced by the migrant farm workers he saw as a child. Right, there's then a question, um, "In what town did Bill Aken grow up?" (right, actually I misspelled that, sorry, it should have been Aken without an I; I got confused with our former department chair, Alex Aiken, I guess when I was typing). And the answer you are meant to give is Madeira. Um, so, just incidentally, a random fact. Um, so, quite a few of you know about something that was recently in the kind of tech news and that we're going to talk about later in the class. Um, that people, um, from Google produced this very strong new natural language understanding representation model called BERT, which is one of several models in a class of contextual word representation models that have come into prominence in 2017 and '18. And in general, BERT has sort of produced very good performance for very many tasks. Indeed, if you look at the SQuAD 2.0 leaderboard online, um, at this URL, what you'll find is that all of the leading systems use BERT in some way or another, these days. Um, but nevertheless, this was actually a question that BERT got wrong. Um, BERT said "no answer" to this question, rather than getting the correct answer, even though it looks kind of straightforward reading it as a human being. It doesn't really look like a tricky reading comprehension question to a human. Um, so, that's the default final project. So, on Thursday, I'm going to talk more about the default final project. I'm going to talk about how people build textual question answering systems. And the details on the default final project should all be posted by then, but that's just to give you a bit of context of what the other choice is. And today, I'm sort of more going to be aiming at people, um, doing the custom final project. But let me just sort of say a bit first about the choice between the two of them. So, um, why might you want to choose the default final project?
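As a side note on the data itself, a SQuAD-2.0-style example looks roughly like the dictionary below; the field names follow the released dataset as best I recall, and the answer_start character offset here is just an illustrative guess.

    # Roughly what one SQuAD 2.0 example looks like (values illustrative):
    example = {
        "context": "Bill Aken, adopted by Mexican movie actress Lupe Mayorga, "
                   "grew up in the neighboring town of Madeira ...",
        "question": "In what town did Bill Aken grow up?",
        "answers": [{"text": "Madeira", "answer_start": 90}],  # offset is a guess
        "is_impossible": False,  # SQuAD 2.0 also has unanswerable questions (True)
    }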
So, if you have limited experience with research, you don't have any clear idea of a research project you want to do this quarter, or you're just really busy with other classes (you're enrolled in CS140 and you're just really loade- loaded [LAUGHTER] now with other classes you're doing this quarter), um, you'd be happy to have just a clear goal to work towards, a leaderboard of your fellow students that you can compete against: do the default final project. Um, I think for many people it's actually the right choice. And I mean, for what it's worth, typically slightly over half of people have done the default final project. It's normally that, so, 55 percent have done the default final project and the rest the custom final project. So, if you do the default final project, you'll get lots of guidance. You get lots of scaffolding. There are clear things to aim at in what you do. Um, the course staff are in general most prepared and most able to help you. Um, and in particular, I mean, for the bottom bullet here, I mean, you know, something to think about in making the choice is that some of it comes down to how committed, organized, and keen you are to do your own custom final project. If you've got something you really want to do for a custom final project, great. We love to see interesting custom final projects. But, you know, if you're going to end up doing something that just looks worse, like [LAUGHTER] not done as well [LAUGHTER] as what you would've done if you'd just done the default final project, then you should probably choose the default final project [LAUGHTER]. Um, okay. But even if you think you'll do the default final project, I hope that some of this lecture will still, um, be useful. Well, the part in the middle, when I talk back about MT and gated recurrent networks, is definitely useful. But, you know, beyond that, um, some of the tips on doing research, and discussions of, sort of, how to make neural networks work, and error analysis, and paper writing: these are all good topics that apply to the default final project as well. So, in the other direction, um, if you have some research project that you're excited about (possibly it's one you are already working on, or possibly it's one that you've just always wished to do, something exciting with neural networks and rap music), um, well, you know, the custom final project is an opportunity to do that. Um, so, it's a chance for you to do something on your own. Um, you know, obviously, if you're not interested in textual question-answering but you think you might like machine translation, well, it's an opportunity, um, to choose any topic of your own. It's also a way to sort of experience much more of the research pro- process, because, you know, the default final project is a bigger, more open-ended thing than any of our assignments, but, you know, nevertheless, the default final project is still sort of a pre-set-up thing where you don't have to find your own problem, find your own data, work out a good approach to it. A lot of that's sort of been done for you. So, for a custom final project, it's much more your own job to sort of define and execute a mini research project. And so, if all of that stuff seems appealing, or some of it seems appealing, um, then aim at the custom final project. Um, doing this just reminded me about a fact about assignments one to five.
You know, for assignments one to five, we are hoping that they can be a set of stepping stones for learning how to build deep learning systems. But, you know, one of our goals in that is to give you fewer handholds as time goes by. So, you know, assignment one was really easy, and assignment three, we tried to make really handholdy, so people could start to learn PyTorch. But, you know, we're actually hoping for assignments four and five that they're actually harder, so that you're getting more experience of working out how to build and do things by yourself. Because if the only thing you ever see is completely scaffolded assignments, it's sort of like when you do CS106A: you can do a great job on the CS106A assignments but still not really know how to write a program by yourself. And that's sort of what we want to, um, sort of get you beyond, um, in the latter two assignments. So, I hope you have started on assignment four. If not, you really should start and get underway soon, as Abby was emphasizing. Okay. So, this year for the, um, final project, whichever one you're doing, um, we're actually putting more structure in than we have in previous years, to encourage people to get going. And so, in particular, there are early-on components which are worth points in the grading. So, the first part of that is a project proposal, um, which we want one of from each team. So, one per team, um, you can just do a joint one, um, and it's worth five percent. Um, so, we're releasing the details on Thursday, which is when assignment four is due, and it'll be due the following Thursday. So, we're actually having an interruption in the sequence of assignments, right? So, for the next week, um, the thing to do is the project proposal. And then the week after that, um, we're back to assignment five, and then we go full time into the final project. So, what we're wanting for the project proposal is, we're actually wanting you to do a little bit of starting-off research, in the form of reading some paper. So, find some paper that's, um, relevant to the research, um, that you are going to do. Um, read it, write a summary of what it does, um, write down some thoughts on how you could adapt or extend ideas in it in your own final project, um, and then say something about what your plan is for what you're goi- hoping to do for your final project. And especially if you're doing a custom final project, there's more to write there, because we'll want to make sure that you have some idea as to what data you can use and how you are going to evaluate it, whereas a couple of those things are actually sort of determined for you if you're doing the default final project. Um, and so then after that, we're going to have a project milestone, um, which is the progress report, where we're hoping that you can report that you're well along in your final project: that you've run at least some experiment and have some results on some data that you can talk about. So the default- the project milestone is due on, um, Thursday, March seven. So it's actually more than halfway through the period that's sort of dedicated to the final project. We sort of put it past halfway because the fact of the matter is it always takes people time to get going, um, but nevertheless, you know, what you should have in your head is: unless you're halfway through by the time you're handing in your, um, project milestone, then you're definitely behind.
And you'll be doing that typical Stanford thing of having a lot of late nights and lack of sleep in the last week [LAUGHTER] of class trying to catch up for that. Um, okay. So, um, now I sort of want to just start saying a bit, for custom final projects, about some of the sort of thinking and types of things that you could do. Um, so you have to determine some project, um, if you're doing a custom final project. So, in philosophy of science, you know, there are basically two ways for any field you can have a project. You either start with some domain problem of interest: you've [NOISE] just got something you're interested in, or you say, "Gee, I'd like to do better machine translation." And then you work out some ways to address it with technology. Or you start with some, um, technical approach of interest, and you say, "Oh well, those LSTMs seemed kind of neat, but I didn't understand why there's that extra tanh, and I think it'd be better if it changed in this other way." And you start exploring from a technical direction to try and come up with a better idea. And then you're wanting to prove that it works. So, in terms of the kinds of projects that people do for this class (this isn't quite an exhaustive list, but this is sort of in general what people do), the first category, and really I think this is the bulk of projects, over half, is people find some task or application of interest and they build some neural network models to try and do it as effectively as possible. Um, there's a second category where people sort of concentrate on implementing, so re-implementing, some complex neural architecture and getting it to work on some data. And so let me just say a couple of sentences on this. Um, so, it's certainly okay for you to, um, start by re-implementing some existing model. Um, and for some people, that's as far as they get. And then the question is, um, is that okay? And the answer to whether that's okay sort of largely depends on how complex your neural model is. Um, so if what you think is, "Okay, I'm going to, um, re-implement something like we've seen already," like a window-based classification model, and you just re-implement that and run it on some data and get some results and stop, that's definitely a bad project. Um, but there are lots of very complicated and sophisticated neural, um, architectures out there. And if you're trying to do something complicated well, then that can be a fine project. Um, so, I actually sort of stuck in a few examples of projects. So, I mean, here's one that was actually from a couple of years ago. Um, so this was in the 2017 class. And so, shortly before the 2017 class, DeepMind, which is one of the, um, organizations producing the most complicated neural models, had just released a paper about the differentiable neural computer model, which was a model of how to have something like a differentiable Turing machine-like architecture inside a neural network. Um, and a student thought, um, this would be a great challenge: to try and, um, re-implement the differentiable neural computer, which DeepMind hadn't released any source code for, because they're not the kind of place that generally releases their source code. Um, and, you know, this was actually an extremely ambitious project, because it's a very complex architecture which is hard to get to train.
And so, you know, at the end, she hadn't been able to sort of train as big a model or get as good results as they report in the paper, but, you know, frankly, we thought it was pretty miraculous that she managed to get it working at all in the period of time we had in the class, and she did successfully do an open-source re-implementation of this model which basically worked the same as in their paper, though not quite as well. So, you know, that seemed a huge achievement. So, you certainly can do something of that sort. Right. So, um, you can sort of, from a technical direction, have some ideas for a variant model and explore, um, how to make a different kind of model class, and then look at how well it works on some problem. Another kind of project you can do is an analysis project, so you might be interested in something in natural language or something in the behavior of neural networks, and just think that you want to analyze them more closely. So, you might think, "Oh, maybe these neural machine translation systems work great providing the word order is the same in the source and target language, but can they really do a good job of reordering phrases for different language types? How much does their performance vary based on the amount of reordering between the source and target language?" And you could do some experiments to try and investigate that as an analysis problem that looks at a model, and we sometimes get projects like that. Down at the bottom is the rarest kind of project, which is when some people try to do something theoretical, which is to prove some properties of a system. This is easiest to do in simple systems, for something like word vectors: you might want to prove something about the kind of spaces that are induced by word vectors, and what properties you need to have in models for word analogies to work, or something like that. Um, here are just another couple of examples that show some of the other classes. So, this one is an example of "find a problem and build some models". So, these three people, um, looked at Shakespearean sonnet generation, and then they considered several different models for Shakespearean sonnet generation and got the best results from this one (you probably can't really see all the details, but it's a sort of mixture of a word-level and character-level gated model that feeds into a word-level LSTM and produces sonnets), and the output wasn't totally bad: "Thy youth's time and face his form shall cover. Now all fresh beauty my love there. Will ever time to greet forget each like ever decease, but in a- in a best at worship his glory die." Okay, it's maybe not perfect, [LAUGHTER] but it sort of sounds like a Shakespearean sonnet. Um, okay. Yeah. So, I showed you that one already. Um, here's, um, an example of someone who designed a different kind of network, and this was a project that came out of this class that was then continued, and they got a conference paper out of it, an ICLR 2017 paper. So, this was looking at doing a better job at building a neural language model. And essentially, they had two ideas, both of which seem useful for building better neural language models.
And so, one is that in the stuff that we've presented so far, whether it was the early word vectors, or what Abby presented last week in the neural language model, there are effectively two vectors for each word: there's one for the word encoding on the input, and then, when you have the softmax on the other side, effectively the rows of the matrix that go into the softmax are also word vectors, for determining how likely you are to produce different words. And so, um, these two people had the idea that maybe if we actually tied those two word vectors together in the model, that would help and produce a better model. Um, and this was actually done several years ago, when that was a novel idea which hadn't actually been tried; this was done in the 2016 class. And then they had this second idea, which was: well, maybe the kind of one-zero cross entropy, where you look at the correct word that you are meant to produce and sort of work out a loss based on that, maybe that's not very good, because you don't get partial points if you produce a different word that's semantically similar. And so, they had this idea that they could use word vector similarity, and then you'd be giving a score for any word that was produced next based on how similar it was, according to word vector similarity, to the word that you were meant to produce next. And that was also a useful idea that they were able to produce improved language models with. So, that was a cool project. Um, here's an example of, um, somebody from last year, um, who did an analysis project. So, they were going to, um, evaluate on some task; they actually did several tasks: um, word similarity, analogy, and the SQuAD, um, question answering system. But the question was: okay, a lot of neural network models are big, and so aren't very suitable for phones. Um, could we get away with compressing the models a lot, so that rather than having doubles, or 32-bit floats, or even 16-bit floats, that are now used quite a bit in neural networks, could we, um, compress a lot more and quantize, um, numeric values, so that we're only using, say, two bits per parameter, or literally maybe four bits per parameter? And if you do that naively, it doesn't work. But if you explore some cleverer ways of doing it and see how to make things work, you can actually get it to work, um, really well. Um, in fact, it actually seems like sometimes you can improve your performance doing this, because the quantization acts as a form of regularizer. Um, you can find lots of other projects, um, online, if you look at the CS224n pages, and you should. Um, okay. So, if you want to do a final project, you have to find someplace to start. You know, one place is to start looking at papers; there's an online anthology of most of the NLP conference papers. You can look at ML conferences, which have lots of relevant papers as well. You can look at past CS224n papers, that cover lots of topics. Um, though, you know, don't also forget, um, the advice down the bottom, um, which is: look for an interesting problem in the world. Um, so, Stanford's CS emeritus professor Ed Feigenbaum likes to quote the advice of his, um, advisor, Herb Simon, um, of: "If you see a research area where many people are working, go somewhere else." Um, well, you know, in the context of this class, don't go so far away that you're not using neural networks or NLP, because that won't work as a project for this class.
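Coming back to the first of those two ideas for a moment: tying the input embedding matrix to the output softmax matrix is easy to sketch in PyTorch. This is a minimal illustration with made-up sizes, not the paper's exact code.

    import torch.nn as nn

    vocab_size, hidden_size = 10000, 256
    embed = nn.Embedding(vocab_size, hidden_size)
    out_proj = nn.Linear(hidden_size, vocab_size, bias=False)

    # Tie them: both layers now share one (vocab_size, hidden_size) matrix,
    # so there is a single vector per word instead of two, and gradients
    # update it from both the input side and the softmax side.
    out_proj.weight = embed.weight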
But, you know, nevertheless, I mean, in some sense it's a bad strategy to say, "Let's look at all the papers that were published last year, and let's start working on one of their problems," or, "Lots of people are working on question-answering; I'll do it too." You know, there are lots of interesting different problems in the world, and if you know of some, you know, cool website that somehow does something interesting related to language, you know, maybe you can make a final project out of that. Um, other ways to find final projects: um, so the person who first put together most of the CS231n content was Andrej Karpathy, um, who now works at Tesla, and among the other things he did for the world, he put together this site, Arxiv Sanity Preserver, um, which is a way to find papers on arXiv, which is a major pre-print server, and if you save a few papers you're interested in, it'll show you other papers that you might be interested in. It'll show you papers that are currently trending. So, that can be a good way to look. Um, if you think it'd be just good to be in some competition where you're wanting to build a system that's better than other people's, um, you can look at leaderboards for various tasks. So, there's this brand new site, paperswithcode.com, which is pretty good though not completely error-free and correct, and it collects a whole lot of leaderboards for a whole lot of machine learning tasks, including tons of language ones. So, it gives leaderboards for question answering, machine translation, named entity recognition, language modeling, part-of-speech tagging. All sorts of tasks you can find there, and find out what the current states of the art and datasets are. Okay. Um, so, you know, different projects are different, but often, for a lot of projects, the things you need to be making sure of are: that it's something that you can get a decent amount of data about, so you can train a model; that it's a feasible task, not so enormous you can't possibly do it in four weeks. Um, you'll want to have some evaluation metric, and normally for deep learning, even if you hope to do some human evaluation as well, you have to have some automatic evaluation metric. Because unless there's just some code that you can run that gives you a score for how well you're doing, you just sort of can't do the deep learning trick of saying, "Okay, let's, um, do backpropagation to optimize our score according to this metric." And pretty much you'll want to do that to be able to do neural network optimization. Um, and we do require that there is an important part of NLP in your class project. I mean, it doesn't have to be the only thing: you can be doing reinforcement learning as well, or you could do image-to-caption, say, if you're doing joint vision and NLP, but there has to be NLP in it. Okay. Ah, last bit before I get back onto the content from last week. Ah, so, something that you'll need to do is have data for your project. Um, so some people collect their own data for a project, and, you know, it's not impossible to collect your own data, especially if there's something you can do with unsupervised data. You might be able to get it by just sort of crawling an interesting website. You can annotate a small amount of data yourself. If you have any site that has some kind of, you know, ratings or annotation stars on it, you can treat those as a form of, ah, annotation. Right?
So, if you want to predict something like, um, you know, which reviews on product review websites do people like? Well, they get star ratings at the bottom from people, and then you can try and fit to that as your supervision. Um, sometimes people have data from an existing project for a company. You can use that. But nevertheless, for most people, um, given that classes are short and things like that, the practical thing to do is use an existing curated dataset that's been built by previous researchers. That normally gives you a fast start and lets you get to work building models; um, there's obvious prior work, there are baselines and previous systems that you can compare your performance with, et cetera. Okay. Um, so, where can you find data? I'll just mention a couple of places here, and there are lots more. So, traditionally the biggest source of linguistic data used by academics was this place called the Linguistic Data Consortium, and they have lots of datasets for treebanks and named entities and coreference, parallel machine translation data, et cetera, et cetera. And so, um, the Linguistic Data Consortium licenses their data; Stanford pays that license, so you can use any of it. Um, but if you want to use it, um, you go to that, um, linguistics.stanford.edu page. And there's a sign-up, um, piece on how to sign up, where you basically, um, say, "I will use this data only for good Stanford purposes and not as the basis of my startup." And, um, then you can have access to that data, and it can be made available by NFS or otherwise. Um, but as time has gone by, there's a ton of curated NLP data that's available on various websites. In fact, if anything, the problem is it's just sort of spread over the web, and it's sort of hard to find different things. But there are some sites that have a lot of data for various purposes. So, anything related to machine translation, or just parallel, um, data across different languages: the statistical MT statmt.org site has a great amount of data, and that organization runs shared tasks every year, the Workshop on Machine Translation, WMT, which Abby already mentioned in her class. And they've got datasets that are used for those tasks, and then there are leaderboards for those tasks. And you can find data for that. Um, if you thought dependency parsing was cool, um, there's the Universal Dependencies site, which has treebanks in the same annotation scheme for about 60 different languages, and you can work on parsers for different languages and things like that. Um, I'm not gonna bore you with going through all of them, but, you know, there are just tons and tons of other datasets: Facebook has released datasets, Google's released datasets, and as I said, Stanford has released several other datasets, including the Stanford Sentiment Treebank and the Stanford Natural Language, um, Inference corpus, uh, new question-answering datasets including HotPotQA, and conversational question answering. Other groups at different universities have released datasets. There are just tons of them. You can find data on sites like Kaggle, which has machine-learning competitions. There are sites with lists of datasets. You can look at research papers and see what datasets they used. And of course, you can ask the course staff or on Piazza to try and find suitable datasets for a project. Okay.
Um, so that's a fair bit about the projects, and I've got a bit more to say later about doing projects. Does anyone have any questions up until now on projects? Okay. Um, well, so now we're gonna sort of, um, flip a switch in our brains and go back and have one more look, um, at gated recurrent units, um, and what happens and what they mean. Um, and, you know, this is sort of the same material that Abby presented, presented a little bit differently, but, you know, I hope it might just sort of give one more way of thinking a bit about what's happening with these gated recurrent units, and why they might be doing something useful, and what the alternatives to them are. So, if you remember, the problem we started with is that we wanted to understand sort of derivatives backward in time. And so, the idea of that is: well, if we twiddle this a little bit at time t, how much effect is that going to have? So we make some adjustment here; how much effect is that going to have n time steps later? Um, and well, we sort of looked at the derivatives, and we sort of saw we got these, um, terms for each successive time step. And so, as Abby discussed, the problem is that for the derivatives that we got, we kind of got this matrix form for each time step. And so, if we're going through a lot of time steps, we get a lot of matrix multiplies, and as the result of those matrix multiplies, pretty much either things disappeared down to zero or exploded upward, depending on what was in the matrix. And so that sort of means, when the gradient goes to zero, we kind of can't know what's happening there: whether there isn't any conditioning, or we just can't measure it. And so that's sort of made people think that maybe this naive, um, recurrent neural network transition function just isn't a good one to use. And that sort of leads into these ideas of gated recurrent units. Right? Because if we have the simple recurrent neural network where we're sort of feeding forward for each step in time, well, what happens is, when we backpropagate, we have to backpropagate through every intermediate node, and that's where we sort of have our gradients disappear. And so an idea of how you could fix that is to say: well, suppose we just put in direct connections that were longer distance; um, then we'd also get direct backpropagation signal, and so then we wouldn't have this same problem of vanishing gradients. And effectively, we've sort of looked at two ways in which you can achieve that effect. Because one way you can achieve that effect, which Abby looked at in the end part of the last lecture, was this idea of attention. So, when you've got attention, you actually are creating these shortcut connections (oops, they're the blue ones), um, from every time step, and using it to calculate an attention distribution. But the way the attention was done that we looked at, it was sort of mushing together all previous time steps into some kind of an average. But the idea of the gated recurrent units is, in some sense, we want to achieve this same kind of ability to have shortcut connections, but we want to do it in a more controlled and adaptive fashion, where we still do remember the position of things. So, how can we create an adaptive shortcut connection? And so that's, um, what we start to do with the gates that are put into a gated recurrent network. So, first off we sort of say: let's have a candidate update, which is exactly the same as the one that's used in a simple recurrent neural network.
But what we can do is add a gate. And so, the gate will calculate a value from zero to one. And so what we're going to do here is mix together, using our gate, the candidate update, which is just like a simple recurrent neural network update, together with simply directly carrying forward the hidden state from the previous time step. So, once we're doing that, we are sort of adaptively partly using a computation from one time step back, um, done as a recurrent neural network, and we're partly just inheriting the hidden state from the previous time step. So, it's sort of like a shortcut connection, but we're weighting how much we're short-cutting and how much we're doing our computation. And we control that adaptive choice by using a calculation to set the gate. And we do that with a sigmoid, um, computed over the input and the previous hidden state, using, again, an equation kind of like a simple recurrent neural network. Okay. Um, but, you know, if you wanted to go a bit further than that, um, you could think: well, maybe sometimes we might actually just want to get rid of the stuff that was in the past. Maybe the stuff in the past sometimes becomes irrelevant; like, maybe sometimes we start a new sentence or a new thought, and we just want to get rid of the stuff that's in the past. And so, that can lead into this idea of having a second gate, a reset gate. And so the reset gate calculates a value from 0 to 1, um, just like the other gates, and then we're doing this element-wise product between the reset gate and the previous hidden state, and that's then sort of saying: well, maybe we want to keep some parts of what was stored previously, and some parts we now want to throw away. And so we put that into the model as a second gate. Um, and so an interesting way to think about that is to sort of think about this recurrent neural network as a little tiny computer, the kind of little tiny computer you might build in a sort of simple architecture class. And if you think about it that way, um, for the basic simple recurrent neural network, the way the tiny computer works is that you've got a bank of registers h, your hidden state, and at each time step you have to read, whoops, at each time step you have to read the entirety of your bank of registers, you do some computation, and then you write the entirety of your bank of registers. And, you know, in terms of thinking about computer architecture, that sounds like a pretty bad way to implement a simple computer. Um, so precisely what a gated recurrent unit is doing is saying, "Well, maybe we can have a slightly more sophisticated little baby computer." Instead of that, we could select a subset of the registers that we want to read. And so, the reset gate can control that, because it can say, "We'll just ignore a bunch of the other registers." Um, it then will compute a new value based on just these, um, stored registers, and then the update gate, which is also adaptive, can say, "Well, I want you to write some registers, but the rest of the registers will just keep their previous value." That seems a useful idea to have in a computer. And so, that's what we're doing here. And so, this model here is, um, what Abby presented second, as the gated recurrent unit. So, this is sort of a much more realistic model, and it sort of in some sense overlaps with the ideas of attention. Okay.
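Putting those pieces together, here is a minimal from-scratch sketch of one GRU step (biases omitted, weight names our own, small random matrices just for illustration); in practice you'd use a library implementation.

    import torch

    def gru_step(x, h_prev, params):
        # One GRU step following the equations above (biases omitted for brevity).
        Wz, Uz, Wr, Ur, Wh, Uh = params
        z = torch.sigmoid(x @ Wz + h_prev @ Uz)           # update gate, in [0, 1]
        r = torch.sigmoid(x @ Wr + h_prev @ Ur)           # reset gate, in [0, 1]
        h_tilde = torch.tanh(x @ Wh + (r * h_prev) @ Uh)  # candidate update (simple-RNN-like)
        # Adaptive mix: partly keep the old hidden state (the shortcut),
        # partly write the newly computed candidate.
        return (1 - z) * h_prev + z * h_tilde

    d = 4
    params = [torch.randn(d, d) * 0.1 for _ in range(6)]
    h1 = gru_step(torch.randn(1, d), torch.zeros(1, d), params)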
Um, so gated recurrent units are actually quite a new model. Um, the model that was done way earlier and has had huge impact is these LSTM, long short-term memory, units, and they are a bit more complex. Um, but, you know, a lot of it is sort of the same, right? So, the hidden state of a gated recurrent unit is kind of equivalent to the cell of the LSTM. So, both of them are using the same idea of summing together a mixture of directly inheriting what you had from the previous time step together with, um, something that you've calculated for the current time step, and the way you calculate it for the current time step is exactly the same in both cases. Whoops, sorry. In both cases, you're calculating the current update using this sort of simple RNN equation. So, those parts are exactly the same. Um, but the LSTM is a little bit more complicated. It now has three gates, um, and it's got this extra, um, hidden state that's then worked out with a bit more complexity. So, in terms of my LSTM picture, you know, the LSTM looks pretty complex if you sort of pull apart all of its math, but so there are three gates, so that you can forget or ignore everything. So, you can forget or ignore the input, you can forget or ignore parts of your previous hidden state, and you can forget or ignore parts of the cell when calculating the output. And when I say "forget or ignore parts of", what that's meaning is you're calculating a vector which is then going to be element-wise multiplied by the input, the previous hidden state, or the cell. And so, that's why you effectively now have an addressable bank of registers, where you can use some of them but not others of them. Okay. So, the bottom part of the LSTM is just like a simple recurrent neural network, um, which then calculates, um, a candidate update. And so, for both the GRU and the LSTM, the real secret is that rather than just keeping on multiplying stuff, what you do is you add two things together. Um, and so this adding is why you don't get the same vanishing gradient evil effects, because you're calculating a new candidate update and you're adding it to stuff that was previously in the cell, and that gives you a simple gradient when you backpropagate: you have a direct linear connection between the cell at time t and the cell at time t minus one. And so, really, that simple addition there is sort of the secret of most of the power of LSTMs, and this same idea of adding two things together has also been a secret of many of the other advances in deep learning recently. So, in vision in the last couple of years, the sort of standard model that everybody uses is ResNets, residual networks, and they use exactly the same secret of allowing these adaptive updates where you add together a current layer's value with directly inheriting a value from the layer below. Um, other things that use similar ideas are things like highway networks and so on. So, that's proven to be an extremely powerful idea. Um, the LSTM is slightly different from the GRU, because when we look back at its equations, the GRU kind of does a linear mixture where you have one gate value, u_t, and one minus u_t, where the LSTM adds values controlled by two different gates, a forget gate and an input gate. Having the adding of two separate gates rather than a mixture is theoretically more powerful.
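For reference, here is one standard way to write the LSTM equations being described (biases omitted; the notation is ours, with \odot meaning element-wise multiplication); the additive cell update in the second-to-last line is the "simple addition" that carries the gradient straight back through time.

    f_t = \sigma(W_f x_t + U_f h_{t-1})          (forget gate)
    i_t = \sigma(W_i x_t + U_i h_{t-1})          (input gate)
    o_t = \sigma(W_o x_t + U_o h_{t-1})          (output gate)
    \tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1})   (candidate update, simple-RNN-like)
    c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
    h_t = o_t \odot \tanh(c_t)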
Um, depending on the application, sometimes it doesn't seem to make much difference, um, but there's definitely a theoretical advantage to the LSTM there. Okay. Um, I hope that's maybe a little bit more helpful, to have seen those again. Um, any questions on gated recurrent units? Still look confusing? I think it's useful to have some kind of idea as to why people came up with these things and why they make sense, but, you know, nevertheless, the reality is, in the sort of era of 2015-plus, any deep learning package you use, whether it's PyTorch, TensorFlow, MXNet, whatever, you know, it just comes with LSTMs and GRUs, and you don't have to program your own. In fact, you're at a disadvantage if you program your own, because if you are using the built-in one, it's using an efficient CUDA kernel from Nvidia, whereas your custom-built one won't be, and will run, say, three times slower. Um, so, you know, you essentially don't have to know how to do it; you can just take the attitude that an LSTM is just like a fancy recurrent network which will be easier to train, and that's true. Um, but, you know, these kinds of architectural ideas have actually been central to most of the big advances that have come in deep learning in the last couple of years, so it's actually good to have an idea, to have some sense, of what were these important ideas that made everything so much better, because the same kinds of component building blocks are ones you might also want to use in custom models that you design for yourself. Okay, two bits of machine translation. Um, so, a bit of machine translation that we sort of didn't cover last week, but that lots of people have been seeing and getting confused by in the assignments, so I thought I'd explain a bit about it, is UNKs: where do UNKs come from and why are there UNKs. And the reason why there are UNKs is effectively kind of for efficiency reasons. So, if you sort of think about producing output in a neural machine translation system (and really this is the same as producing output in any neural natural language generation system, so it's really the same for a neural language model), if you have a very large output vocabulary, it's just an expensive operation. So you have a big matrix of softmax parameters where you have a row for every word, um, and then you have, what, [NOISE] then we have an animation that is not working for me. Oh, all right, there we go. Um, so then we have some hidden state that we've calculated in our recurrent neural network. And so, what we're gonna do is sort of multiply, um, that vector by every row of the matrix, put it through a softmax, and then get probabilities for outputting every word. Um, and, you know, this seems pretty simple, but the problem is that to the extent that you have a humongous vocabulary here, you just have to do a humongous number of rows of this multiplication, and it actually turns out that doing this is the expensive part of having a neural machine translation or neural language model system, right? The LSTM might look complicated and hard to understand, but, you know, it's relatively small vectors that you multiply or dot-product once, and it's not that much work, whereas if you have a huge number of words, this is a huge amount of work.
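To see why that output layer dominates the computation, here is a tiny sketch; the sizes are illustrative assumptions, and the recurrent update, by contrast, only ever touches hidden-size-by-hidden-size matrices.

    import torch

    hidden_size, vocab_size = 1000, 50000
    h = torch.randn(hidden_size)               # decoder hidden state for one step
    U = torch.randn(vocab_size, hidden_size)   # softmax matrix: one row per output word

    logits = U @ h                             # 50,000 x 1,000 = 50 million multiplies per step
    probs = torch.softmax(logits, dim=0)       # distribution over the whole vocabulary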
Just for instance, for the pioneering sequence-to-sequence neural machine translation system that Google first did, they ran it on an eight-GPU machine — they have lots of GPUs — but the way they set it up to maximize performance was that three of those eight GPUs were running a deep multi-layer neural sequence model, and the only thing the other five GPUs were doing was calculating softmaxes, because that's actually the bulk of the computation you need to do. The simplest way to make this computation not completely excessive is to say, "Hey, I'll just limit the vocabulary." Yes, you can make a million different words in English, and if you look at Spanish inflections of verbs there are a lot of them, so there's going to be a huge number of words; but maybe I can make do with a modest vocabulary and it'll be near enough — surely with 50,000 common words I can cover a lot of stuff. That was the starting point of neural machine translation: people used a modest vocabulary of around 50,000 words.
Well, if you do that, then what happens is you have UNKs. UNK means "this is an unknown word, not in my vocabulary," and there are two kinds of UNKs. There can be UNKs in the source language, and those are sort of optional, because it's not actually a problem to have a large source-language vocabulary; but the fact of the matter is that if you've trained a model on a certain amount of data, there are some words you won't have seen, so there will be words that just didn't appear in your training data, and you won't have any pretrained or trained word vector for them. You can deal with that by treating them as UNK, giving them a new word vector when you encounter them. The tricky part is on the translation side: you want to produce rare words in the output, but they're not in your output vocabulary, so your system produces "UNK UNK to UNK," which is not a very good translation, really. And that is what the first neural machine translation systems did. Obviously that's not a very satisfactory state of affairs, so there's been a whole bunch of work on how to deal with it — methods that let you handle a larger output vocabulary without the computation being excessive. One method is what's called a hierarchical softmax: rather than one huge matrix over words, you have a tree structure over your vocabulary, so you can do the calculation with multiple small softmaxes, and you can do that more quickly. I'm not going to go through all these things in detail now — I'm just quickly mentioning them, and anyone who's interested can look them up. People have also used the noise-contrastive estimation idea that we saw with word2vec in this context. That's a way to get much faster training, which is important; it doesn't really solve speed at translation time, but if it means you can train your system in six hours instead of six days, that's a big win, so it's a good technique to use.
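Here is a minimal sketch of the 50,000-word cap just described: build a vocabulary from the most frequent words and map everything else to an UNK token. The special token names are illustrative, not prescribed by any particular toolkit:

```python
from collections import Counter

def build_vocab(token_lists, max_size=50_000):
    """Keep the most frequent words; everything else maps to <unk>."""
    counts = Counter(tok for sent in token_lists for tok in sent)
    vocab = {"<unk>": 0, "<pad>": 1}
    for word, _ in counts.most_common(max_size - len(vocab)):
        vocab[word] = len(vocab)
    return vocab

def encode(sentence, vocab):
    """Rare or unseen words fall back to the <unk> index."""
    return [vocab.get(tok, vocab["<unk>"]) for tok in sentence]
```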
People have done much smarter things, and really the large-vocabulary problem is basically solved now. One kind of thing you can do is produce subsets of your vocabulary and train on a particular subset at a time; then, when you're testing, you adaptively choose a likely list of words that might appear in the translation of a particular sentence or passage, so you can effectively work with an appropriate subset of the vocabulary. That's an efficient technique for dealing with an unlimited vocabulary while only using a moderate-sized softmax for any particular paragraph you're translating; there's a paper that describes that method. Another idea is to use attention when you translate — the idea I talked about at the end of last time. If you have attention, you're pointing somewhere in the source, so you know what you're translating at any point in time. If that word is a rare word that's not in your vocabulary, there are things you can do to deal with it. Firstly, if it's a rare word, its translation is much more likely to be constant, so you might just look it up in a dictionary or word list and stick in its translation. Sometimes it's appropriate to do other things: it turns out that quite a lot of unknown words are things like hexadecimal numbers, FedEx tracking IDs, Git SHAs, and so on, and for a lot of those the right thing to do is just to copy them across. So another thing people have looked at is copying models in machine translation; there's a rough sketch of that UNK post-processing idea below. There are more ideas we could get into here, and next week we're going to start dealing with some of the other ways you could solve this, but I hope that gives you a sense of what these UNKs are about, why you see them, and some of the ways you might deal with them. You're not expected to be doing that for assignment four.
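A minimal sketch of that attention-based UNK replacement, assuming you have the decoder's attention weights available; the function, argument names, and token conventions here are hypothetical:

```python
def replace_unks(output_tokens, attention, source_tokens, dictionary):
    """Post-process a translation: for each <unk> the decoder produced,
    find the source word it attended to most, then either translate it
    via a word list or copy it verbatim (good for IDs, numbers, names).

    `attention` is assumed to be a (target_len x source_len) matrix of
    attention weights; `dictionary` is a plain source-to-target dict.
    """
    result = []
    for t, tok in enumerate(output_tokens):
        if tok == "<unk>":
            best_src = max(range(len(source_tokens)),
                           key=lambda s: attention[t][s])
            src_word = source_tokens[best_src]
            result.append(dictionary.get(src_word, src_word))  # translate or copy
        else:
            result.append(tok)
    return result
```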
Okay, then I just wanted to give a tiny bit more on evaluation. Abby said a little about evaluation with BLEU, and it comes up in the assignment, so I thought I'd give you a bit more context, since there have been quite a few questions about it. The general context is: how do you evaluate machine translation quality? To this day, if you wanted to do a first-rate, bang-up evaluation of machine translation quality, the way you'd do it is to get human beings to assess it: you take translations, send them to people with good bilingual skills, and get them to score things. There are two ways that are commonly used. One is rating on Likert scales for things like adequacy and fluency of translations; another way, which often works better, is asking for comparative judgments — here are two translations of this sentence, which is better? That's still our gold standard of translation evaluation. Another way you can evaluate translation is to use your translations in a downstream task. You could say, "I'm going to build a cross-lingual question answering system, and inside that system I'm going to use machine translation: I'll translate the questions and then try to match them against the documents. My score will then be how good my question answering system is, so the machine translation system is better if my question-answering score goes up." That's kind of a nice way to do things, because you avoid needing human beings and yet have a clear numerical measure coming out the back end. But it has some catches: often there's a fairly indirect connection between your end task and the quality of the machine translation, and it might turn out that certain aspects of the translation — say, whether you get agreement endings right on nouns and verbs — are simply irrelevant to performance on the task, so you're not assessing all aspects of quality.
The third way is to come up with some way to score the direct task, which here is machine translation. This has been a valuable tool for really the last 25 years of people building machine learning models, because as soon as you have an automatic way to score things, you can run automated experiments: "Let me try out these 50 different options; let me vary these hyper-parameters and work out which way of doing things is best." That importance has only grown in the deep learning era, when all the time what we want to do, as Abby discussed, is build end-to-end systems and then backpropagate through the entire system to improve them — and we do that based on having some objective measure, which is our automatic metric. So that led to the development of automatic metrics for assessing machine translation quality, and the most famous and still most used one is BLEU. As Abby briefly mentioned, we have a reference translation done by human beings: at some point a human has to translate each piece of source material once, but then you take a machine translation and score it based on the extent to which word sequences appearing in the machine translation also appear in the reference translation. You work out n-gram precision scores for different values of n; the standard way is to do it for unigrams, bigrams, trigrams, and 4-grams — word sequences of size one to four — and for each of those in the machine translation, you check whether it also appears in the reference translation. There are two tricks at work here. One trick is that you have to do a kind of bipartite matching. To take a silly example: it can't be that because there's a "the" somewhere in the reference translation, this "the" is right, and this "the" is right, and every other "the" in your output is also right — that seems unfair. So you're only allowed to use each item in the reference once when matching n-grams of a given order, but you are allowed to use it multiple times for different orders of n-gram: the same word can count in a unigram, a bigram, a trigram, and a 4-gram.
The other idea is that although you're measuring the precision of n-grams in the machine translation, you wouldn't want people to be able to cheat by putting almost nothing into the machine translation. You could try to game it, no matter what the source document is: if the target language is English, you could just say, "My translation is 'the', because I'm pretty sure 'the' will be in the reference translation somewhere — maybe I get 0.3 on unigram precision; that's not great, but I'll get something, and I'm done." You wouldn't want that, so you're penalized by something called the brevity penalty if your translation is shorter than the reference translation. So the BLEU metric is a geometric average of n-gram precisions up to some n — normally up to four is how it's done — and it's a weighted geometric average, where you put weights on the different n-gram orders. For the assignment, we're only using unigrams and bigrams, so you could say we're putting a weight of zero on the trigrams and 4-grams. That's basically what we're doing.
I'll just mention a couple of other things. You might think this is kind of random, and so people have used the idea that rather than having just one reference translation, you could have multiple reference translations, because that allows for variation in good ways of translating things — in language there are always lots of good ways to translate one sentence. People have done that quite a bit, but people have also decided that even with one reference translation, provided it's independent, then on a statistical basis you're still more likely to match it if your translation is good, so it's probably okay. When BLEU was originally introduced, it seemed marvelous, and people drew graphs showing how closely BLEU scores correlated with human judgments of translation quality. However, like a lot of things in life, many measures are great provided people aren't directly trying to optimize them, and what's happened since then is that everybody has been trying to optimize BLEU scores. The result is that BLEU scores have gone up massively, but the correlation between BLEU scores and human judgments of translation quality has gone down massively. So we're in this current state where the machines' BLEU scores are pretty near the scores of human translations — according to BLEU, we're producing almost human-quality machine translation — but if you look at the real quality of the translations, they're still well behind human beings, because, you could say, the metric has been gamed. I hope those things help give you more sense for assignment four.
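To pin the definition down, here's a toy BLEU with clipped n-gram precision and the brevity penalty, using only unigrams and bigrams as in the assignment. This is a sketch of the idea, not a reference implementation; real BLEU handles multiple references, corpus-level statistics, and smoothing:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2, weights=(0.5, 0.5)):
    """Toy sentence-level BLEU: weighted geometric mean of clipped
    n-gram precisions, times a brevity penalty."""
    log_prec = 0.0
    for n, w in zip(range(1, max_n + 1), weights):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        # "Clipping": each reference n-gram can only be matched once.
        matched = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = max(sum(cand.values()), 1)
        log_prec += w * math.log(max(matched, 1e-9) / total)
    # Brevity penalty: punish translations shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(log_prec)
```

For instance, `bleu("the cat sat".split(), "the cat sat down".split())` scores high but is docked by the brevity penalty, while the degenerate translation `["the"]` scores near zero.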
Okay, so now, for the last twelve minutes or so, I want to return to final projects and say a little more about them. There are many, many different ways you can do a final project, but let me go through the steps; for a simple, straightforward project, these are the steps you want to go through. You choose some task — say, summarizing text, producing a shorter version of a text. You work out some dataset you can use. Summarization is an example of a task for which there are academic datasets that other people have used, so you could just use one of those and you're done; or you could think, "Oh no, I'm much too creative for that — I'm going to come up with my own dataset," find some online source (summaries are the kind of thing you can find online), and produce your own dataset. I want to say a bit, just after this, about separating off datasets for training and test data, so I'll delay that, but it's important. Then you want to work out a way to evaluate your system, including an automatic evaluation. Normally, for summarization, people use a slightly different metric called ROUGE — it's related to BLEU, hence its name, and it's the same story: it sort of works, but human evaluation is much better. So you need to work out some metric you can use for the project.
The next thing you should do is establish a baseline. If it's a well-worked-on problem, there might already be one, but it's not bad to calculate one for yourself anyway, and in particular you should first build a very simple model and see how well it works. For human-language material, that often means bag-of-words models — whether a simple classifier over words or a neural bag of words that averages word vectors; there's a minimal sketch of that below. It's useful to try that on the task and see how it works: what kinds of things it already gets right, what kinds it gets wrong. One possibility is that a very simple model already does great on your task; if that's the case, you have too easy a task, and you probably need to find something more challenging to work on. After that, you think about what could be a good neural network model that might do well, implement it, test it, and see what kinds of errors it makes. If you've gotten that far, you're in the right space for a class project — but the hope is that you can do more than that. After you've seen the errors from the first version, you can think about how to make it better and come up with a better model. So I'd encourage everyone: you really do want to look at the data. You don't want to just have things in files, run them, and say, "Okay, 0.71. Let me make some random change. 0.70 — oh, that's not a good one," and repeat. You want to be looking at your dataset any way you can. It's good to visualize the dataset to understand what's important in it that you might be able to take advantage of; you want to look at what kinds of errors are being made, because that might give you ideas for putting more into the model; and you might want to graph the effect of hyper-parameters so you can understand them better. The hope is that you'll try out some other kinds of models and make things better. One of the goals here is to have a well-set-up experimental pipeline so you can turn experiments around easily, because then you're more likely to be able to try several things in the time available.
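The neural bag-of-words baseline mentioned above, in a minimal sketch; the word-vector source (for example, pretrained GloVe vectors loaded into a dict) is an assumption:

```python
import numpy as np

def average_word_vectors(tokens, vectors, dim=100):
    """Neural bag-of-words baseline: average the word vectors of a text.

    `vectors` is any word -> np.ndarray mapping; words without a vector
    are skipped. Feed the result to a logistic regression or small MLP
    to get the simple baseline discussed above.
    """
    vecs = [vectors[t] for t in tokens if t in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
```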
Okay, a couple of other things I wanted to mention. One is different sets of data. It's really, really important for all the work we do that we have separate sets of data: training data, dev test data, and test data at least, and sometimes it's useful to have even more splits available. Many public datasets are already split into subsets like this, but some aren't; some might have only a training set and a test set. What you don't want to do is think, "Oh, there's only a training set and a test set; therefore I'll just run every time on the test set." That's a really invalid way to go about your research. If there's no dev set available, or you need to do more tuning and need separate tuning data, you have to make it for yourself by splitting off some of the training data, not using it for the basic training, and using it for tuning, as dev data.
To go on about that some more: the basic issue is fitting and overfitting to particular datasets. When we train a model on some training data, the error rate goes down, and over time we gradually overfit to the training data, because our neural network picks up on facts about the particular training items and starts to just learn them. In the old days, overfitting to the training data was seen as evil. In modern neural network thinking, we don't regard overfitting the training data as evil, because all neural nets that are any good overfit the training data, and we'd be sad if they didn't — I'll come back to that in a moment. But nevertheless, they're overfitting like crazy, and what we want to build is something that generalizes well. So we have to have some separate data — our validation data — and look at what performance looks like on it. Commonly we find that training up to some point improves performance on the separate validation data, and then we start overfitting the training data in a way that makes validation performance get worse. At that point, further training on the training data isn't useful, because we're starting to build a model that generalizes worse on other data. The whole point is that we can only do this experiment if our validation data is separate from our training data; if it's the same data, or overlapping data, we can't draw this graph, and therefore we can't do valid experiments. Now, you might think, "Maybe I can just use the test set for this." But that's also invalid, and the reason is that as you do experiments, you also slowly start overfitting to your development data. The standard practice is: you do a run and get a score on the development data. You do a second run, do worse on the development data, and throw that second model away. You do a third experiment, do better on the development data, keep that model, and repeat fifty times. Some of the models you keep are genuinely better, because you worked out something good to do; but it turns out that some of them only happened to score better on the development data because you got lucky.
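A minimal sketch of carving a dev set out of the training data, plus an early-stopping loop on the dev score. `train_epoch` and `evaluate` are stand-ins for your own training and evaluation functions, not a real API:

```python
import random

def train_dev_split(examples, dev_fraction=0.1, seed=0):
    """Split off dev data so the official test set stays untouched."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * dev_fraction)
    return shuffled[cut:], shuffled[:cut]      # (train, dev)

def fit_with_early_stopping(model, train_data, dev_data,
                            train_epoch, evaluate,
                            max_epochs=50, patience=3):
    """Stop once dev loss hasn't improved for `patience` epochs --
    i.e., once the validation curve starts bending back up."""
    best, bad_epochs = float("inf"), 0
    for _ in range(max_epochs):
        train_epoch(model, train_data)         # one pass over the training set
        dev_loss = evaluate(model, dev_data)   # measures generalization, not fit
        if dev_loss < best:
            best, bad_epochs = dev_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return model
```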
And if you keep repeating that process 60 or 100 times, you're gradually overfitting on your development data too, and you get unrealistically good dev scores. That means two things. If you want to be rigorous and do a huge amount of hyper-parameter exploration, it can be good to have a second development set, so that you have one you haven't overfit to as much. And if you want valid scores for your actual performance on independent data, it's vital to have separate test data that you're not using at all in this process. The ideal state is that you never touch your real test data until you've finished training your model; then you run your final model once on the test data, you write up your paper, and those are your results. Now, I'll be honest and say the world usually isn't quite that perfect, because after you've done that, you go to sleep and wake up thinking, "I've got a fantastic idea for making my model better." You run off and implement it, it works great on the dev data, and then you run on the test data again and the numbers go up. Everybody does that, and in moderation it's okay — occasionally re-running on the test data is not so bad — but you really need to be aware of the slippery slope. If you start falling into "I've got a new model, let me try that one on the test data; I've got another new model, let me try this one on the test data," then you're just overfitting to the test data and getting an unrealistically high score. That's precisely why a lot of competitions, like Kaggle competitions, have a secret test dataset you can't run on yourself, so that they can do a genuine, independent test on the actual test data.
Okay, a couple more minutes. So: getting your neural network to train. My two messages are, first of all, that you should start with a positive attitude. Neural networks want to learn. If they're not learning, you're doing something to stop them from learning; you should stop doing that, and they will learn, because they want to learn — they're just like little children. But the follow-up is the grim reality that there are tons of things you can do that will cause your neural network not to learn very well, or at all, and this is the frustrating part of the whole field, because it's not like a compile error: these problems can be hard to find and fix. It's really standard to spend more time finding and fixing why a model doesn't work well than you spent writing the code for the model in the first place. So remember to budget for that in your final project: it just won't work if you finish the code a day or two before the deadline. You need to work out what those problems are. That can be hard, but experience, experimental care, and rules of thumb help. There are just lots of things that matter. Your learning rates are important: if your learning rate is way too high, things won't learn; if it's way too low, they'll learn very slowly and badly. Initialization makes a difference; having good initialization often determines how well a neural network learns.
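For instance, a common PyTorch recipe along those lines; the particular sizes and values are illustrative starting points of mine, not recommendations from the lecture:

```python
import torch.nn as nn
import torch.optim as optim

def init_weights(model: nn.Module):
    """Xavier/Glorot initialization for weight matrices, zeros for
    biases -- one standard recipe among several reasonable ones."""
    for param in model.parameters():
        if param.dim() >= 2:
            nn.init.xavier_uniform_(param)
        else:
            nn.init.zeros_(param)

model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 2))
init_weights(model)

# The learning rate is usually the first knob to sweep (on a log scale).
optimizer = optim.Adam(model.parameters(), lr=1e-3)
```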
I have a separate slide here, which I probably don't have time to go through fully, on what people normally consider good ways to get sequence models working. But I'll say this one last thing. The strategy you really want to take is to work incrementally and build up slowly. It just doesn't work to think, "I've got the mother of all models," build an enormously complex thing, run it on the data, and watch it crash and burn — you have no idea what to do at that point. The only good way is to build up slowly: start with a very simple model, get it to work, then add your bells and whistles, extra layers and so on, getting each one to work or abandoning it, so that you proceed from one working model to another as much as possible. Another way to start small and build up is with data. The easiest way to see bugs and problems in your model is with the smallest possible amount of data, so start with a dataset of eight items. Sometimes it's even best if those eight items are artificial data you designed yourself, because then you can often more easily see problems and what's going wrong. Train on that; because it's only eight items, training takes seconds, which is really useful for iterating quickly. And if your model can't get 100 percent accuracy training and testing on those eight examples, then either the model is woefully underpowered or the model is broken, and you've got clear things to do right there. When you go to a bigger model, the standard practice with modern neural networks is that you want models that can overfit massively on the training set. In general, your model should still be getting close to 100 percent accuracy on the training set after you've trained it for a long time, because powerful neural network models are really good at overfitting to and memorizing data. If that's not the case, maybe you want a bigger model: higher hidden dimensions, an extra layer, or something like that. You shouldn't be scared of overfitting the training data. But once you've proved you can do that, you then want a model that also generalizes well, and normally you address that by regularizing the model. There are different ways to regularize, but as we discussed in the assignment, dropout — using generous dropout — is one very common and effective strategy. What you want is to regularize your model enough that the validation curve no longer bends back up; instead, your validation performance levels out without ramping back up again, and that's the sign of a well-regularized model. Okay, I'll stop there, and we'll come back to the question-answering project on Thursday.
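As a closing sketch of that tiny-dataset sanity check: eight made-up examples, dropout off so the model can memorize. All sizes and data here are invented for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(8, 10)                  # eight tiny artificial examples
y = torch.randint(0, 2, (8,))           # made-up binary labels

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                      nn.Dropout(p=0.0),  # dropout OFF while sanity-checking
                      nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):                 # seconds of training on 8 items
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(dim=1) == y).float().mean()
print(f"train accuracy: {acc:.2f}")     # should reach 1.00; if not, debug
```

Once this memorizes perfectly and you move to real data, turning the dropout probability up (say, to 0.3 or 0.5) is the kind of generous regularization described above.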
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_18_UCLA.txt
Okay, I've got a study sheet for the final exam, similar to the study sheet for the midterm. Let me just emphasize: the final exam is cumulative. It covers what we've done since the midterm, but also the material that was on the midterm. The study sheet is not cumulative — it's just the material since the midterm — and the study sheet for the midterm lets you know what the key terms are for that earlier material. Actually, can I grab one of those before you pass them on? Thank you. Okay, this is just hot off the presses: I was really pestering the room-assignment guy to give me the rooms for the final exam so that I could put them on the study sheet, and he couldn't do it and couldn't do it, and then as soon as I xeroxed all those copies he gave me the rooms. So it's not on your study sheet, but it is on the board. The final exam is one week from today, from three to six p.m. — late afternoon. Adam's and Emily's sections will take the exam in this room; June's and Flory's sections will take it in Perloff 1102. Off the top of my head I'm not thinking of where Perloff is, but you have a week to figure that out — is it one of those buildings by the law school? It's real close, just right across the street from here. Okay, very good. I'll be splitting my time between the two rooms. You will not need a blue book for the final exam: in contrast to the midterm, I'm going to give you space on the sheets to write, so don't bring a blue book. Same story with calculators as on the midterm: you can use one if you want, but you don't really need one; there's no need to turn fractions into decimals or anything like that.
All right. Today we're going to finish off our discussion of grim trigger. Everything you're responsible for on the final exam will be covered today; the main thing we'll do on Thursday is review for the final. So one thing you might want to do before Thursday is take a look at that study sheet and some of the problems, and go back over your notes. Thursday's agenda will basically be up to you: you can ask questions and I'll do my best to answer them. If you already know there's something you'd like more help on, something you'd like me to go over again, email me beforehand and I can probably prepare a more elaborate, more organized answer than I can necessarily give off the top of my head — sometimes I do okay with that, sometimes not. So give me a heads-up about things you'd like covered on Thursday; that would be good for everybody.
So the first thing I want to do today is talk about repeated prisoner's dilemmas where the payoffs aren't symmetric — where the two players aren't completely identical in terms of their payoffs. You've got a problem like this on your homework; that's the reason I extended the homework deadline to Thursday, because I hadn't yet done an example. I'm going to make a small change to the prisoner's dilemma we were working with last week, so that the players aren't completely the same and their payoffs are not completely symmetric. We'll leave player A the same as last week: the same choices, slack off or work hard, and A's payoffs are intended to be exactly what they were in last week's game — a reward payoff of 3 if A works hard and B works hard; a punishment payoff of 0 if both A and B slack off; a temptation payoff of 5 that A gets when A slacks off and B works hard;
and a sucker payoff of minus 2, which is what A gets when she works hard and B slacks off. So I'm going to leave the payoffs from working hard the same as last week, but I'm going to make B a little bit lazier than A: B's payoffs from slacking off are a little higher than A's were. B gets 1 instead of 0 for the punishment payoff, and 6 instead of 5 for the temptation payoff. Just to be clear about what I mean by asymmetric payoffs: now, when B is in a situation identical to A's — say, B slacking off while A works hard — B's payoff is different from the payoff A gets when A slacks off and B works hard. Similarly, B's payoff when both slack off is a little higher than A's. So they're not completely the same, and what this means is that we'll have to look at grim trigger separately for each player. Last week, when the two players had exactly the same payoffs, we found that as long as both of them had a discount factor of at least 0.4, grim trigger would be an equilibrium. By that we mean — just repeating — that if the other one is playing grim trigger, neither has any regrets from playing grim trigger; neither does better by deviating to an alternative strategy. We only needed to do the analysis once when their payoffs were the same, but now the payoffs are different, so we'll have to analyze grim trigger first from the point of view of player A and second from the point of view of player B. What that means is that the minimum discount factor for A will be different from the minimum discount factor for B. That's the bottom line: when the payoffs are asymmetric, each player has a different condition for grim trigger to be a Nash equilibrium, and by "condition" I mean a different minimum discount factor.
Well, this isn't too bad, because I didn't change A's payoffs from last week's game, so we already know A's condition. Asking when grim trigger is a Nash equilibrium for player A: what we need is a discount factor of at least two fifths, and because each player now has a different discount factor, I'll put a little subscript A on it: δ_A ≥ 2/5. This tells us how impatient A could possibly be — as long as a dollar tomorrow is worth forty cents or more to A today, grim trigger will be an equilibrium for player A. But we have to look at player B separately. Another issue I want to go over in the context of this example is which part of the Nash equilibrium argument you have to derive each time. Last week I looked at whether A could gain by deviating to three different types of strategies: I compared the payoffs from grim trigger to the payoffs from defecting unconditionally; to the payoffs from cooperating for a while and then defecting; and to the payoffs from defecting and then cooperating later. And what I showed you last week is that the most tempting deviation — the alternative strategy that will pull A away from grim trigger — is unconditional defection.
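Collecting the payoffs just described into one matrix (my layout, not the board drawing; A chooses the row, B the column, and each cell lists A's payoff, then B's):

```
                 B works hard     B slacks off
A works hard        3,  3           -2,  6
A slacks off        5, -2            0,  1
```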
I showed that if A doesn't think unconditional defection is a better choice than grim trigger, then none of the other possible deviations will be a better choice either. You can make use of that on your homework and on the exam: you don't need to go through deviation number two and deviation number three. I gave you that information and it's yours to use. What you do need to do for each problem is check when grim trigger is at least as good as unconditional defection, because the condition that makes that true varies from game to game. So, just to repeat: the part of last week's argument that applies to grim trigger in any repeated prisoner's dilemma is the fact that the most tempting deviation is unconditional defection. That's always the thing to worry about, and you don't need to prove it — you can use that fact. What you then need to do is apply it: ask, for this specific game, when does unconditional defection look like a better alternative? That's what we're going to do right now, and what I'm about to do on the board is the part of the analysis you do have to go through for every repeated prisoner's dilemma you analyze.
What we need to ask is: when is the present value to B of grim trigger at least as good as the present value to B of defecting always — always playing the dominant strategy in the stage game? This inequality gives different conditions depending on the payoffs, so let's get an expression for each side. First, the present value to B of playing grim trigger when B's partner is playing it. What does B get in the first round? B is playing grim trigger and A is playing grim trigger — that's the Nash equilibrium setup — so B cooperates and gets 3. What about the next round? 3 again: both played grim trigger, both cooperated, and grim trigger says keep cooperating as long as there has been no defection. So B gets 3 next period, discounted once; 3 two periods from now, discounted twice; and on and on. As we did last week, we can reduce this infinite geometric sum with the formula and write it as 3/(1 − δ). The formula for the sum of an infinite geometric series works when the discount factor is strictly between 0 and 1, and discount factors are strictly between 0 and 1, so that's all good. Now, what's the present value to B of defecting? What does B get in the first period? 6. And in the next period? Not zero — remember, I changed both of those payoffs — B gets 1, and 1 again every period after. So we have to do a little more work to put this in the form of a geometric sum; writing out a few terms, it's 6 + δ + δ² + δ³ + δ⁴ + ⋯. Remember what I said last week: sometimes you'll see geometric sums that don't completely fit the formula, and this side doesn't completely fit the formula for the geometric sum. There are two issues. One is the 6 out front; that's not a problem — we can carry the 6 around by itself and focus on the rest. But there's still an issue: what we're adding up doesn't start with 1 — it doesn't start with δ to the
zero power — it starts with δ. So we do a little algebra on this infinite sum to get it to match our formula exactly. To remind you, the formula is for the sum, as i goes from 0 to infinity, of δ^i; writing that out, the first term is δ⁰ — anything to the zero power is 1 — and then δ, plus δ², plus δ³, and so on. There are actually several ways to manipulate our expression so that it exactly fits the formula; the one most appealing to me is to factor out a δ, so the sum becomes 6 + δ(1 + δ + δ² + ⋯). Another thing we could have done is break the 6 into 5 plus 1 and proceed that way; you get the same thing either way. Now what's inside the parentheses fits the formula exactly, which is what we were looking for. So the present value to B of unconditional defection is 6 + δ/(1 − δ). Slightly more work — very slightly. Everybody see what I did? Everybody convinced I did it right?
All right, back to the question: when is the one side greater than the other? Grim trigger is a Nash equilibrium when the present value of grim trigger, 3/(1 − δ), is greater than or equal to the present value of defecting, 6 + δ/(1 − δ). That condition is, in one sense, already an answer to the question — grim trigger is an equilibrium exactly when it holds — but it's not the most helpful form of the answer. The nicest way to think about when we can have cooperation in the repeated prisoner's dilemma is to think about how patient the players have to be — how much weight they must put on the future. To get there, we take this condition and solve it for δ, which gives us our minimum discount factor, just as we did last week to get the two fifths. Working toward that, I multiply both sides by 1 − δ to get rid of the fractions: 3 ≥ 6 − 6δ + δ (just multiplying the 6 through by 1 − δ), which gives 3 ≥ 6 − 5δ. Bringing terms across: 5δ ≥ 3, so δ ≥ 3/5. Anything you want to add to that? There is something I want to add: we should put a subscript on this δ to remind ourselves whose condition it is. This is B's condition, so let's write δ_B ≥ 3/5 — in fact, it wouldn't have been bad to use the subscript B all along, since this whole analysis applies to player B. Last week we were looking for a single discount factor because the two players were basically identical: they had the same payoffs in the repeated prisoner's dilemma, so they had the same condition. Here they have different payoffs, so there's a different condition that applies to B's discount factor than to A's.
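Collecting the algebra just worked through on the board into one place (standard notation; δ_B is B's discount factor):

```latex
\[
\underbrace{\frac{3}{1-\delta_B}}_{\text{grim trigger}}
\;\ge\;
\underbrace{6 + \delta_B + \delta_B^2 + \cdots}_{\text{defect always}}
\;=\; 6 + \frac{\delta_B}{1-\delta_B}
\]
\[
3 \;\ge\; 6(1-\delta_B) + \delta_B \;=\; 6 - 5\delta_B
\qquad\Longrightarrow\qquad
\delta_B \;\ge\; \tfrac{3}{5}.
\]
```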
So here we have B's minimum discount factor. If both of these players had a discount factor of 0.5, grim trigger would not be an equilibrium. A would be willing to play grim trigger — with a discount factor of 0.5, above her threshold of 0.4, A would play grim trigger if B was doing so — but B wouldn't. B would be too impatient: B's discount factor of 0.5 would be less than three fifths, and what that says is that B would rather just defect. If you think about it, this makes sense. What did we do? We made B lazier: we made B's payoffs from slacking off higher. What that does is make B's short-run temptation to defect higher, and we didn't increase B's payoff from cooperation, so the future rewards are smaller for B compared to what B gets from defecting right now, in the present. That's the reason B has to put even more value on the future than A does. Monica, you had a question — this part right here? Yes, I was doing a lot of steps at once there: multiplying through by 1 − δ on both sides, canceling, and also distributing through the parentheses.
Okay, so the take-home point: when you have players with different payoffs, it's still a prisoner's dilemma even though the payoffs are not perfectly symmetric. It's still the case that both players have a dominant strategy to defect — it's dominant for A because 0 is greater than −2 and 5 is greater than 3, and it's dominant for B because 1 is greater than −2 and 6 is greater than 3 — and when they both play the dominant strategy, the outcome we arrive at is Pareto-inferior to what they would have gotten by playing the dominated strategy. That's what makes it a prisoner's dilemma. You don't need exactly the same payoffs; when you have asymmetric payoffs, you need a condition for each player, and the conditions will be different. How many of you did this part of the problem without me showing you? Just curiosity. A few tentative small hands — not many. Most of you just didn't try; that's fair, I know you're busy this time of the quarter.
[A student asks why different payoffs lead to different minimum discount factors.] Yes — so you're looking for more of the logic here; it's a big-picture question in some sense: why does making the payoffs different make the minimum discount factors different? These are the numbers that are different for player B than they were for player A, so let me try to answer a couple of different ways. One way to see it is to just look at the math. When we look for the answer to this question, we're always looking at this kind of inequality: when is the discounted present value of the strategy in question at least as good as that of the alternative strategy? We write down the inequality and then look to the game to figure out the present value of grim trigger and the present value of defecting, and it's at that step — when we start putting numbers and variables on the present values — that we bring in numbers from the payoffs. So one way to see the math of it is that since the details
of this inequality come from the payoffs, that's one reason we have to do the analysis separately for each player. Just from cranking through the algebra, you can see that if you change a number in the game, that number goes into the condition and changes the critical value of δ. The other way to think about it is from the point of view of logic and intuition. If you change the stakes of the game — and I'm really focusing here on changing the temptation payoff, although all of them matter — you increase B's temptation to defect: you increase what B gets by double-crossing A the first time around, when B slacks off and A works hard. So all of a sudden B has more to gain from slacking off now, even given that she'll be punished for it in the future, than A does. What that means is that for B not to slack off now — for that temptation to be offset — the future has to loom even larger. B has to care more about the future to be willing to forgo the greater payoff.
With that question in mind, I want to go to a more general grim trigger example, one with more variables in it, to make a few other general points about grim trigger and about cooperation in the repeated prisoner's dilemma. The idea is to take our focus off specific numbers in a game and put it on the general logic — so if you think of questions along the way, speak up. Something I've been saying throughout the course is that numbers in the payoffs are nice for learning game theory — nice and concrete, giving answers we can talk about — but they're a complete fiction. We can't really assign numbers to people's utility in most situations; what we often can do is rank outcomes. And whenever I've brought this up, I've said that when social scientists use game theory, we generally don't put numbers on all of the payoffs: we use variables and make assumptions about the variables. The prisoner's dilemma is used so often that there's even a conventional set of variables you'll see over and over again, and if you find yourself writing a paper for a class and wanting to use this kind of logic, it's a nice set of variables to use — the general form of the prisoner's dilemma. So I'm also going to use the general language: the choices are to defect and to cooperate. We'll have two variables corresponding to the temptation and the sucker payoffs, but we'll use numbers for defect-defect and cooperate-cooperate — and these are nice numbers. What's the difference between 1 and 0? It's kind of arbitrary; we don't know what units these payoffs are really measured in, but we're saying: let's define the difference between 1 and 0 as the difference between what we both get if we both follow our short-term incentive versus what we could have if we both cooperate. So 1 and 0 are nice simple numbers to work with; let's put them in the logical places in the game, and leave the rest of the details — the details that would distinguish one particular scenario from another — as variables. In particular, let a be a number that is strictly
greater than 1; this will be the temptation payoff. So a is B's temptation payoff when B defects and A cooperates, and it's A's temptation payoff when A defects and B cooperates. And let's put in −b as the sucker payoff, where b is a number strictly greater than 0, so −b is the sucker payoff. These are variables; if this were a game where asymmetries were important, I could index them by player — A's temptation payoff, B's temptation payoff, A's sucker payoff, B's sucker payoff — but I'm not going to do that for the rest of the analysis; I'm just noting that that's what you would do. We'll stick with the idea of symmetric payoffs here, which, in this last week, is a nice way of touching base with a lot of the issues we raised in the first week. You might be wondering: I just showed you how to set up a game that lets the two players have different temptation and sucker payoffs, and then I said I'm not going to do that — why? Why do people make that kind of decision? It depends completely on the context. If I want to emphasize a point about one player being different from the other — if that's the context in which I'm making a game-theoretic argument — then yes, I'll make sure my variables are different. If that's not the main point, I'll keep everything else as simple as possible. When you use game theory, when you do any kind of mathematical modeling, that's the rhetorical strategy you want: think about what your main point is, let that be the focus of the game, and keep everything else as simple as possible. Right here we're not focusing on differences between the players, just on the general structure; that's why I'm assuming the payoffs are the same.
These two conditions are important: if either of them fails, the game is not a prisoner's dilemma. If a is not strictly greater than 1 — it has to be a strict inequality, not greater-than-or-equal — then we don't have a dominant strategy to defect; and similarly, if b is not strictly greater than 0, we don't have a dominant strategy. Both conditions are what make the game a prisoner's dilemma: it's only a prisoner's dilemma if, in this setup, a is strictly greater than 1 and b is strictly greater than 0. Because we're back in the world of symmetric payoffs, the minimum discount factor for grim trigger to be an equilibrium in this game will be the same for both players, so I can do the analysis once. So I can ask my question: when is grim trigger a Nash equilibrium? As always, the answer is: when the present value of grim trigger is greater than or equal to the present value of unconditional defection — exactly the same as before. So, going right along: playing grim trigger, what do I get? I get 1 in the first round, 1 in the second round, 1 in the third, all the way out to infinity. It's very nice with this payoff of 1: the infinite stream just sums to 1/(1 − δ). What do I get if I
defect? What do I get right now, in the first round? a — not zero. You might say, "If we both defect we get 0," but let's go back and emphasize that both of these present-value calculations assume the partner plays grim trigger. What that means is that in the first round, regardless of what the player whose strategy we're examining does, the partner will cooperate, because the partner is playing grim trigger. I can't emphasize enough that this is the Nash equilibrium analysis: we compare one player's payoffs holding the other player's strategy constant. So regardless of what this player does, the other player cooperates in the first round. But then, in the second round, what does the opponent do? Now the opponent defects. The opponent is still playing grim trigger — still playing the equilibrium strategy — but that strategy says: if anyone defected last time, I defect, and now all our future payoffs will be zero. I'm being totally redundant writing the zeros in, just to emphasize that all future payoffs are 0 in this formulation; again, you see how nice and simple this version of the repeated prisoner's dilemma is. Putting the two sides together: grim trigger is a Nash equilibrium when 1/(1 − δ) is at least as big as a — when what I get now and over the whole indefinite future string of interactions is at least as good as what I get from defecting now and then taking my punishment in the future.
There's my condition, and it's actually interesting to look at it in this form. Here we have a by itself, so one way to interpret the condition is that it gives us the maximum value of a: you tell me how impatient you are, and I'll tell you how big a temptation you can face in the short run and still have grim trigger be an equilibrium. Once the temptation — remember, that's what a is: the temptation payoff, what I get when I defect and you cooperate — gets above this maximum value, it doesn't matter that I care about the shadow of the future; I don't care enough, and the temptation is just too strong. As long as the temptation is below this value, I'm okay. That's one way to think about the grim trigger condition. The same condition, if we do what we did before, can also give us a minimum discount factor. Multiplying both sides by 1 − δ, I get 1 ≥ a(1 − δ) = a − δa, so δa ≥ a − 1, and δ ≥ (a − 1)/a. It's the same condition: these are just alternative ways of writing the basic Nash equilibrium inequality. One way gives us the maximum value of the temptation a; the other gives us the minimum value of δ. So there are two ways to understand the conditions that support grim trigger as a way of getting cooperation in a repeated prisoner's dilemma. One way: you tell me how impatient the people are, and I'll tell you how big their temptation can possibly be before grim trigger no longer works — the maximum temptation changes depending on how impatient the people are. Or you can switch the roles: you tell me how big the temptation is in the short run, and I'll tell you how patient they have to be. These are just different ways of looking at the same scenario.
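A quick check of that threshold in code (the function name is mine); note how slowly the bound approaches 1 as the temptation grows, which is exactly the point made next:

```python
def min_discount_factor(a):
    """Minimum discount factor for grim trigger to be a Nash equilibrium
    in the general prisoner's dilemma above: temptation payoff a > 1,
    mutual cooperation worth 1 per round, mutual defection worth 0."""
    assert a > 1, "the game is only a prisoner's dilemma if a > 1"
    return (a - 1) / a

print(min_discount_factor(2))      # 0.5
print(min_discount_factor(3000))   # 0.99966... -- very close to 1, but never 1
```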
But both forms have this balance aspect: the weight you put on the future — how much you value those future high payoffs from cooperation — has to at least balance the temptation to defect in the short run. If the temptation is too high, grim trigger doesn't work; if the shadow of the future is too small, grim trigger doesn't work. I also want to say something about this condition, because it actually gives us some very good news. Look at it for any value of a greater than 1 — and remember, we're only interested in values of a strictly greater than 1, because otherwise the game isn't a prisoner's dilemma and this whole story doesn't make sense. As long as a is greater than 1, (a − 1)/a is always a number strictly between 0 and 1, no matter how big a gets. Say a equals 3,000: then δ has to be at least 2,999/3,000. That's a number extremely close to 1, but it's still not 1. For any temptation you face, there is some degree of patience that will allow people to cooperate in the prisoner's dilemma; you can never have a temptation so high that nobody could ever cooperate. It might be an unrealistic level of patience — that's a very high discount factor, and it is unrealistic — but it's not truly impossible. What this says is that you can never completely rule out grim trigger as a possible route to cooperation, and that is really, really good news. No matter how high the temptation payoff, no matter how perverse the incentives in a particular repeated prisoner's dilemma, if people are just patient enough they can cooperate: grim trigger can be an equilibrium, and we can get to the happy outcome where we cooperate and both do better over and over, instead of both ending up with a cruddy payoff because we followed our short-term incentives.
So that's a point that, in one sense, I think is not obvious. When I first wrote this equation up, maybe some of you immediately thought, "Oh, that means we can always have grim trigger," but I bet a lot of you didn't go there — it certainly wasn't obvious to me until somebody showed me. On the other hand, it fits with a lot of common-sense things we know about patience. All of you are here at UCLA; you've done well in your lives; your parents did a good job of raising you, and I'm guessing in most cases they deserve some credit. I bet one of the things many of you got from your parents from a very early age is encouragement — admonishment — to be patient: don't give in to impulses; think about the future; think about long-term consequences; be willing to eat your string beans first so you can have dessert later. That kind of logic is an aspect of child-rearing all over the world, and I think this is the reason: the better human beings are at deferring gratification — at doing something a little unpleasant in the short run, at not giving in to a short-run temptation because of future rewards — the better off they are, and the better off their societies are. Societies are better when people are more patient, when people can forgo the temptation to follow their dominant strategies in the short run because of future benefits. So that aspect of folk wisdom — a penny saved is a penny earned; wait for dessert — reflects the idea that patience is not only good for the
It is good for the individual, but it's also good for society. All right — I think I have one more really important thing to say, and then a few more interpretive observations to make. Right here, you can tell I've got the wind in my sails: I'm giving you really good news. This is so wonderful — if people are just patient enough, if the shadow of the future looms large enough, we can solve these prisoner's dilemmas; we can have an equilibrium with cooperation. That is all really good. Now I have to go back and give you the bad news: it's not the only equilibrium. Grim trigger can be an equilibrium, and it's a great equilibrium. When we're playing grim trigger, even though grim trigger has that very grim punishment phase, we're not going to see it — all we're going to see is good behavior, people overcoming their short-term incentive to cheat because of the shadow of the future. So that's good. The problem is that there are lots more equilibria, and here's the worst one. We've been thinking about unconditional defection as an alternative to grim trigger — as something players could do differently from grim trigger. We can also think about it as an equilibrium strategy in its own right. So let's think about that: here's another Nash equilibrium — both players defect, always. That's a strategy, right? It's a strategy in the repeated game: it tells me what to do no matter what, and it's very simple. And if my partner is playing it, do I have regrets from playing it? No. If my partner is such a bad guy that no matter what I do he's going to defect, I'm not going to have regrets from being a bad guy to him. If my partner is always following through on the short-term incentive, if my partner is going to defect no matter what, then my choice is between zero and the sucker payoff. I'll take the zero. So yes, there's always some discount factor for which grim trigger is an equilibrium, for which we can get cooperation in the repeated prisoner's dilemma. But over here, no matter what the discount factor is, this is always an equilibrium. Even if we're both super-duper patient and put a lot of weight on the shadow of the future, if what I think you're going to do is defect all the time, then even if I value the future a lot, I'm going to defect all the time — and vice versa. So both players always playing the stage-game dominant strategy is also a Nash equilibrium. That's the bad news. In any situation that fits a prisoner's dilemma, it's always possible to have cooperation in equilibrium if the players are patient enough, and it is always also possible to have nothing but defection, nothing but bad news, regardless of how patient the players are. So: some very good news on the one hand, some very sobering news on the other.
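A quick sketch of the bad-news side: against a partner who defects no matter what, defecting always is a best reply at any discount factor. The comparison below assumes a sucker payoff of −1 for cooperating against a defector; the exact value is my assumption, not the lecture's — all that matters is that it's negative:

```python
# Against an always-defecting partner, cooperating in any round earns
# the (negative) sucker payoff and defecting earns 0, so "always
# defect" is optimal no matter how patient the player is.

def discounted_sum(stage_payoffs, delta):
    """Present value of a (finite prefix of a) payoff stream."""
    return sum(p * delta**t for t, p in enumerate(stage_payoffs))

for delta in (0.1, 0.9, 0.99):
    coop_first = discounted_sum([-1.0] + [0.0] * 50, delta)  # cooperate once, then defect
    defect_all = discounted_sum([0.0] * 51, delta)           # defect in every round
    print(delta, coop_first, defect_all, defect_all >= coop_first)
# For every delta, the always-defect stream is weakly better: True each time.
```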
Now, when we first talked about multiple equilibria in single-shot games, we talked about it in the context of the prisoner's dilemma versus the assurance game. The assurance game was the game that looks like the prisoner's dilemma except the temptation payoff was less than one; in the assurance game, remember, if your partner is cooperating, you want to cooperate too. We talked about this with the room-cleaning example, and I talked about the roommate who had a conscience and felt bad if her roommate cleaned without her. And then we had this discussion about the fact that there was a good equilibrium, where they got the reward payoffs, and a bad equilibrium, where they got the punishment payoffs. One question that came up was: doesn't the fact that one equilibrium is better for both players by itself make that equilibrium focal? I gave kind of a wishy-washy answer to it — it was wishy-washy, but I think it was true: sometimes yes, sometimes no. Sometimes the pure fact that grim trigger, or any other pattern of cooperation, is better for both players makes that outcome more likely — both players can say, yes, we're both better off, why don't we go there — but not always. In particular, if there's been a history of bad behavior and both players expect the other one to defect, it's going to be a self-reinforcing pattern. So a little bit of yin and yang here at the end of Poli Sci 30: good news and bad news about prisoner's dilemmas. All right, what I think I'm going to do is have you do course evaluations now, and I'm deliberately doing that because on Thursday I will open the class with a few other remarks on prisoner's dilemmas. We're now finished with everything you need to do the exam and the homework — we're finished with what you're responsible for in the course. I'm saving a little bit of discussion stuff in case we need an icebreaker next week. But I am going to ask you to do the course evaluations. I hope that you will take some time, and if you have any specific comments, write them on the back — I always read them and do the best I can to implement them for future classes. I'm not supposed to stand here and, you know, glare at you guys while you do them, so I'm going to ask Flory to pass them out — and can you also put this on the board? Great, thank you. So then I'll see you on Thursday.
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_3_UCLA.txt
Okay, so as promised: last week, on Thursday, I set up and solved the game for you. We talked about it a little bit, but as I said on Thursday, I didn't say all I wanted to say about that game — and indeed, you might not have had the opportunity to say all you want to say about it. So I'm going to put it up very quickly, go through it more quickly than we did on Thursday, and then I'll jump in, starting off with some observations about payoffs. Just to refresh your memories: this game has two players, an incumbent member of Congress and a challenger. They're both deciding whether to raise funds or not. They both want to win the election — only one of them can — and they both don't like fundraising either. The game had a sequence to it, so we represented it with a game tree, with the first mover's decision depicted at the top decision node. (I'm trying my best to use all those vocabulary terms I was throwing at you on Thursday.) The incumbent's decision node has a branch for each possible action the incumbent can take, and we were simplifying the situation a lot: we said the incumbent's choice was just raise funds or not — I'm abbreviating. And then, whatever the incumbent did, the challenger got to react. So the challenger has two decision nodes, but again I'll emphasize that only one of these is going to happen. Either we're going to go down this path of the tree — the incumbent raises funds, and the challenger then decides whether she wants to raise funds or not — or we're going to go down this branch of the tree. The challenger has two possible decisions she might have to make, but only one of them is actually going to appear in reality; only one branch is going to be taken. We set up the tree in the natural sequence: the thing that happens first at the top, and later decisions coming lower in the tree. At the very bottom of the tree we put the terminal nodes, and what's in the terminal nodes are the payoffs: numbers, one for each player, that indicate how well each player likes the outcome associated with that particular set of decisions by all players. Thursday we spent a lot of time talking about the outcomes — what would happen in the case that both candidates raised funds, what would happen if just the incumbent did, if just the challenger did, if neither did. We spent a lot of time on me explaining my assumptions to you about the outcomes. But as I said, the outcomes don't appear directly in the tree; we need to think about them in order to put the payoffs in the tree — and I'm going to, as I'm talking, put in those payoff numbers that we had last time — but the outcomes don't actually occur in it. If somebody who knew game theory walked into the room right now for the first time and looked at the tree on the board, they could solve the tree, but they wouldn't necessarily know what it's about; they would miss that part of it. All right, so the payoffs: first mover's payoff first. That's just a convention — you could do it the other way, but people don't, so when in Rome do as the Romans do, and when in game theory do as the game theorists do: put the first mover's payoff first. These were the numbers we were using. The idea of payoffs is very intuitive: it's easy to think of them as points in a game that you might play for fun, and I don't think anybody was hung up by the idea that players want higher payoffs. And in solving the game the way we solved it, working from the bottom up, the procedure was, for each decision node, to ask which branch would give the player who controls that node the higher payoff.
So using that idea, if we get to this node, the challenger compares her payoff from raising funds to her payoff from not raising funds. And I really want to emphasize that we're always comparing payoffs that belong to the same player: we're comparing the challenger's payoff for one outcome to the challenger's payoff for another. So this is the challenger's payoff from what I'm going to abbreviate "raise funds, raise funds" — incumbent raises funds, challenger raises funds — and this is the challenger's payoff from the outcome "raise funds, not." Raise funds/raise funds versus raise funds/not: we're comparing challenger to challenger. Never in game theory do we need to compare one player's payoff to another player's payoff. We never do that. If you find yourself doing that, pinch yourself — you're doing something wrong. We only compare payoffs for the same player. I'm getting a little ahead in my outline, but I think this is something I'm probably going to emphasize more than once: we don't make interpersonal comparisons of payoffs. Very taboo to do that. And the reason why is this: this idea of payoffs — you may or may not find it a reasonable way to represent people's preferences, and I'll talk about how reasonable it is, and when it might not be reasonable, later on — but it's much more reasonable to think that I, as one person, can give a higher or lower number to different things I could experience, in a way that reflects my preferences, than to think that I can compare how much it means to me to avoid fundraising versus how much it means to you to avoid fundraising. We can't compare my happiness, my utility, to somebody else's utility. One way to see this is to back off from this idea of payoffs and link it to a broader concept — a concept that, if you've taken any economics classes, I'm sure you've heard: the word utility. What we're doing with payoffs is exactly the same thought experiment — actually the exact same useful fiction — that we use in microeconomics when we use utility. It's the same idea: a number that represents how good you feel about something. In economics, utility numbers are often assigned to how good you feel about something you can buy: do you get higher utility from spending all your money going out to dinner and having a small apartment, or would you rather have a big apartment, stay home, and cook your dinner? That's the kind of scenario we encounter in a microeconomics class. Here we're more interested in assigning utility to political situations, but it's the same idea. It's a very old idea: the term utility is usually associated with Jeremy Bentham, a major philosopher who wrote in, I guess, the late eighteenth century. I don't actually know that he coined the term utility, but he certainly popularized it, and the use we make of it today owes much to Bentham. One way that people often understand the concept of utility is that it gives us a way to compare apples and oranges — that's the cliché, and you may have heard it expressed the other way: people tend to say, if they don't know how to make a choice, or they're uncomfortable with a choice they have to make, "well, is this a better apple than that is an orange? I don't know — you can't make that comparison." In one sense you can't make that comparison — in some global sense of what is the perfect apple and what is the perfect orange — but we don't need to do that. What you can say, though — what we do
all the time, what we could not get through the day without doing — is make comparisons: do I want an apple now or an orange now? Do I want to speed up and go through the yellow, or do I want to stop? We're just constantly comparing situations that are different, and the idea of a utility function — the idea of assigning utility numbers to different things that can happen — is just a way for us to organize what we think we ourselves are doing when we make comparisons and make choices based on those comparisons all the time. It's a way to think about what we are doing ourselves, and also — for this class, more important — a way for us to talk about what we think other people are doing. When we're doing social science, we're constantly talking about what other people are doing, and as I tried to emphasize last week, we need to be thinking about what our assumptions are about the preferences of the people we're trying to understand. Utility numbers are the conventional way — in economics, in game theory, in decision theory and the management sciences, sort of throughout the social sciences — the conventional way to understand people's preferences, to understand how people make choices: choices that involve comparing apples and oranges, comparing unlike things. So I hope you can kind of see how the utility idea gives us a way to compare apples and oranges. If the comparison is apples to apples, and it's quantity — do I want three apples or one apple — let's say I like apples: I'll choose three. That's easy; I don't need utility for that, I can just count apples. Do I choose one apple or one orange? Now I don't know what I'm going to do, because they're different things. But if I map both of those things, apples and oranges, into the same thing — into a number — I can compare numbers. By assigning numbers to every possible thing, we can compare numbers. That's the idea. Let me emphasize again — I feel like I can't emphasize this enough — nobody thinks utility is real. We don't think it's something that, you know, these days they could do an MRI and see what parts of your brain are lighting up, and measure how much dopamine you have floating around in your brain. Some of those things may indeed give us some insight into how good a person is feeling at a particular point in time, but that's not what we think utility is. Utility is a useful fiction. We don't think it's real; it's just a way of helping us talk sensibly and coherently about preferences. And game theory is based on the idea that preferences are real — not that they're always stable, but there is enough reality to preferences that we need to think about them to understand how people behave and how people interact. So the fiction is useful. It's tolerable to think that an individual person can compare apples and oranges — I think it's reasonable to assume that. It's unreasonable to say the enjoyment I get from an apple is more than the enjoyment you get from an apple; we just can't know that. Now, if you were taking an analytic philosophy course — there are courses in the philosophy department that deal with issues involving utility, and actually with some issues in game theory as well — that question of interpersonal comparisons would be on the table. So maybe I shouldn't come on too strong and say that we absolutely can't do it. A more reasonable thing for me to say, a more judicious thing, is that in game theory we don't have to do it. It's much, much dicier to think that we can compare
one person's preferences to another's — and the nice thing is that in game theory we just don't have to, so that's very good. Okay, that's what I wanted to say about interpersonal comparisons. I think some of why you might care about interpersonal comparisons, why you might be tempted to make them, will be clearer once you start working on your homework problem. So the take-home message here is: never, never compare one person's payoff to another person's payoff. It doesn't make sense; it's not part of the standard utility thought experiment. The other thing I wanted to say about utility — about these payoff numbers — is that I was gliding over them a little quickly on Thursday. I said that the numbers represent the order of preference; I think I very glibly said that what you get from the utility number is "high number good, low number bad." And that's true — you do get that — but you get a little bit more. Utility numbers for a person represent not only the order of preference — it's not just that the higher number is better for the person — they also represent the person's intensity of preference. So over here let me summarize the assumptions I made about the preferences for both candidates. I had written them separately on Thursday, but there was a clear parallel: both candidates, we said, got a payoff of 10 from the outcome where they win the election with no fundraising. Implicit in this little box of payoffs that I'm going to put up here is that, one, they care about whether they win, and two, they care about their own fundraising. If I'm the incumbent, I'd prefer not to raise funds, but I don't care whether the challenger does; if I'm the challenger, I care about whether the challenger raises funds and don't care about whether the incumbent does. The other possible outcomes — now I've got both candidates together, so all of these are going to be possible for at least one player: winning while raising funds was worth eight, losing with no fundraising was worth three, and losing while raising funds — that was the worst outcome — we said was worth one. So just as I wrote it up there, it runs from best outcome to worst outcome. But in passing, what I mentioned was that you could think about these numbers as capturing two independent aspects of the situation — the way I set it up, there are two independent aspects. Winning is worth seven points. How do I know? If I hold the fundraising constant, the difference between winning and losing is ten minus three, that's seven, or eight minus one, that's also seven. And raising funds — what's raising funds worth to my utility? Negative two. The way I wrote it here, if I'd said "not raising funds," that would be plus two; as written, raising funds costs you two. I'm emphasizing this seemingly minor point of language because you're going to have to deal with these negative numbers in this problem set and in all the problem sets you get. More so than in economic applications of utility theory, in political science good things happen and bad things happen. It's not just giving up money to get stuff that is good; it's making choices that bring about good consequences and choices that bring about bad consequences. So when we put together the composite numbers that represent our net payoff from whatever outcome we're studying, we have to remember that good things increase our utility and bad things decrease it. Another way to think about this: when you're reading your problem set, I'm pretty sure I use the word cost at some point — and if I didn't in this problem set, I'm sure I will in a future one.
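As a small sketch of building composite payoffs out of the two independent components just described — winning worth 7, fundraising costing 2, with the lose-and-don't-fundraise baseline of 3 — something like this reproduces the four numbers on the board:

```python
# Composite payoffs from independent components: a baseline of 3,
# plus 7 for winning, minus 2 for raising funds. Good things add
# utility; costs subtract it.

def payoff(win, raised_funds, baseline=3):
    """Net utility: baseline, plus 7 if you win, minus 2 if you raised funds."""
    return baseline + (7 if win else 0) - (2 if raised_funds else 0)

print(payoff(win=True,  raised_funds=False))  # 10  win, no fundraising
print(payoff(win=True,  raised_funds=True))   # 8   win, fundraising
print(payoff(win=False, raised_funds=False))  # 3   lose, no fundraising
print(payoff(win=False, raised_funds=True))   # 1   lose, fundraising
```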
Another way we could say the same thing is that raising funds costs two units of utility. It might seem obvious, but every year people get confused dealing with aspects of choices that cost the players some utility. If it costs the players some utility, it means you subtract that amount of utility. So if this were a problem set and I were giving you a scenario analogous to the one I've given you today, I might actually write something like what I just said: each player values winning at seven points; each player dislikes fundraising, and it costs the player two units of utility. That's the way to take that kind of ordinary language and translate it into a game. Again, I said this on Thursday, but I think I said it really quickly: if we think about winning as worth seven points and fundraising as worth negative two points — utility units, whatever you want to call them — there's sort of an implied baseline here. What's your payoff when you don't win and you don't raise funds? It's up there: three. The baseline in this problem is three; I think I sort of flippantly said that three was the value of a day job outside of politics — not being in office, but not raising funds either. Most of the time we will use a baseline of zero. We get to pick what our baseline is. When I set up this example, I deliberately didn't pick a baseline of zero, because what I want to show you — and I think I will show you explicitly in just a second — is that it doesn't matter. But usually, having some kind of neutral outcome associated with the baseline payoff is convenient: make it easy for yourself, make it zero. Yes? — the question is — what's your name? — Neve asked whether the baseline is the outcome where you don't have any costs and you don't have any benefits. That is a natural one to choose as a baseline, and you will not go wrong by choosing it; in some scenarios it's hard to figure out what that should be, and what I want to let you know is that it doesn't matter. You could pick any one of these outcomes as the baseline, and you would get different numbers to represent the utility. So, for example, I could let the best possible outcome be a hundred instead of ten — you'd just multiply everything by ten. I could let the best possible outcome be five and make everything else relative to that, or the best possible outcome could be zero and all the other numbers negative. As long as the relationship between the numbers is the same, I'll get the same answer in the game — and if you think about what I did to solve the game, of course that makes sense: I'm just comparing the numbers. So you can, (a), pick any outcome as your baseline, and, (b), pick any numbers you want. I could pick these numbers and it wouldn't change anything; I could pick 1, .8, .3, .1 and that wouldn't change anything either. I actually have a great deal of freedom in picking my numbers. What we're going to see, probably by the end of the week, is that most of the time in social science, when we do game theory, we don't actually use numbers — we use variables, and the variables will allow us to do a little bit more. Those variables are sensitive to the fact that the numbers themselves don't matter as much as the relationships between them. So let me step back: the baseline is not part of the game. You could think of it as a stepping stone between the story in ordinary language and the game.
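Here is a minimal sketch of the point that the baseline doesn't matter: shifting every payoff by the same constant — moving the baseline from 3 to 0 — never changes which option a player prefers. The outcome labels are mine, not the lecture's notation:

```python
# Shifting all payoffs by a common constant preserves every comparison,
# because solving the game only ever compares one player's numbers.

black = {"win_no_rf": 10, "win_rf": 8, "lose_no_rf": 3, "lose_rf": 1}
blue  = {k: v - 3 for k, v in black.items()}   # same preferences, baseline 0

def best(payoffs, options):
    """Pick the option with the higher payoff; only the order matters."""
    return max(options, key=lambda o: payoffs[o])

# The challenger at the node after the incumbent raises funds chooses
# between losing without fundraising and losing with fundraising:
print(best(black, ["lose_no_rf", "lose_rf"]))  # lose_no_rf (3 beats 1)
print(best(blue,  ["lose_no_rf", "lose_rf"]))  # lose_no_rf (0 beats -2)
```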
Something that you'll be doing in your homework, and something I'm going to be doing for you throughout this class, is explaining the logic of how you go from the story to the game. That's an important part of knowing how to do game theory, and — let me step back and say — it's something you want to do in your homeworks. The homeworks will say "write down the game tree," and yes, you should write down the game tree, but it's actually a good idea to write a little paragraph explaining how you're setting it up. That kind of supporting statement is part of using game theory to say something about politics, and picking a baseline outcome is a good way to explain why you're setting up the game the way you do. In many contexts the baseline will be natural — most people would think that one is natural. It's actually not clear to me that this game has a natural one: losing without raising funds, you can sort of see that, but I could also see a logic to picking this one as the baseline and having everything be relative to it. Neve is suggesting — and I think it's a good idea — that if the game has kind of a status quo — this is what happens if nobody does anything, and here are different ways people can change it and then react to what another player does — that status quo is often a natural thing to use as a baseline. Okay, let me — I'm going to leave these numbers here, and let's see — hmm, not such a good color; let's use blue here. Another possibility would be just to take the advice I gave you a minute ago: don't make the baseline 3, make it zero. Remember, I'm making these numbers up; they're just designed to convey my idea, so I can pick a number, and 0 is actually a nicer number. I'm going to leave the other two numbers alone: winning is worth 7 points, and raising funds gives me negative 2. So now I've got a different set of payoffs, with a baseline of zero. If, starting from that baseline, I win and I don't raise funds, what's that going to give me for a payoff? Seven. If, starting from this baseline of zero, I win but I have to raise funds, what's my payoff? Five. If, starting from this baseline, I raise funds and I lose, what do I get? Negative two. This is actually a good example, because one thing it illustrates is that there's nothing special about negative numbers — you just treat them the same. Let's put those alternate payoffs here in the game — that doesn't look alternate, that looks the same — my alternate payoffs would be: in this situation the incumbent's payoff is — I won, I raised funds — a five. What's the challenger's payoff? Negative two, very good. The incumbent's payoff here: five, right — the incumbent raised funds here. What's the challenger's payoff here? Zero — that's my baseline. The incumbent's payoff here? Zero, yeah — and the challenger's payoff is five. The incumbent's payoff here: there's that seven, that lovely payoff. And right here, the baseline again. So it's a different game now: instead of the game with the black payoffs, it's the game with the blue payoffs. But we solve it the same way, with the same algorithm — let's just do it. I'm the challenger here: if I get to this node, would I rather have a payoff of negative two or zero? I'll take the zero, thank you; zero is nothing, but at least it's not negative. So I will not raise funds if we get to this point in the game. If we go down this path in the tree, I'm the challenger, and I'm comparing five to zero; five is good, I'll take that. I'm now — what would everyone say — halfway, two-thirds of the way through solving the game.
Right now I've decided what I'm going to do at each of these nodes. Now I replace these decision nodes with their strategic equivalents. The strategic equivalent of this decision node is the pair of payoffs associated with the optimal choice at this node. The optimal choice at this node for the challenger is "not," and the payoffs associated with that choice are eight, three. The slightly tricky thing here is that the strategic equivalent has a payoff for both players, but the reason why this is the strategic equivalent, and not that, is based only on the challenger's choice. So I've just pruned this whole part of the tree; I don't have to think about it anymore. All I need to know about this node is that its strategic equivalent is 8, 3; over here, all I need to know is that the strategic equivalent is 0, 5. Yes? — okay, even this early in the day I'm starting to make those mistakes. What — sorry? I did it right here — I was pointing to the right place, but right now, green — let me emphasize this — green is supposed to be solving the blue-payoff game, and the blue-payoff game doesn't have these payoffs in it. The strategic equivalent is five, zero. Yeah — oh, that's a particularly pernicious kind of mistake, because I think I would have gotten the right answer anyway, and those are the worst; the ones where I get the wrong answer I can usually catch myself on. So thank you. Okay, so we have the strategic equivalents of both of those nodes. The final step in solving the game: back up a level to this decision node. The incumbent looks at the two choices and the strategic equivalent associated with each. This is the key ingredient in thinking strategically: the incumbent does not look at "raise funds or not" and say, oh, raising funds costs me two units of utility. Doing that would be wrong; it would be not strategic. When we use game theory to understand politics, we assume that people are too smart to do that. We assume the incumbent will say: raising funds is a pain in the short run, but it is the equivalent of a payoff of five; not raising funds is nice in the short run, but if I anticipate what my challenger is going to do, I'm going to get a payoff of zero. So by replacing this row of decision nodes with their strategic equivalents, I can solve the higher node. If it were a game with even more nodes, I would then replace this decision node with its strategic equivalent — this five, zero would bump up even higher — and I would just keep doing that until I got to the very top of the tree. Now, the point of this example was to emphasize that we get the same result when we use different numbers, so I hope that part came through clearly.
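For anyone who likes to see the algorithm spelled out, here is a minimal rollback sketch of the fundraising game with the blue, baseline-zero payoffs; the node labels and the dictionary layout are my own reconstruction of the board, not the lecture's notation:

```python
# Backward induction ("rollback") on the fundraising game.
# Terminal payoffs are (incumbent, challenger) -- first mover first.
game = {
    "RF":  {"rf": (5, -2), "not": (5, 0)},   # incumbent raised funds
    "NOT": {"rf": (0, 5),  "not": (7, 0)},   # incumbent did not
}

def solve():
    # Step 1: replace each challenger node with its strategic equivalent,
    # comparing the challenger's payoffs only.
    challenger_plan, equivalents = {}, {}
    for inc_move, node in game.items():
        best = max(node, key=lambda a: node[a][1])
        challenger_plan[inc_move] = best
        equivalents[inc_move] = node[best]
    # Step 2: the incumbent compares the strategic equivalents of his branches.
    inc_move = max(equivalents, key=lambda m: equivalents[m][0])
    return inc_move, challenger_plan, equivalents[inc_move]

print(solve())  # ('RF', {'RF': 'not', 'NOT': 'rf'}, (5, 0))
```

Rerunning this with the black payoffs shifted by any common constant gives the same moves, which is the invariance point from a moment ago.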
All right, I think I'm just going to leave the game with this double set of payoffs here, and let me catch my breath a second and emphasize two basic points. One basic point is that by solving the game we get a prediction about what's going to happen: we predict that the incumbent is going to raise funds and the challenger is not going to raise funds. So one thing we get from a game is a prediction about what the players are going to do, and sometimes that is indeed what we use game theory for. That use of game theory comes up practically in what are sometimes called questions of institutional design. If you don't like what's happening in a situation — if you're the professor and you don't like the way the students are performing — you might redesign the one institution you can control, which is the syllabus. And if you're a game theory professor, you think strategically when you do that: you try to put you guys in a position where your optimal choice will be the one that's going to make you learn the most. In more serious situations: if you are the United Nations and you are sending blue helmets to a country whose people are killing each other, and you're trying to figure out some way to rearrange how people move about, how people trade with each other, how people make their livings, so that there's not such an incentive for violence and crime — and this happens in a wide variety of situations — you might actually use game trees to try to figure out what people's preferences are (that's always a key component of this), and then figure out how they would respond to different sets of choices. Sometimes you're in a position where you can change the choices that are available to people — or, more likely, you're part of a conversation where a group of interested parties is trying to change the choices people can make, and how those choices add up to outcomes, in a way that would make everybody better off. In that case, using a game to figure out what the outcome would be would be helpful. So I don't want to denigrate the fact that the game gives us a prediction. It's just not worth a whole lot in this particular context, because we started with the observation that this is what happens. I didn't need any game theory when I first told you this little puzzle on Thursday — that incumbents raise funds so insatiably even though their challengers don't seem to do much and the incumbents always win anyway. We didn't need the game to tell us that incumbents would raise a lot of funds and that challengers often would not. What we were looking for was an explanation for why. That's something else we get from game theory: an understanding of why the predicted outcome occurs. More important, that understanding of why the predicted outcome occurs always depends on the outcomes that didn't occur. I've been emphasizing this all along, and I will continue to, because I think it's one of the most important lessons from game theory: if we want to understand what is going on in the world, if we want to understand why it's going on, we have to ask ourselves what else could have happened, and why not. What that means is that whenever we're analyzing a game, we're not just satisfied with predicting what's going to happen; we also want to say something about what we think would happen in the counterfactual. That's key to the logic of game theory. We may not get to the full explanation of this today, but we're going to be developing this idea of an equilibrium: a situation where neither player, by themselves, can improve by making a different choice. Given that the incumbent has raised funds, the challenger is not going to do better by making a different choice than she did; and given that the challenger's strategy is to raise funds if the incumbent doesn't and not raise funds if he does, there's nothing the incumbent can do that would bring about a better outcome. The idea of equilibrium brings with it the idea of what we'll call the equilibrium path. I'll write it over here: the equilibrium path is what we expect at each decision node. Another phrase for this is the predicted outcome — what we think the players are going to do, what we think we're going to see in the situation. But just as important is what's going on off the equilibrium path.
What's going on off the equilibrium path, over here, is what helps us understand why. So I want to add some more vocabulary terms that are going to help us manage this counterfactual part of the solution of a game — this idea that when we understand the strategic situation, we understand not only what's happening on the equilibrium path but also what's happening off it. The reason we need to know what would be going on off the equilibrium path is that it's the reason why we observe what we do. All right — I'm not going to use green; that's not a good color. So I think I will just put a couple of definitions here. First of all, I'm going to make a distinction between strategies and actions. In ordinary language those two things could mean the same thing — "what's your strategy? what are you going to choose?" — they almost seem synonymous. In game theory, a strategy is a more complicated thing than an action. So — somebody likes to mine this tray with these fake markers that don't work — an action is a choice at a node. An action is a really simple thing; an action corresponds to one branch. So this is a decision node with two possible actions; here's another one with two possible actions. At some point we'll probably do an example where the decision-maker has three possible actions at one node — and guess what, they choose the highest payoff of the three, or the four, or however many; nothing too strange about that. An action is just this one simple choice, one branch in a tree. A strategy is composed of actions. Let me give the definition in a complete sentence: a player's strategy is an action for each node she controls in the game. All right, let's go back to the black payoffs now — I'm going to get rid of the blue ones — and let's use this board space to talk about strategies. What is an example of a strategy for the challenger? ... No? People don't know how to put it in words; I think you know what one is. What would be an example of a strategy for the incumbent? Raise funds. The incumbent only has two possible strategies in this game: raise funds or not. For the incumbent, there is no difference between a strategy and an action, because the incumbent only controls one decision node. Let me put the question to you in slightly more precise terms: the challenger has some strategies in this game — what's the challenger's best strategy, the strategy we think the challenger is going to play? Say it louder — raise funds when the incumbent does not, and not when the incumbent does. The challenger's strategy is more of a mouthful; that's why it was kind of unfair for me to ask you to say it — I've usually been asking you things you can answer with one word. This is one of the challenger's four possible strategies. I'm going to write down another one, and I'm going to use a kind of shorthand that you're going to find very useful for your homeworks: RF if RF, RF if N. Another possible strategy the challenger has — it's not the challenger's best strategy, but it's a possible one — is to raise funds if the incumbent raises funds, and to raise funds if he doesn't. The challenger could just be a fundraising Energizer Bunny. That's a strategy — not the best one, not as good as this one, but it's a possibility.
There's another, even shorter way to abbreviate. The way I just did it is actually the way I like to do it, and I would encourage you to use it on your homeworks and even more so on your exam, because this way, to me, stays in touch with the story: I raise funds if the incumbent raises funds; I raise funds if the incumbent doesn't. I say "especially on the test" because on the test, especially the midterm — we only have an hour and fifteen minutes — you're going to be pressed for time, you're going to think "I've just got to write as fast as possible," and so you're going to be tempted to use the form they use in the book. Let me show you what it is. Another way to write the same strategy is just a bare string of actions — say, "raise funds, raise funds" — and this is the most terse abbreviation of the challenger's strategy. Let me decode it for you: the first entry is the action at the leftmost node, the second is the action at the next node. In coming up with a strategy for a player who is the second mover, or a third mover, what you do is go across the row of decision nodes controlled by that player and list all the combinations of actions at those nodes. One way to do it would just be to say: a strategy is raise funds here and not here; raise funds here and raise funds here; not here and not here; not raise funds here and raise funds here — I think those were the four, yeah. I don't want to say this is wrong — people definitely do it — but what I will say is that if you're doing it fast on an exam, even though it seems like you're saving time, if you get confused, if you get to an answer that doesn't make sense and you want to go back and check your work, you're more likely to understand what's going on in the game if you have it set up with the "if" labels; the bare string is just going to look like symbols. So that is my preferred way to denote a strategy. When you see people denoting strategies just by a string of actions, the way to interpret it is that it's the action associated with the leftmost node first and the rightmost node second, in this case where there are just two; if there were a third decision node over here, we'd have to have three actions. Okay — the challenger has one more strategy. What's the challenger's other strategy? Don't raise funds no matter what — that's exactly right. So number four is: N if RF, N if N. For both of these abbreviations, what you want to keep in mind is that for each decision node the player has, you're going to have that many things strung together by commas. The actions by themselves really are all you need to have the full strategy; it's just a little helpful, I think, to keep this level of background information in there. All right, so relationship one is strategies versus actions. Actions are like our atoms — the smallest thing, one branch at a node. Strategies are combinations of actions. Sometimes a strategy is as simple as a single action, as in the case of the incumbent; sometimes the strategies are a little more complicated. If you get big game trees with lots of decision nodes, just writing down the strategies can be a challenge. Thankfully, we can do a lot of analysis — I think we can actually get more insight — from small games with few decision nodes than from big hairy games with a million decision nodes. So: strategies are more complicated than actions, and the next step up is that equilibria are more complicated than strategies.
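Since a strategy is one action for each node the player controls, enumerating a second mover's strategies is just a cross product. A small sketch — the node labels are my own invention, not the lecture's:

```python
# Enumerating the challenger's four strategies: one action at each of
# the two nodes she controls, so 2 x 2 combinations in total.
from itertools import product

challenger_nodes = ["if_incumbent_RF", "if_incumbent_N"]
actions = ["RF", "N"]

for s in product(actions, repeat=len(challenger_nodes)):
    print(", ".join(f"{a} {node}" for a, node in zip(s, challenger_nodes)))
# RF if_incumbent_RF, RF if_incumbent_N
# RF if_incumbent_RF, N if_incumbent_N
# N if_incumbent_RF, RF if_incumbent_N
# N if_incumbent_RF, N if_incumbent_N
```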
Okay — I think I want to leave this, and I've already started writing some stuff about equilibrium over here, so let's get the full definition. An equilibrium is a strategy for each player such that neither player could do better, given the other player's — or players', however many other players there are — strategies. So we're building up. An action is composed of nothing — an action is just an action. A strategy is composed of actions, and when I wrote the definition of a strategy, I had to specify that a strategy belongs to a player: when you ask what a strategy is, you have to single out one player. An equilibrium is made of strategies, one for each player. Strategy: an action for each node. Equilibrium: a strategy for each player. The components are simple, but they build up. It's not just any old strategies, though: if the strategy set that composes the equilibrium really is an equilibrium, it means that neither player can do better by changing unless the other player changes. So: you keep on doing your thing — you're the challenger, I'm the incumbent. If you are playing this green strategy here — if you're going to not raise funds when I do, but raise funds if I don't — then I can't do better by not raising funds. Similarly, if I'm the challenger and the incumbent is going to raise funds, I can't do better by changing my strategy either; there's nothing I can do, if the incumbent is going to raise funds, to get myself a higher payoff. The challenger's part of the equilibrium analysis is kind of obvious, because it was part of solving the game. We have to step back to see the incumbent's part of it: that given the challenger's full strategy, the incumbent cannot do better by changing. And again, that's how we're getting this explanation from the game of why things are the way they are. After class on Thursday I got a question that pointed to this ten payoff over here — great payoff, right? Why can't the incumbent get to that payoff? Because of the strategy that the challenger is playing. In one sense the incumbent can do better in this game than the outcome we're predicting — the incumbent could get a ten — but not when the challenger is playing this strategy. That's why the incumbent's choice is part of the equilibrium. So let me write the punch line here. The equilibrium in this game is: incumbent — raise funds; challenger — not if the incumbent raises funds, but raise funds if not. On your homeworks, on the exams, in any application of game theory, any time somebody asks you what the equilibrium in a sequential game is, this is the kind of thing they're looking for: a pair of strategies, one for each player — and, to give the definition in slightly different language, each strategy is a best response to the other. My strategy is the best response to your strategy; your strategy is the best response to my strategy; our choices reinforce each other. You're the incumbent: given what you did, I'm the challenger and I'm glad I didn't raise funds. It's a little bit harder to see from the incumbent's side, but now we're able to: if I'm the incumbent, given that this is your whole strategy — including the off-the-equilibrium-path part of it — given what you did and what you would do, I don't regret what I did. So the second mover's strategy has to include both what
they do in equilibrium and what they would do otherwise, because what they would do otherwise is often the reason why the first mover makes the choice that she does. So let me just emphasize that this is a restated definition of equilibrium — the same idea, just in different words. Another vocabulary thing I want to alert you to: the book calls this rollback equilibrium. What we do when we solve the game with the rollback process is find the equilibrium — that's been at the core of everything we've done so far. Later in the course we'll start to talk about what "rollback" means; right now rollback is the only kind of equilibrium we know about, so just sit tight with that. When we do the rollback process, what we get are the strategies in a sequential game that satisfy this idea of equilibrium. What makes rollback special is that it only applies to games where first one player moves and then another player moves. What we're going to get to in probably the last two-thirds of the class is the same idea of equilibrium, but in cases where the players make their choices at the same time — those are a little bit harder. Let's see, what else — all right, I want to do one last thing; I think we've got just the right amount of time to do it. This difference between strategies and actions: an action is a simple thing, strategies are composed of actions, strategies are more complicated things. You guys are all nodding right now — I know you get it right now, and I know some of you are going to lose it. I don't understand why it's slippery, but it is slippery; it's not the natural way we think about strategy. So, to juxtapose it with ordinary language: here's a challenger. You're working for this challenger, or you're a journalist interviewing him, and you say, what's your fundraising strategy? The challenger could give the game-theoretic form of the strategy in ordinary language: my strategy is, if the incumbent raises funds I'll do one thing, and if the incumbent doesn't I'll do another. But more likely — especially if we already know what the incumbent is going to do — if you asked the challenger what his strategy is going to be, he wouldn't necessarily just say "I'm not going to fundraise." He'd say, you know, "I'm above money politics, so I'm going to rely on the strength of my ideas and my grassroots support." You'd know what his strategy was — but, more important, that ordinary-language response he would give would just be an action. So in ordinary language, in the newspaper, when we talk about strategy in politics — and I think it's also true when we talk about strategy in games and sports — a full strategy in a football game or something like that would have branches in the tree: what are we going to do if they do this, what are we going to do if they do that. But in the middle of the game, "what's the strategy going to be on this play?" — the ordinary-language answer to that would be more like an action. Because keeping the distinction between strategy and action straight is a little tricky — it's unusual for us; it's doing something we don't otherwise do — I think on your homework I've asked you to do some strategy counting, and if I haven't, I will. Let's count some strategies in this game. This game is going to be kind of quick and dirty; it's not going to have a story. We're just going to have player 1 here, and player 1 has three possible actions.
They are left, middle, and right — how interesting is this story? Once player one makes a choice, player two makes a choice, and regardless of what player 1 does, player 2 can just choose left or right. We're not going to solve this game, so I'm not even going to put payoffs here. But something you sometimes want to ask yourself about a game, if you're trying to figure out whether you set it up right, is how many strategies each player has — or to enumerate all the strategies, just to make sure they're all there. So, player 1: how many strategies? (Excuse me — question in the back.) Three. What are they? Left, middle, and right. Player 1, no problem. Player 2: what is one strategy that player two could play? And as much as I don't like this format for writing it down, it's okay to call it out as an answer. "L comma L" — actually, say the whole thing; see the whole thing. Not quite — you were thinking of "L comma R." There are games that would have that, but in this game we've got player two's decision here, player two's decision here, and player two's decision here. So here's a strategy; another one would be L, R, L. How about this one? Another one? Do you guys see what I need to do here? The strategy needs to tell us player two's action for each node, and there are three possible nodes that player two could find herself at. So player two's strategies have three components. One way I find helpful to think about strategies is that a strategy is something you could program into a computer. If you've ever done any programming, the frustrating thing about computers is that they have no common sense: they won't see a pattern; you have to tell the program what to do for every possibility that can arise; they have no ability to think for themselves. That's what a strategy does. A strategy really is something that you could program into a computer. So what I need here — going back to the definition — is an action at each decision node controlled by player two. Again, whenever we think about strategies we have to think about which player's they are, because we have to know how many nodes will be included. Player two — I'm going to use letters here — could be at node A, node B, or node C. If I weren't already using numbers to represent the players, I'd probably number these one, two, and three. The labels aren't necessarily part of the game, but they help me remember that all three of these nodes belong to player two. So what I'm doing here is asking: what is a set of actions, one for each node? And the key is that player 2 has three nodes, so a strategy needs three components. Yes — there are more, there are more. Got a candidate for one? "R, L, R" — that's a good one. It's not just those three. Anybody want to think about how many strategies there are in total? Eight. Eight is right — how'd you get eight? It's two to the third power; that's exactly right. The total number of strategies is the number of choices here times the number of choices here times the number of choices here: two times two times two.
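That counting logic — multiply the number of actions across the nodes a player controls — is a one-liner. A quick sketch, where the node and action counts are the ones from the board:

```python
# Strategy counting: the number of strategies is the product, across
# the nodes a player controls, of the number of actions at each node.
from math import prod

def n_strategies(actions_per_node):
    """Product of the action counts, one entry per node the player controls."""
    return prod(actions_per_node)

print(n_strategies([3]))        # player 1: one node with 3 actions -> 3 strategies
print(n_strategies([2, 2, 2]))  # player 2: three 2-action nodes -> 2**3 = 8
```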
Now, switching to black here, let me add a middle action at this node — so now I'm making kind of a very funky game. Player one can choose left, middle — I guess middle, not medium; sort of the same thing — or right. If one chooses left, two can choose left or right; if one chooses middle, two can choose left, middle, or right; and if one chooses right, two can choose left or right. Maybe one choosing middle is actually paving a road that two can only walk on if one has already started it. Now how many strategies do we have in that game? Twelve — because we have two times three times two. The total number of strategies is the product, across all the nodes a player controls, of the number of actions at each node. This looks weird, doesn't it? You'll do a couple of examples — and if you feel like you're frustrated with a homework example, the first two or three problems in Dixit and Skeath, those early problems in the back of the chapter, have some strategy-counting problems, and I would recommend them. I'm going to recommend some problems in Dixit and Skeath for you before the midterm anyway, but it certainly doesn't hurt to start doing some now. Okay, so stay tuned Thursday — we're not done with this game yet, but we're almost done with it.
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_1_UCLA.txt
So, syllabi should be going around — where are they at this point? Okay, they're coming around slowly. I'm going to go through some logistical issues at the beginning; for the first couple of things I need to talk about, you don't really need to refer to the syllabus. We'll go over the syllabus, and then I'll give you something of a substantive introduction to what we're going to be doing in this class. The very first thing I want to talk about is a set of questions regarding enrollment. Some of you don't need to worry about this: in particular, if you are already enrolled in a section that you can attend, and you have taken the math pretest that's been on the course website for some time now — taken it and done fine on it — you can read the syllabus, you can relax; you don't need to worry for a few minutes. If one of those things is not true for you, then you need to pay close attention. Along with the math pretest, there's also the enrollment policy that's been posted on the website for some time; I'm going to go over it for those of you who might not have already seen it. The basic bottom line on enrollment is that Poli Sci 30 is almost always the first political science class to close. I'm sorry about that; I wish we could accommodate more of you. The way we try to accommodate you is by offering it every quarter. If you are not enrolled at all and not on the wait list, I have to tell you, your chances of getting in this quarter are not good. The other thing I want to emphasize, though, is that the only way anybody who is not in PS 30 gets into the class is through URSA. No matter how special you think your circumstances are, no matter how unusual a story you have to tell me, I'm not going to listen. I'm sure some of the reasons I hear from students about why they should have special consideration are true, but there are so many of them that I don't have time to verify them. So the best thing I can say is that if you're not in the class and you're hoping for a space to become available, I'm not going to mess you up by giving a PTE to anybody. I don't give PTEs; the TAs don't give PTEs. The only way you will get into Poli Sci 30 is by getting a spot through URSA. So that's not a happy situation, and as I say, I'm sorry about it. The political science department is doing its best to ameliorate it, but it's kind of like adding lanes to the 405: the more sections we add, the more you guys come — and I guess that's a good sign about Poli Sci 30. In any case: if you're not in the class and you're thinking there's some other 11:00 class you might have a better chance of getting into, my feelings won't be hurt if you go check that out now; it's probably the right thing to do. There's another set of people I've heard from this year and in past years: people who have enrolled in this class but are in a section that they can't attend. You have two choices if you're in that situation: you can either rearrange the rest of your schedule, or you can drop the class. Given how many people want the positions in this class, if you are signed up for a section that you truly can't attend, you can't do the class. So please, if that describes you, drop the section you can't attend and make space for somebody who can attend it. Do it now — do it this week, while you still have a chance to pick up a section that does fit your schedule. You cannot get credit for the class by attending a discussion section different from the one you're enrolled in, and discussion sections are required.
They are part of your grade. I don't know how many people are in that situation, and I don't know how many of the people who are will respond by actually dropping the class — maybe they'll be able to rearrange other parts of their schedule so they can attend sections. But one hope I would offer those of you trying to get into the class is that maybe some of the people currently signed up for sections that don't work for them will free up a space that you might be able to grab. The other reason somebody who is enrolled in the class might decide not to take it is if you don't have the necessary math background. I don't want to oversell this — you don't need a lot of math to do this. Poli Sci 30 presumes that you're comfortable doing algebra, and everybody in this room who got into UCLA was certified as having been able to do algebra at some time early in your life. The problem is that some of you may have been certified as having passed algebra in eighth or ninth grade but might not be able to do it now — and if you can't do it now, you can't take this class. The subject we're going to be studying here, the subject of game theory — the standard tool for understanding strategic situations in politics and in other areas — is based on algebra. So if you can't do algebra and you try to take Poli Sci 30, it's like trying to take an intermediate jazz class without being able to play your instrument; it's just not going to work. And just as it's not too late for you guys to learn how to play an instrument and take the intermediate jazz class in a few years when you're ready, it's not too late for those of you who are not able to do algebra to take either Math 1 or Math 2, or to take a community college class, and refresh that material. In fact, if you take the math pretest and find you can't do it, I can't urge you strongly enough to do something about the algebra problem before you leave UCLA. It doesn't get any easier to learn. If you're not comfortable doing algebra, you are going to be systematically shut out of a lot of interesting conversations that you probably have something to contribute to. If you're here at UCLA, you're energetic and engaged people, but if you can't do mathematics at the level of algebra, your other talents are not going to be used to the extent that you would like and that society would like. So, with all that wind-up: there is a short math problem posted on the website. If you can do that problem — if you can do it in five minutes — most of you will be underwhelmed by it; it is really not hard — you should be fine. If you can't do it, or if you find that it takes you a really long time, then you probably should drop this course, brush up your math skills, and try to take it some time in the future. Another way to approach the issue of the mathematics prerequisite: when I write the exams, I presume the algebra is not going to be hard for you, I presume it's not going to take you a lot of time, and I presume you're going to be pretty fluent in it — just as, if this were a literature class in a foreign language, I would presume that your ability to read the language would not be an issue. So if you haven't looked at that math pretest, please do it today. The answer is also posted on the website, in a different file, so that you have time to answer the question and then check your answer. And if you think there's going to be a problem — again, everybody is
You're better off, because you can find another class that you can take, and somebody else is better off, because they can take your spot in the class. The final enrollment issue, at least I think it's the final one, is: what if you're on the wait list? I am pretty sure that I'm going to be able to let in the wait-listed students; I will know for sure on Thursday. If you are on the wait list (again, for a section you can attend; that's critical), then I would be optimistic. I can't guarantee it, but it's looking good, and I will do my best; I will give you a firm answer on that on Thursday. Any other questions on enrollment? Okay. I have one other comment I want to make about enrollment. If you're listening to me and your heart is sinking and you think, gosh, I really need Poli Sci 30 to graduate: go talk to Jim Bant and Cathy Escabido in the Poli Sci office. Those are our two counselors, and they're extremely good. The fact is, nobody needs this class to graduate. It's a great class, and if you can't take it this quarter I hope you can take it sometime in the future, but you don't need it. You probably have more options than you realize, and if you feel like you're backed into a wall (or hopefully even if you don't feel that way), go in and talk to the counselors; they can do you a lot of good. All right. I gave you the nitty-gritty on the math prerequisites; now I want to say a little bit about why you need math to do game theory, why we use math in social science at all. I think sometimes there's a suspicion in students' minds that there's a conspiracy among professors (and probably the TAs are in on it too) to make simple things hard, and that math is a good way to do that. A couple of people are smirking like they've had that suspicion, or perhaps have a friend of a friend who has. I think that's actually exactly backwards: math is not a way of making an easy thing hard, it's a way of making a very hard thing a little bit easier. The very hard thing is overcoming some of the limitations in the way we all naturally think. The human mind is a wonderful thing. We naturally do many, many great things: we learn to control our bodies, we learn to use language, we learn to get along in social groups. It's wonderful. And we evolved to learn those things using a style of learning that is inductive: seeing patterns in the world, seeing something happen a couple of times and figuring out how to mimic the pattern. The way people learn language is the obvious example of something we learn inductively. It's the way of learning that comes naturally to us; we don't have to learn it. Inductive learning has a relationship to inductive reasoning, which is the idea of going from observing specific cases to making a general conclusion. So: I observe rain clouds in the west, and it rains in a couple of hours. I observe them again, and it rains in a couple of hours. By the third day I'm starting to see the pattern. That's inductive learning, and that's inductive reasoning: going from specific cases to a general statement. Nobody's going to beat the ability to do inductive learning out of you. It's yours, you'll have it for the rest of your life, and it's great. It has its limits, though. Inductive reasoning is efficient; it's the source of much of our creativity and much of our effectiveness. The problem with inductive reasoning is that sometimes we're wrong. We're limited by our experience.
Say I have a couple of bad interactions with somebody who has blonde hair and a cute turned-up nose, and I decide that everybody with blonde hair and a cute turned-up nose is bad and shouldn't be trusted. Not good, not correct; I'm wrong. There are lots of honest, upstanding, blonde-haired, cute-nosed people. Inductive reasoning, as we naturally come by it, has a flaw compared to deductive reasoning. Inductive reasoning comes naturally but is sometimes wrong; deductive reasoning is exactly the opposite. Deductive reasoning is hard, and we make mistakes if we're not careful, but if we don't make mistakes, we're not going to be wrong. Deductive logic is the standard by which we judge all statements, all human statements, all inferences: we're not wrong if it's done right. Moreover, we're in complete agreement about what is logically valid and what is not. With inductive reasoning we may differ very much about how we think the world works, because the specific things we're generalizing from are different. With deductive reasoning we start from general assumptions, from something we're all willing to accept, and we deduce (that's where the word comes from) specific conclusions. If we follow the rules of deductive reasoning, the relationship between our assumptions and our specific conclusions is not going to be wrong. Now, our assumptions might not be right; that's something we always have to be aware of. But if we're willing to accept the assumptions and we think deductively, our specific conclusions will be true. Deductive reasoning is the very hard thing that math makes a little bit easier. Deductive reasoning is not what comes naturally to us, and for that reason we've discovered ways of helping ourselves do it using symbols that we can look at on a page; that's where the variables and the algebraic expressions we're going to be using come in handy. The process of writing down the steps in the reasoning is another aspect of deductive reasoning, one that makes unquestionable the relationship between the assumptions and the specific conclusions along the way. So far I've given a pitch for why mathematics is generally helpful: not because it comes naturally to any of us (it doesn't), but because its specific advantage is that it offsets the limitations in what does come naturally to us; it offsets the purely logical limitations of reasoning inductively from specific cases. When we apply deductive reasoning to questions involving other human beings, especially questions involving politics, which many of us have very strong beliefs and very passionate feelings about, deductive reasoning has a few other advantages, because there are some other aspects of human reasoning that come naturally to virtually all of us; they're not necessarily part of inductive reasoning, but they are, I'm afraid, part of human nature. (Let me pick up a cruddy marker here.) One is a tendency toward wishful thinking. It's another part of human nature, and it has its advantages. Some psychologists have done studies comparing levels of wishful thinking across people, and the people who can evaluate things about themselves most realistically tend to be clinically depressed. Our tendency toward wishful thinking gets us out of bed in the morning; it motivates us, it keeps us going; there's a lot of good to it. Sometimes, however, when we're trying to strategize, for example when we're trying to figure out what to do in a political situation, if we can back off from our tendency toward wishful thinking,
step back, and see the world just as it is, not colored by how we would like it to be, that can be an advantage. The mathematical symbols, the abstraction we're going to learn in game theory, are a way to distance yourself from a problem. Where, if the story is told in terms of Republicans and Democrats and hot-button political issues, you might have trouble keeping straight how the world really is from how you would really, really like it to be, if we're instead talking about team X and team Y in a more abstract version of the same problem, the strategic elements can often be clearer. So again, the mathematics we're using here helps us offset a part of human nature that I don't want to malign in any way, but that has its limitations. The other limitation, which is actually more intimately related to the idea of induction, is a tendency to stereotype. When we're thinking the way that just comes naturally to us, we're remembering specific things that have happened to us and generalizing from them without being aware that we're doing so. It's very easy to start stereotyping; it's very easy to think you know more than you do, based on sometimes a very small number of specific cases. The deductive framework we're going to learn in this class, the deductive framework that is the basis for game theory, forces us to be clear about what our assumptions are. Now, those assumptions may in fact be based on stereotypes, but the fact that we have to say what they are makes it more likely that we ourselves realize an assumption is a stereotype, or that somebody else can call us on it. So the bottom-line motivation, the reason why you guys, as people who are generally interested in politics, might benefit from taking a class that takes interesting flesh-and-blood politics and turns it into problems with variables to solve, is that this process is going to complement your natural intuition and your natural communication skills; it's going to give you a different perspective. For most people... let me not even say that; for some people it won't come naturally. I was going to say "for most people it won't come naturally," and in doing so I guess I would be illustrating this very limitation of inductive reasoning: studying strategy does not come naturally to me, and I think I was generalizing from that. I have very much benefited from learning game theory, because I have to say my strategic intuition is not very good; you guys could win a lot of money playing poker with me. You won't get the chance, but it could happen. Some of you may be like me, and you may find that the strategic stuff you see people doing comes clear in a really nice way when we step back and analyze it more abstractly. That might happen to some of you. I've also had students in this class, which I've now taught for almost ten years, who are actually very good intuitive strategists, and many of them have really enjoyed the ability to stand outside themselves and see how what they do naturally (and being natural political types, I'm sure there are some of you in this class) is part of a general system, and moreover how other people do it too. Some of you may in fact have both experiences in this class. When we're going through an example, you may sometimes have the experience of thinking: this is so far down the rabbit hole, it just doesn't make sense; where are we going
with this? Okay, so that's one perspective. The other perspective is: this is so obvious, how could it be otherwise? To me, that is a symptom of what I've been saying so far: the artificial, abstract way of studying strategy we're going to learn in this class complements our natural way of thinking. Sometimes it does so by creating aha moments, where things that hadn't seemed similar to you all of a sudden fall into a broad category and you see how they all share a common logic. Other times you will encounter cases where your natural, intuitive way of approaching a problem, perhaps based on wishful thinking or stereotyping (we all do it), isn't actually right, and when you solve out the game you think: oh, I should act differently. Or maybe not; we're getting ahead of ourselves here. But I hope you have both of those moments in this class. I hope you have moments where you think, that is so backward from the way I thought it would be, and then you can go through and see how the backward conclusion actually does make sense. And I hope you also have the other kind of aha moment: oh yeah, I always knew that's true, and now I understand why. So that's a big preview, before we jump in and I start to deliver on these promises. At this point I do want to go through the syllabus. I'm seeing them in the back row; if you came in late and didn't get a syllabus, I only have the one up here, but they are up on the course website. There are some back there, actually; Adam, do you mind grabbing those? Raise your hand if you need a syllabus. We have Adam Fowler, one of our TAs, helping us out here. Okay: the course website. My course websites get really full. I post lecture notes on them; everything I hand out in class I will post; I will post homeworks, answers, and extra problems. So this is a course where you should bookmark the course website and get used to checking it. Past years have made very, very good use of the discussion board on the class website. If you have a question about something that happened in class, something on a homework, or something your TA said in section, then rather than just emailing us privately, please post it on the discussion board. The TAs and I check the discussion board frequently. One reason to post there is that we'll post our answers, and everybody who has a similar question can benefit. The other thing that happens sometimes, and I love it when it does, is that one student will post a question and other students will post the answer, and they'll be right, or they'll be partly right and work their way to a right answer, and that is so good for everybody involved. That kind of processing, responding to somebody else's question, thinking about whether another student is getting it right: there is no substitute for that kind of active participation, and it's really going to help you internalize the skills we're learning in this class. So use the discussion board, and check the course website. This year, for the first time, the course is going to be webcast. You've probably noticed our friendly cameraman back there, who's going to keep me in line if I do anything I'm not supposed to do, I hope. The podcast of the course will appear within a day or so on the BruinCast website, and I'll put a link to that on the main
course page as well. I was walking over to class with Barry O'Neill, somebody else who teaches Poli Sci 30, and I told him we were going to do webcasting, and he said, oh, wow. And then he said, well, aren't you worried the students won't come? It's something I had thought of; it's one of the things everybody always says when professors talk about whether you should webcast your class. And then the conversation goes a couple of ways. One question is: is it okay if they don't come? I guess so, if you think it's okay. I wouldn't advise it, though, in part (I'm speaking from induction here) because my experience is that even trying to learn something from a good video is not the same as being in the room and being part of the process. I say that from my own experience, because I'm a not-very-good amateur musician and an avid consumer of instructional videos. I put them in my VCR and try to do what the guy is doing, and you get something from it, but it's not the same as going to a workshop at McCabe's or something, where he's actually there. I'm not a psychologist, and I don't quite understand why that is, but I wouldn't advise relying only on the BruinCast videos. They are there if you have to miss class, and they're certainly there if, in the rush at the end of class, when I sometimes talk really fast, you don't get everything, so you can check, so you can replay. So that's one way the conversation goes: we wonder whether the students will still come, and then we wonder what would happen if they didn't, and whether it would be okay; maybe it would. The other way the conversation goes, when somebody who has actually done webcasting is in the room, is that students usually do still come. So I would take this as evidence that you guys feel the way I do: that there is something about being in the room, especially in a class like this, where you're not learning facts, you're not really learning a body of knowledge; you're learning a style of analysis, you're expanding your way of thinking. I'm trying to teach you a skill, and I will be using language metaphors a lot: I'm going to be teaching you a language that you can use to analyze strategic problems and to communicate your analysis of strategic problems. When you're trying to learn a skill like this, again, I don't know why, but it just seems to me that people do better being in the room, watching the human being do it. Another variation on this folk knowledge that I seem to believe in so strongly, even though I don't quite understand its basis: in some of my other classes I've transferred to PowerPoint, but in this one I don't do PowerPoint, because this is a very visual subject, and I think there's something about you guys watching me do the diagrams on the board (the same diagrams I want you to be able to do) in real time that gets lost if you just see the finished product. Okay, so I said I was going to go through the syllabus, and I didn't get past the link to the course website. Right: the beginning of the syllabus is a little bit about game theory and what I mean by strategy. As background, let me tell you a little bit about the history of this course. The number, Poli Sci 30, is actually older than the title, Politics and Strategy. When I started teaching at UCLA, in the fall of 1991, Poli Sci 30 was a very small course on the books. It was taught once a year; 30 or 40 students took it; it was called Introduction to Political Economy. And I
will, in a little while, maybe on Thursday, talk about political economy and how it's different from regular economics. The course was basically an introduction to mathematical modeling in political science, and it was not a bad course, but as I say, it attracted a small number of students. It was a survey of topics in political economy: some of it had to do with the politics of international trade and the politics of macroeconomic policy, some fairly traditional econ-type topics; there was some analysis of markets from a political perspective; and there was a little bit of game theory. Other people were teaching the course at that time (I wasn't one of them), and what they started noticing was that students really liked the game theory part. So the game theory part of the class got bigger and bigger, crowding out the other topics, until somebody who was teaching it before me, Mohan Penubarti, said at one point: forget the other political economy; I like game theory, the students like game theory, that's what I'm going to teach. When that happened, demand for the class just skyrocketed. That's when the class went from a little dinky, fringy class of 30 or 40 people to what it is now: 500 or so students take PS 30 every year, and I think more would if they could. Students seem to vote with their feet for the game theory class. Around that time I said, me too, I want to teach it, because teaching a game theory class sounded really fun to me. As I mentioned earlier, I think a lot of it has to do with the strength of the topic: strategy is interesting to people. Many people who are interested in politics are interested in the strategic aspect of it. Right now, many people who don't normally follow politics are following the primaries, because there is that strategic element: who's outsmarting whom, who's ahead, who's coming from behind, who thinks they're ahead but isn't really. A lot of people find that interesting, and I'm in the category of people who find it extremely interesting. So one advantage of having a dedicated game theory class, as opposed to a class of selected topics in political economy, is: why not just stick with the best topic? The other advantage is that when I was an undergraduate (and I guess it would probably still be true now), when I took a class I liked to feel like I had mastered something, like I had reached a plateau. That's my goal for you in this class. You're certainly not going to learn all the game theory there is to know; in many ways you're really just going to scratch the surface. But I want you to get to a plateau where, by the end of this term, I hope you're reading the newspaper differently. I hope that as you're going through it, especially reading about politics, you're seeing things from a different perspective; and not just the newspaper, either. The idea in game theory, the idea motivating this abstract, deductive approach to politics, is that there really is a common logic to all kinds of politics. The really high-level Karl Rove versus James Carville super-strategy that goes on between professional politicians isn't that different from the down-to-earth dorm-room politics, family politics, and office politics that we all engage in. That idea is very much at the core of game theory, and you'll see it in the examples we do. Some of them will be about elections, foreign relations, wars: things we traditionally talk about in terms of political science.
Others will be about roommates, families, neighbors: other situations where people want different things and are affected by each other's actions. What I just said there I'll say again, because it is the definition of a strategic situation. We're in a strategic situation if what you want is different from what I want in some way (there can be some overlap; our goals don't have to be completely different, but we have some difference in goals) and we can't independently control our own outcomes: what you do affects the things I care about, and what I do affects what you care about. When we're in that situation, where you're doing things that affect what I care about, then if I care a lot, I'm going to strategize. I'm going to try to get you to do things so the outcomes will be good for me. I'm going to think about ways to change your behavior, to anticipate what you might do, to put you in a position where the best thing you can do is the right thing for me. What makes it interesting is that you're going to be doing the same thing to me, and I know you're going to be doing it, and you know I'm going to be doing it. That's the kind of situation game theory can help us disentangle: this kind of infinite regress, where I'm trying to outsmart you, and you're trying to outsmart me knowing that I'm trying to outsmart you, and it just goes on and on. When you think about it that way it seems a little bit strange, and it will seem that way at times, too, but we can manage some of that infinite regress.
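To make the interdependence part of that definition concrete, here is a minimal sketch in Python. The game, the action names, and the payoff numbers are all invented for illustration; the point is just that neither player's best action can be chosen without knowing, or guessing, what the other will do.

```python
# Hypothetical 2x2 game; payoffs invented purely for illustration.
# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("tough", "tough"): (0, 0),
    ("tough", "soft"):  (4, 1),
    ("soft",  "tough"): (1, 4),
    ("soft",  "soft"):  (2, 2),
}
ACTIONS = ("tough", "soft")

def best_reply_p1(a2):
    """Player 1's best action, holding player 2's action fixed."""
    return max(ACTIONS, key=lambda a1: payoffs[(a1, a2)][0])

def best_reply_p2(a1):
    """Player 2's best action, holding player 1's action fixed."""
    return max(ACTIONS, key=lambda a2: payoffs[(a1, a2)][1])

for a2 in ACTIONS:
    print(f"if player 2 plays {a2!r}, player 1's best reply is {best_reply_p1(a2)!r}")
for a1 in ACTIONS:
    print(f"if player 1 plays {a1!r}, player 2's best reply is {best_reply_p2(a1)!r}")
```

Running it shows each player's best reply flipping with the other player's choice, which is the "I'm trying to outsmart you, you're trying to outsmart me" loop in miniature.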
Okay, so it's a game theory class. Lectures are here from 11:00 to 12:15. There's a book. I may be the only one of the three people currently teaching Poli Sci 30 who uses a book; maybe I shouldn't tell you that (see, I told you I wasn't very good intuitively at strategy). I strongly recommend the book, even though I know that if you go on Bruinwalk and talk to other students, you will see people who say: oh yeah, you don't need to read the book; the book is confusing; she says everything that's in the book; just go to section. It's true, people say that; I don't think it's the A students saying it, though. It is true that there is nothing you need to know in this class that I won't do in lecture, and, with I think two exceptions, nothing you need to know that is not in the book too; it's really all meant to be the same thing. But remember what I said a little while ago: what I'm trying to teach you here is not a set of facts but a technique, a way of doing things, and when you're trying to learn a technique it is very, very helpful to have more than one explanation. You're going to have that: you'll have me, you'll have your TAs in discussion section, and we will naturally do the same basic processes a little bit differently, and Dixit and Skeath do the same basic processes a little bit differently in the book. Initially that's annoying, right? If I'm drawing a game tree vertically and they do it all left to right in the book, well, I hope that doesn't hang you up, but there will be other differences you might find annoying. The process of internalizing the technique, though, works better when you can see the differences and the commonalities across the different presentations. So do get the book. I'm sorry it's expensive; it's a very, very good book, though, and it will help you very much. When you get the book, go through it and notice how they do things. They'll sometimes use different words, and sometimes the processes will be different. Notice what can differ in the processes and what remains the same, because once you've mastered that, once you've figured out what the core things are, what constitutes doing it right versus making a mistake versus mere differences in style, then you're going to be able to use game theory yourself. Then you'll be empowered to read the newspaper better, psych out your roommates better, and do well on the final. So I strongly recommend the book. The book has a lot of problems in it, and I will give you guidance closer to the midterm about which problems will specifically help you. I have things to say about the middle of the syllabus, but while we're talking about the book, let me direct your attention to the last page of the syllabus. I've gone to some effort there to give you suggestions for using the book efficiently. The book is dense. We're not going to cover the whole thing, and it's not that huge a book, but what we do cover you really, really need to know, and what we do cover is not going to be facts; like I say, it's going to be ideas that you will internalize and, I hope, take with you for the rest of your life. The first topic, the one we're going to start on Thursday, is sequential games, which are mainly covered in chapter 3 of the book. We're going to spend a long time on chapter 3, so I do recommend that you read chapters 1 and 2 before Thursday. Chapters 1 and 2 are not representative of the book: you can read them propped up on the couch late at night; they're easy, chatty reading; do that, that's fine. The rest of the book, beginning with chapter 3, is material you'll want to read at a desk, with pencil and paper, working through the logic as you go. Once we get to chapter 3, the pace slows down; well, the pace in pages slows down; in terms of material it's actually going to ramp up. For each of the main topics we're covering, I've tried to give you a sense of which chapters are relevant, and in some cases, for example under mixed strategy equilibrium, there are topics we're simply not going to cover; I think they're a little too advanced for an introductory course, and I've noted that. I'll try to remind you, when it comes time to study for the midterm and the final, that there are instructions there on how to study and how to use the book. Last thing on the book: there's a first edition of this book. It's yellow and blue, and it's really different. Don't get the first edition. The second edition has been out long enough that you may well be able to find a used copy; more power to you if you do. But don't get the first edition: the chapter numbers are different, and it's just not the same book. Okay, bottom of the first page, back to the front of the syllabus: the grades. The grading is based, first, on participation in the discussion section. I've given the TAs permission, encouraged them, actually, to set their standards for participation grades based on their own teaching styles. So the TAs will have different rules for participation in section, and that's good: the TAs are different people, and they're in the best position to decide what kind of incentive system for discussion section fits their teaching style. The discussion sections this week will be the place to find out about the participation part of the grade. You will have five, sorry, six homeworks, five of which count.
All right, the first homework you'll get a week from today. One way to think about the homeworks is that they are translation exercises. They'll all have the form of a little story about some kind of political situation, followed by a bunch of questions about it. The questions will vary in the details, but they're all of the form: translate the scenario into a game, solve the game, then figure out what the solution of the game tells you about the real-world political problem. So I think about it as: translate the real-world problem into game theory; do your game theory thing and find the equilibrium (we're going to get to all of that on Thursday already); then interpret your solution. Take something that's specified in ordinary language, with its imprecision and its vestiges of inductive logic and wishful thinking, all the stuff we're plagued with as human beings; translate it into a deductive format; do some analysis; figure out what you predict will happen; then interpret what you find in the game. The homeworks are graded on effort. This is a class that people have different learning curves for; I don't quite get why. Some people get it pretty easily right away; some people have a couple of weeks of really being lost, and then they get it; some people don't get it, but I almost always think that's because they didn't read the book or didn't come to class, one of those things, and there aren't that many who don't. The reason I'm bringing this up, and it might seem a little impolite, is that there is this spell around the first two homework problems especially, where a good number of people who will end up doing well in the class can really feel at sea: it all seems to make sense when she does it on the board, but then I can't do it by myself. It's natural, and you just have to bust through that hard stretch. That's one of the reasons I grade the homeworks on effort: you've got to try, you've got to be willing to try, and you can't be afraid to fail on the homework. The important thing is just to work hard on it, so your homework grades will simply indicate whether your TA thinks you're doing a decent job or not. The other thing about the homework is that you're allowed to skip one. This is not exactly meant to give you a free pass; what it's meant to accommodate is the possibility that you'll have an emergency: you'll be out of town, you'll get sick, you'll have a family problem. I hope nobody has any serious problems this quarter, but if you do, you have that homework you can skip. It's meant to be an insurance policy. What this means is: don't decide to skip the first homework; you want to keep that option available. The other thing it means is that you don't get two. If you really have such serious problems that you can't turn in one homework and then can't turn in another, you're probably going to have to drop the class. So don't come to me (I'm emphasizing this now) and say, well, I know you let me drop one of the homeworks, and I dropped one already, but the reason I have to drop this other one is a really, really good reason. No: the really, really good reason is why you dropped the first homework. Many people every year do all six homeworks, and that's not a bad thing, because what the homeworks really do is prepare you for the exams.
I just gave you the little algorithm I use for all the problems: translate the story into game theory, solve the game, translate it back into something about politics. The exam questions have the same format. They're maybe not as tricky as the homework problems, because you're working under time pressure, but they are exactly the same format, and the way to get good at them is practice. So, five out of six homeworks there, and the rest of the grade is these in-class exams, whose format is the same as the homework problems. Maybe one other warning about the class: the midterm will certainly cover everything in sequential games; depending on how fast we go, it sometimes has a little bit of simultaneous games on it, or not. Something I don't like about the course, but don't see any way around, is that the material we cover before the midterm is easier than the material we cover after it. That puts you in kind of a tough position: grades on the midterm are usually higher, and more compressed at the top end, than grades on the final exam. I don't know what else to do; it doesn't make sense to teach the hard stuff first, and I couldn't anyway, because the material we do early in the class is the basis of all that harder stuff. So I'm just putting it out there as a warning: you don't want to back off after the midterm if it seemed easy. Things get a little trickier afterwards, and that's the nature of a very cumulative subject, which this is. On the system of dropping one homework, the other thing to emphasize is that it means no late homeworks, nothing like that. The whole idea of dropping one homework is to simplify that kind of bureaucratic work for your TA, so they don't have to keep track of who has a valid excuse to be late. Just don't be late: if you're late with a homework, that's the one we drop; don't be late again. Homeworks are due in class; the due date will always be posted on the homework, the Tuesday after it's handed out, so it's a Tuesday-to-Tuesday homework schedule. On how the grading works: your own TA will grade your discussion section, your homeworks, and your midterm. One point about that is that the TAs may use different grading scales. This has really never been a big problem in the past, except that people really worry about it: oh, my friend's TA is grading much easier than mine, it's so terrible. What I do at the end of the quarter is take the parts of your grade where different TAs are grading different students, and I do just standard t-tests, the ones you learned about in your statistics classes. I see whether the null hypothesis that the scores are all from the same distribution can be rejected, and if it can, I renormalize. So if the TAs end up grading differently, I put them on a common scale at the end. Most of the time they're not grading differently: there may be small differences, but they are not statistically significant. The final exam will be graded horizontally, as they say: one TA will grade question one for the whole class, another will grade question two, et cetera, so that issue doesn't arise with the final exam.
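For those who haven't seen this in a statistics class yet, here is a rough sketch of the kind of check described above. The scores are invented, and the 5% cutoff and the z-score renormalization are my assumptions about the details; `scipy.stats.ttest_ind` is the standard two-sample t-test.

```python
# A sketch of the renormalization idea (invented scores, not real data):
# test whether two TAs' midterm grades look like draws from the same
# distribution; if that null is rejected, put both sections on a common
# scale by standardizing within section and mapping onto the pooled scale.
import numpy as np
from scipy import stats

ta_a = np.array([78.0, 85, 90, 72, 88, 81, 79, 93])  # hypothetical section A
ta_b = np.array([68.0, 74, 71, 80, 66, 77, 73, 70])  # hypothetical section B

t_stat, p_value = stats.ttest_ind(ta_a, ta_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

if p_value < 0.05:  # reject "same distribution" at the usual 5% level
    pooled = np.concatenate([ta_a, ta_b])
    m, s = pooled.mean(), pooled.std()
    # z-score within each section, then rescale to the pooled mean and sd
    renorm_a = (ta_a - ta_a.mean()) / ta_a.std() * s + m
    renorm_b = (ta_b - ta_b.mean()) / ta_b.std() * s + m
    print("renormalized B:", np.round(renorm_b, 1))
else:
    print("difference not statistically significant; leave grades as-is")
```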
Grade disputes would really only arise on the midterm, and they have not arisen much recently. If, on your midterm, you see your grade and have one of those natural reactions, which may involve wishful thinking, stereotyping, and a good bit of emotion, that it just wasn't the grade you wanted, expected, or think you deserve, the process for dealing with that is as follows. You should wait 24 hours before disputing the grade; you need to see whether it's just disappointment or there really was a problem with the way the exam was graded. But don't wait more than a week; you have to do it while it's fresh. Write up the reason you think your exam was graded wrong, and it has to be a valid reason: there has to be something where you can say, look, this is what I wrote, this is right, it's not wrong. You cannot say things like (or expect to be taken seriously saying) "I always get better grades" or "I think I should have had more partial credit." As a student in an introductory class, you're just not in a position to have very good judgment about that. But if you can say something factual about the way the exam was graded that could persuade somebody to change your grade, put it in writing and give it to your TA. You cannot pester your TA in person. Most of you wouldn't do that, but the few of you who would could take down the whole class. So, no disputing the grade in person: if your TA thinks you're arguing with them about the grade, you will have effectively given up your right to dispute it. If you hand the written grade dispute to your TA, they'll respond in writing, and if you're still not happy, you can give it to me. I will regrade the whole exam, though, and remember: the TAs see the other exams, so they know how the class as a whole is doing; I don't. Everything is graded by objective standards, so if you think you have something that is objectively right and was marked wrong, bring it to me and I'll take a look. But I'll regrade the whole exam, and if I think your grade should go down, I'll mark it down; I have marked grades down in the past, and that might be why these disputes haven't been coming up recently. So the things to remember: only dispute your grade in writing, only dispute it based on correct statements about game theory (not judgment calls), and if you want to bring it to me, accept the idea that your grade could go down. Okay, last thing, a happier topic: study groups. Just as I was saying about the discussion board, that engaging each other's questions will help you learn to be fluent in game theory, study groups are another very, very good way to get fluent: learning to talk through "this is what's really going on in the problem, this is how I would represent it in a game; no, maybe I would represent it this way." That kind of give and take in a study group is really, really helpful. So I encourage you to work in study groups, even study groups for going over the examples from class. Pretty quickly, even by the end of next week, you'll get a feel for how you could take an example I do in class and make changes to see how the answer changes; that's a really good group activity, and the process of doing it with other people is very helpful. What you don't want, though, is to become dependent on your group. The homeworks are a pretty small part of your grade, and what they're really intended to do is prepare you for the midterm and the final; if you're
not prepared to do the midterm and the final without your study group, there's going to be a problem. So here's my advice on how to avoid that pitfall: make sure that you, and everybody in your study group, bracket the group sessions with solo sessions, both before and afterwards. I did this throughout college and grad school, and it really is a good norm for a study group. The idea is that before the group meets, everybody will have gone through the problem, or the set of lecture notes, on their own, and will have questions or the start of a solution. Then you meet, and the group works on it together; afterwards, everybody writes up the solutions alone. If you find yourself at the end of a study group madly copying down something somebody has done on a board or on their paper because you're under time pressure, you're over a line there. You need to write up your homework problems yourself. So it's a strong norm; remind your study group peers of it: everyone should write up their final solutions on their own. That's how you'll learn it in a way you'll be able to use on the final exam. Okay, that was a lot of logistics. Now I'm going to say a little bit about how the study of strategy fits into social science in general, and then on Thursday we'll jump right in with game theory. I said earlier that the course used to be called political economy. Political economy is a very old term, actually older than either "economics" or "political science," and as you might guess, having been around a long time, it has a lot of different meanings and ideas associated with it. When people say modern political economy, they mean the kind of political economy we'd be doing in this class, the kind in which game theory looms large. What they basically mean is a way of understanding the world that uses the methods of economics, especially the methods of microeconomics (which I'll say more about in a minute), to understand behavior that doesn't necessarily take place in markets. So if you want to know the difference between political economy and traditional (some would say neoclassical) economics, it's the emphasis. On one side are situations that are markets: buying and selling with lots of buyers and sellers, investing, producing, those kinds of things. On the other side are activities that don't take place in markets: conducting elections, voting in legislatures, bargaining, threatening, fighting. All of those can be understood with the tools of economics, and when we apply those tools outside of markets, we're getting into the range of topics people would more likely call political economy. When I first started teaching this course, I would say things like "you can use economic methods to study things besides markets," and people would give me this weird look. I think that's less true now. Probably some of you have read the book Freakonomics, or check that blog sometimes; the idea that you can use the economic method (which we'll talk about in a minute) to understand all sorts of phenomena, not just in politics but in every aspect of life, has bubbled out and become part of the norms of our age, so it doesn't seem so strange anymore.
To me, the key issue in political economy, and in the approach that characterizes economics, is the emphasis on conflict. If economists are drawn to situations where we see markets, you can actually think of markets as one way to resolve conflict, conflict over who gets what: we can settle it in a market where contracts are enforced. Political economy is a somewhat broader area where our attention is drawn to conflict of any type. Remember, I said a little while ago that a strategic situation is one where people want different things and both their actions affect the outcome; there's a sort of conflict implicit in that. So the ground assumption that's going to permeate this class, one I won't repeat week after week even though I'll be talking about many other assumptions, is that conflict is inevitable. Maybe "inevitable" is too strong, but conflict is not going to go away. The fact that I want something different from what you want is not going to change; there's no super-mediator who's going to sit down with us and get us to the same point. You want something different from me. I want to keep my banana plantation (sorry about that, I just slapped the microphone; that's going to sound ugly); I want to keep my banana plantation, and you want land reform so you can grow food of your own. You want to conduct business in your native language; I want to conduct it in mine. You want to let your hog farm waste go into the stream; I want to run a river-rafting service on the same stream. There's just some conflict that's not going to go away. Nothing can be done to make the river-raft operator want pig-farm waste in the stream, and nothing is going to make the pig farmer want to have to do something else with his pig waste; he's been doing it for centuries, and the stream is right there. So the approach of political economy takes conflict as given. In economics broadly understood, when I talk about the method of economics, the method of microeconomics, the way this same idea gets expressed is that people have fixed preferences. Synonyms could be goals, or values: people want what they want, and you're not going to change their minds. This is very different from the other big dominant force in social science, which is psychology. The way I think of social science as a broad enterprise is that psychology and economics are really the two big behemoths. If you've taken either psychology or economics classes, I think you'll know what I mean: they have big thick textbooks, they permeate the other disciplines, and the smaller disciplines, political science and sociology in particular, tend to borrow a lot from both. Now, the idea that people have fixed goals and preferences, that they stay the same, that they're stable, that the environment doesn't change them: psychologists would disagree with that very much. A psychologist approaches situations of conflict very differently from an economist; psychologists tend to be much more interested in changing the preferences that lead to the conflict. That's not what we're going to do in this class, but I strongly urge you, sometime while you're at UCLA, to take a psychology class, or a class that uses psychological approaches, because I do think there are some conflictual situations where changing preferences is a way out, a way forward, something that should be thought about. A kind of obvious example, maybe less obvious to you guys, but,
because I'm older, obvious to me, would be the attitude people in America have toward drunk driving. When I was a kid, it was something people joked about, and it wasn't all that taboo, at least in my hometown. People did it a lot, even though there were deaths and things like that; it was just considered kind of inevitable. Too bad: we have cars, people have to drink, and then they have to go places, and that's how it was. And it's not good at all. Somehow we've gotten to where we are now, in 2008, where I say that and you laugh a little nervously, and I'm almost a little sheepish about bringing up an example with such horrible consequences. Drunk driving kills people; it's an awful thing. And somehow people's values and goals and even preferences about it have really, really changed. Economics is not going to explain that. But there are other examples, like the ones I gave you a while ago about conflict over resources and conflict over the type of society we live in, where preferences really don't seem to change, and where there's no good idea out there for raising consciousness the way there was in the drunk-driving case. Those are situations where, even if it's not fully realistic, we'll probably understand the strategic situation better by treating goals and preferences as fixed. Another way to say that: if we just focus on changing the values and preferences of the people we're in conflict with, we're probably going to lose; we'd do better to try to outsmart them. Those are the situations we're going to focus on. So again, I don't want you to think that we should always be strategic, that we should always assume we can't convince our opponents to join us, or even that doing so would be a non-strategic thing to do. Let me put it this way: sometimes trying to outsmart the opponent may not be the best thing, but it's what people are going to do. And that brings me to, I think, the last thing I want to say today: how this all fits into social science. For much of the class, for many of our examples, I'll be expressing game theory as advice, something you can apply in your life, a way to get your roommate to pick up his socks (that particular example I have some good thoughts on). That's a sort of normative way of looking at game theory, and game theory is used that way: in MBA programs there are classes in management strategy and marketing strategy that try to apply game theory to particular problems people will face, and military situations are another example where strategy really does get applied to problems of the form "how do I outsmart my enemy?" I'm going to sometimes explain concepts that way, because you tend to perk up when I do; it seems a little more interesting. But what I hope you won't lose sight of is that game theory is also a way to do social science, a way to understand how the world is. Remember (I've erased it now) my inductive-versus-deductive juxtaposition: one of the problems I described with our natural inductive way of thinking is that we see the world as we'd like it to be, not as it really is. What we're going to learn in game theory, and what helps us distinguish whether the psychological model or the economic model better fits a situation, is this: we're
going to start with the assumption that people do have fixed preferences, and we're going to see how they act in the model. When we see how they act in the model, we can then go out and see whether they really act that way in the real world. If they do, that gives us a sense that maybe we understand what the stakes are, and maybe we understand how to change the situation to get to a better outcome. If we use this method, set up a game assuming fixed preferences, and it doesn't match the real world, that's part of the scientific method, right? You see if your theory matches reality, and if it doesn't, you change your theory. So we'll do that in this class, too. Changing the theory can sometimes mean staying within the domain of economics: maybe preferences are fixed, but we just didn't get them exactly right; maybe people want something different from what we assumed, or have options different from what we assumed. That's one possibility. A stronger one would be that this is an area where preferences aren't fixed, where a psychological model would be more helpful. So that's it for today. Thursday I'm going to jump right in and do a game, and some of this, I think, will become a little more concrete.
Political_Science_30_Politics_and_Strategy_Lec_14_UCLA.txt
Okay, this is a more ambitious outline than I think we're going to get to today, but it is maybe a good point, here at the beginning of eighth week, to look ahead, touch base with where we're at, and note where I think we're going for the rest of the quarter. Realistically, most of today will be spent on topics 1 and 2. I have one more thing I want to say to wrap up our discussion of mixed strategies and mixed strategy equilibria, and then a fairly small point about representing sequential play, sequential games, in normal form: that is, taking a game that has sequence but representing it as a matrix of payoffs. Most of the time, as I've said, you won't want to do this, but I am going to show you how, because it will help me make clear a point I've been hinting at all along: Nash equilibria and rollback equilibria are similar, they're intimately related, but they're not exactly the same thing. The last major topic we'll cover in the class is repeated games, and I hope we start that Thursday; it'll take us up to the end of the course. Where we are in the book: we've been doing mixed strategy equilibria, which Dixit and Skeath spread over chapters 7 and 8, so you should have gone through chapters 7 and 8 at this point. If you look at the syllabus, there are some sections, in chapter 7 in particular, on topics we're just not covering, so I've noted which sections are not relevant there. This bit we're doing at the end of class today, sequential play in normal form, is chapter 6, so I've gone slightly out of syllabus order, out of book order, here. Most of chapter 6 is easy; of the chapters I was going through, it's one of the easiest, and it doesn't introduce a lot of new material. So if you're feeling like you'd like some reinforcement for the topics covered in chapters 3 and 4, chapter 6 ties a lot of it together. There are some sections in chapter 6 that I won't go through but that will nonetheless help you solidify what you've learned; the only part of chapter 6 I am going to do in class is the topic of sequential play in normal form. So if you were thinking, gosh, it seems like she hasn't done much of chapter 6, you're right so far; what I am going to do from it happens today. When we get to repeated games (I'm losing touch a little with my memory of the chapter numbers, but I'm pretty sure that's chapter 11), our treatment will be exclusively about the repeated prisoner's dilemma, but you'll see that even focusing on that one game, there's a lot for us to do. A final preliminary remark: I have to change my office hours again today, but not as much as last week. Today I'm moving my office hours up one half hour; they'll be back to normal on Thursday, and then I think my Tuesday office hours will also settle at 1:00 to 2:00 instead of 1:30 to 2:30. I'm going to have to leave at 2:00, but I'll make sure I'm there by 1:00, so come early if you need to see me, and if you can't, send me an email or come on Thursday. All right, moving that up and this down. The first thing I'm going to do is put our familiar version of Cops and Robbers up here. I'll do it quickly, but we need it up here because we're going to compare it to some other things. So this is Cops and
Robbers version 1, the version we worked on all last week. We have the cops: they can be on the beat, or they can be in the donut shop. We have the robbers: they can be working, or they can be at home. The payoffs, listing the cops' payoff first in each cell (watch me on this; I've messed it up in recent memory, so make sure I'm getting these right): beat and work gives 2 and negative 5; beat and home gives 1 and 0; donut shop and work gives negative 5 and 5; donut shop and home gives 5 and 0. As we discussed at great length last week, this game has no equilibria in pure strategies: there's no single cell where neither player would have regrets. It has a mixed strategy equilibrium in which the cops are on the beat with probability one half and in the donut shop with probability one half, and the robbers are at work with probability 4/11 (I'm using the same variables as last week; this is our review) and stay home with probability 7/11. So the probability that we're in this cell, beat and work, is 4/22; same here, donut shop and work, 4/22; and the probability we end up in either of the home cells is 7/22 each. The last thing I want to do is use this game, and then change it, to ask what it takes in this game to reduce the amount of crime. Maybe one thing to do before we switch to the new game is to ask ourselves: in this game, in this equilibrium, what is the probability that crime is committed? In equilibrium, what fraction of the time should we see crime? Eight out of 22. Yeah, that's right: what Kenyatta said is that you add the probabilities in these two squares, the two squares that correspond to the robbers committing crimes; it happens here and it happens there. I paused because I had just been looking at the number up here: it's 8/22, which reduces to 4/11. That is the probability that crime occurs. This is the kind of question that falls into the category of interpreting a game; it's the kind of question we ask ourselves after we've found the equilibrium, because we need the equilibrium probabilities to calculate it. So eight out of every 22 days, four out of every 11 days, we're going to see crime. Sometimes the crime will be detected, sometimes it will be punished, sometimes it will be gotten away with, but right now we're just thinking about crime in general: that's the probability that it happens in this game.
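Here is a quick check of that equilibrium in Python, using sympy to solve the two indifference conditions. The payoff numbers are the ones reconstructed from the board above, and the logic is exactly last week's calculation: each player's mixing probability is pinned down by making the other player indifferent.

```python
# Version 1 of Cops and Robbers, payoffs as on the board (cops first):
# Beat/Work (2, -5), Beat/Home (1, 0), Donut/Work (-5, 5), Donut/Home (5, 0).
import sympy as sp

p, q = sp.symbols("p q")  # p = Pr(cops on Beat), q = Pr(robbers at Work)

# Making the cops indifferent between Beat and Donut pins down the robbers' q:
eu_beat = 2*q + 1*(1 - q)        # simplifies to q + 1
eu_donut = -5*q + 5*(1 - q)      # simplifies to 5 - 10q
q_star = sp.solve(sp.Eq(eu_beat, eu_donut), q)[0]

# Making the robbers indifferent between Work and Home pins down the cops' p:
eu_work = -5*p + 5*(1 - p)       # simplifies to 5 - 10p
eu_home = 0                      # robbers get 0 at home either way
p_star = sp.solve(sp.Eq(eu_work, eu_home), p)[0]

print(f"q* = {q_star} (robbers at work)")    # 4/11
print(f"p* = {p_star} (cops on the beat)")   # 1/2
print(f"Pr(crime) = {q_star}, Pr(crime caught) = {p_star * q_star}")  # 4/11, 2/11
```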
As before, we find Q by finding the value of Q that makes the cops indifferent between their pure strategies, and, in the interest of completeness, let me put the cops' two strategies up here again: the cops can be on the beat or at donuts; the robbers at work or at home. Doing this the way I did it before, to find the robbers' equilibrium probability I look at the cops' expected utilities. The expected utility to the cops of being on the beat: if I'm the cops and I'm on the beat, my expected payoff is 2 (the payoff I get when the robbers are at work) times Q, the probability the robbers are at work, plus 1 (the payoff I get when the robbers are at home and I'm on the beat) times the probability they're at home. Cleaning that up as I did before: 2Q + 1 - Q, which is Q + 1. The expected utility to the cops of being in the donut shop, the expected utility of playing that strategy as a function of the robbers' probability, is -5 (my payoff if the robbers are at work) times Q, plus 5 (my payoff from being in the donut shop when the robbers are at home) times 1 - Q. That comes to 5 - 10Q. Is this looking familiar to you guys? Yeah. Before I say the punchline: it's the same numbers. And it should be the same numbers. If you go back and look at your notes from when we first solved version 1, we went through exactly this logic and wrote down exactly these equations, and we were right both times, because the robbers' mixing probability depends on the cops' payoffs, and I haven't changed the cops' payoffs. In terms of the cops' payoffs, version 2 is the same as version 1, so the probability of crime is still going to be 4/11. I've changed the robbers' punishment; I haven't changed anything to do with the cops. And the robbers' equilibrium probability doesn't depend on the robbers' payoffs. I emphasized that all last week because it's weird; it's the strange thing about mixed strategy Nash equilibria, and what I'm showing you here is that it's not just strange in an innocuous way: it gives us a counterintuitive result. When players are playing mixed strategies and you change one player's incentives, you do not change his probability of action; you change the other player's probabilities. So let's go through the other half of the Nash equilibrium calculation: the robbers' side. Let P equal the probability the cops are on the beat, same as before. P has to be the probability that makes the robbers indifferent between their two pure strategies. The expected utility to the robbers of being at work (now we've got a new number here; this is the number that's different between the two games) is -10, the payoff I get when I'm at work and the cops are on the beat, times P, the probability the cops are on the beat, plus 5, the payoff I get when I'm at work and the cops are in the donut shop, times 1 - P. Prettying that up, I get 5 - 15P. The expected utility to the robbers of being at home remains the easy case: I get 0 either way, so it equals 0. We set these two equal, just as before, because the only way the robbers will be willing to play a mixed strategy, the only way a mixed strategy is an equilibrium for the robbers, the only way they won't regret playing a mixed strategy, is if neither of their pure strategies is better than the other.
If these two expected utilities aren't equal, the robbers can do something better than flip a coin: if the expected utility of being at work is higher than the expected utility of staying home, and I'm the robbers and I flip a coin, I'm not in equilibrium, because I would have done better by picking the better choice. I'm only willing to play the mixed strategy when the two are equal. So let's set them equal: the expected utility to the robbers of being at work equals the expected utility to the robbers of being at home (just saying what I'm doing), and plugging in the numbers, the expected utility of work, 5 - 15P, has to equal zero. That gives P = 1/3. So the news gets worse. Increasing the robbers' punishment doesn't affect the amount of crime at all; it doesn't affect the probability of crime, and it makes the cops less likely to be doing their job. Over here the cops were on the beat with probability 1/2; now, in order for things to be in equilibrium, the cops are on the beat only one third of the time and spend two thirds of their time in the donut shop. The robbers do the same amount of working and staying home, so the total amount of crime is the same, and the amount of undetected crime, the crime that is gotten away with, is actually higher. Let's put the frequencies in the cells. What's the probability that they get away with crime? It's the probability that they commit a crime and the cops are in the donut shop: 8/33, which is indeed a bigger number than the 4/22 from before. The probability of crime that is caught by the cops is 4/33. I'll round out the calculations: 7/33 of the time the cops are driving around bored while the robbers are at home, and 14/33 of the time the cops are in the donut shop while the robbers are at home. Is that adding up correctly? Yes, it sums to 33/33. So that is a strange thing about this game, and it is true of any situation that fits the cops-and-robbers model, these situations with a unique mixed strategy equilibrium: you don't change a player's behavior by changing their payoffs. That's different from every other part of game theory. When there's only a mixed strategy equilibrium and you change one player's payoff, you change the other player's mixing probability. It varies with context, but in most contexts I can think of, changing one player's incentives causes the other player to change their behavior in a way that compensates for it. So policy changes that make sense in situations involving sequential strategic interaction, or situations with a pure strategy Nash equilibrium, can backfire when players are playing mixed strategies.
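If you want to audit all of that arithmetic at once, here is a minimal Python sketch (the code and names are mine, not the course's) that solves the indifference conditions for any 2x2 game with a unique, fully mixed equilibrium, using the cops-and-robbers payoffs as read off the board above:

from fractions import Fraction as F

def mix_2x2(a, b):
    # a[i][j] is the row player's payoff and b[i][j] the column player's,
    # for row i and column j. Assumes a unique, fully mixed equilibrium.
    # The column player's indifference condition pins down p = P(row 0);
    # the row player's indifference condition pins down q = P(column 0).
    p = F(b[1][1] - b[1][0], b[0][0] - b[1][0] - b[0][1] + b[1][1])
    q = F(a[1][1] - a[0][1], a[0][0] - a[0][1] - a[1][0] + a[1][1])
    return p, q

# Rows: cops (0 = on the beat, 1 = donut shop).
# Columns: robbers (0 = at work, 1 = at home).
cops       = [[2, 1], [-5, 5]]
robbers_v1 = [[-5, 0], [5, 0]]
robbers_v2 = [[-10, 0], [5, 0]]   # version 2: doubled punishment

for robbers in (robbers_v1, robbers_v2):
    p, q = mix_2x2(cops, robbers)
    print(p, q, q * (1 - p))   # cops on beat, crime rate, undetected crime
# version 1: 1/2 4/11 2/11   (2/11 is the 4/22 from the board)
# version 2: 1/3 4/11 8/33   (same crime rate, more of it undetected)

The doubled punishment shows up only in the cops' column: the crime rate stays at 4/11, while undetected crime rises from 4/22 to 8/33.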
So if I want to deter crime, what should I do? Yeah, the cops' utility of being in the donut shop is exactly the key. You're thinking that if we want to change the robbers' behavior, we need to change the cops' payoffs, and we could change either of the cops' payoffs here. What's the example I have in the notes that I'm going to post? One thing you could do is give the cops bonuses for being on the beat; what I'm going to do instead is make it worse for the cops if they get caught in the donut shop. Supervisors are going to go around and monitor them. So, real quick, the last version: Cops and Robbers version 3. We're going to keep the robbers' payoffs from version 1 (not going for those really over-the-top punishments): -5 and 5, 0 and 0, from the same strategies. And we're going to leave the cops' payoffs from three of the possibilities the same as in version 1, but now, as I said a minute ago, if the cops are in the donut shop while the robbers go out and rob people, that's really, really bad for them: they're going to get fired. Maybe the waitress in the donut shop is wearing a wire; she's going to catch these cops, and that's that. So in this version, is this change going to affect the cops' behavior? I can't even ask it, because I just answered it: the change to the cops' payoffs is not going to affect their behavior. Go through the same argument we made up here: I'm changing the cops' payoffs, but the cops' decision of whether to be on the beat or in the donut shop doesn't depend on their own payoffs; it depends on the robbers' payoffs. So in version 3 you don't even have to work it out; you can just look at it. What is the probability that the cops are on the beat? One half; exactly right. The cops' probability of being on the beat (remember how we derived it here) depends on the robbers' payoffs. The cops' probability of being on the beat (I was pointing to the wrong place, but saying the right thing) is the probability that makes the robbers indifferent: if I'm the cops, my probability of being on the beat has to be the one that makes the robbers exactly indifferent between being at work and being at home. That's how we find my probability. So the numbers that go into it are -5, 5, and 0, the robbers' payoffs, and the robbers' payoffs here are exactly the same as in version 1. So in version 3 the cops' behavior isn't affected. And there's the weird side of version 3: if instead of asking how to deter crime, I ask how to make the cops do their job, increasing their punishment for not doing their job doesn't do the trick. All you've changed is the cops' punishment, and that's not going to affect their behavior. But in this case it will affect the robbers' behavior, so let's look at that. Just as above, let Q equal the probability the robbers are at work; this has to be the probability that makes the cops indifferent between their pure strategies, and it's going to change now, because one of the cops' payoffs is different. The expected utility to the cops of being on the beat depends on Q: it's 2Q plus 1 times (1 - Q), which is Q + 1, same as it's always been, and that's fine, because I didn't change either of those payoffs. The expected utility to the cops of being in the donut shop is now -10 times Q (I get -10 now, with probability Q, because that's my payoff when I'm eating donuts and the robbers are at work) plus 5 times (1 - Q). So that's -10Q + 5 - 5Q, which is -15Q + 5. Setting those equal, the expected utility to the cops of being on the beat equal to the expected utility of the donut shop, gives Q + 1 = -15Q + 5, so 16Q = 4, and Q = 1/4.
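Version 3 drops into the same sketch; reusing mix_2x2 from above (again my code, not the lecture's), the only edit is the cops' donut-shop payoff when the robbers are at work:

cops_v3 = [[2, 1], [-10, 5]]   # the donut shop is now very costly when crime happens
print(mix_2x2(cops_v3, robbers_v1))   # (1/2, 1/4)
# The cops still patrol half the time; the robbers' work probability,
# and with it the crime rate, falls from 4/11 to 1/4.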
And 1/4 is indeed less than 4/11, so this does decrease crime, and by decreasing crime overall it also decreases the probability of undetected crime. So if what you're concerned about in cops and robbers version 1 is either the total amount of crime or the amount of crime that's being gotten away with, the way to change that is to change the cops' incentives. If you look at it from the other point of view, if what you're really concerned with is the amount of shirking by the cops, the total time they spend in the donut shop (maybe you run the cops' HMO and you're worried about obesity among the cops, so you want them out of the donut shop) or the time they spend in the donut shop while crime is going on, then the way to attack either of those things is to affect the robbers' payoffs. The homework you have this week takes you through a different scenario, but one with the same flavor, and you'll see the same counterintuitive aspect of mixed strategy Nash equilibria. What you'll also see is that it happens only as long as the payoffs are in a range where there is just a mixed strategy Nash equilibrium. If I change the payoffs enough in this game, I change it into a game that doesn't look like cops and robbers at all, a game that actually has a pure strategy Nash equilibrium. So I don't want to oversell this point. I don't want to say that any change to a game with a single mixed strategy Nash equilibrium always has this counterintuitive effect of moving the opposite player, the person you're playing against rather than the person whose incentives changed. That's only true as long as we're still in a situation where both players have an incentive to be random. Change the payoffs enough that one of the players loses that incentive to randomize, so that you create a dominant strategy or a pure strategy Nash equilibrium, and you're back in the world that makes a lot more sense: change the robbers' payoffs and you change the robbers' behavior. It's only in the range of mixed strategies that we get this down-the-rabbit-hole effect. All right, going once, going twice on cops and robbers; that's it for cops and robbers. While I'm erasing, I'm going to segue into the question of representing sequential games in this form: representing sequential play as a matrix of payoffs rather than as a tree. While I'm erasing, though, let me talk about what I'm not going to do, but what is covered in chapter 6 and what you do need to be comfortable with. One thing you might think about is changing a game from sequential to simultaneous, or vice versa; that is, changing the actual situation, not just the representation of the game. That's going to affect the outcome. So here's a great source of practice problems for you: take the sequential games that we went through in the first part of the class (the ones I did in lecture, the ones you did on your homework, the one on the midterm study sheet) and assume that they're all simultaneous.
Set them up as simultaneous games and look for the Nash equilibria. They will be different games, and in most cases you will get different answers. It's possible you'll get the same answer in some of them, but that's because of the particular payoffs, not because of any regularity. And I've only said half of what you can do for extra problems. You can take the sequential games, ask what happens if they're played simultaneously, and look for the Nash equilibria; and for the material since the midterm, you can take all of these simultaneous games and look at them as sequential games. Suppose one player moves first. Here you have even more problems to do, because you can also suppose the other player moves first, and in any game where the payoffs aren't completely symmetric you've actually got two different sequential games to consider. They're all different. So, slowing down my erasing while I belabor this point: when you change a game from simultaneous to sequential, that is a big, meaningful change. You're saying something different about the strategic situation you're studying. In a sequential game, what you're saying is that the second player can condition (I'll get to your question in one second) her choice on the first player's move; if I'm a second mover, I get to see what you've done, and that changes strategy in an important way. When you say it's a simultaneous game, what you're saying is that I have to make my choice before I know what you're going to do. These are meaningful differences, so changing from simultaneous to sequential changes the strategic situation in a way where you wouldn't necessarily expect the answer to be the same. Yes, Kyra. Kyra points out that with nature nodes it can be trickier to change the representation. It's not quite true that you can't change games with nature nodes, and for the nature nodes we did this year, I think you always can, because they come when a nature node is the very last thing in a tree. Let's say (I'm not going to draw the full thing) we have player 1 moving, player 2 moving, and a nature node associated with just one of player 2's payoffs. Think about the first thing we do when we solve a sequential game: we replace the nature node with its expected value, and you can put that expected value in as a payoff in a game matrix. Your point is well taken that if you have nature nodes farther up the tree, that gets a whole lot harder to represent; I'm not even sure it makes logical sense, with those kinds of nature nodes, to think about things happening simultaneously. But for this particular kind, just use the expected payoff.
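In code, that folding step is one line; here is a sketch with made-up numbers, just to illustrate the idea:

def expected_value(lotteries):
    # lotteries: (probability, payoff) pairs at a terminal chance node
    return sum(prob * payoff for prob, payoff in lotteries)

# Hypothetical example: a last-move chance node that pays 10 with
# probability 0.3 and 0 otherwise collapses to a single cell entry.
cell_entry = expected_value([(0.3, 10), (0.7, 0)])   # 3.0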
One thing that I haven't quite emphasized, but that's worth emphasizing (I'm glad it's come up), is that once you have an expected value, the fact that it's an expected value rather than a regular payoff you get with certainty doesn't matter. Last week, when we talked about evaluating mixed strategy Nash equilibria to see if they were Pareto optimal or not, I said: just calculate the expected payoffs and then compare them to all the others, and once you start comparing them to all the others, it doesn't matter that they're expected values. Really, that's what we're trying to do with expected values: get something that we can compare to certain values. The same thing here: if we can do the expected value calculation, we can put the result into a game and compare it just like a regular payoff. That certainly expands our ability to use game theory; we can use those expected values all over the place. Okay, so changing from sequential to simultaneous play changes the game and changes the result. What I want to do now, though, is a smaller thing. I want to take a game that is sequential and represent it in normal form. Most of the time you're not going to do this; as I said at the beginning, I'm doing it because it's the best way I know to show you a point I want to make about Nash equilibrium versus rollback. So what game am I going to represent in normal form? I'm going back to your first and second problem sets. This is the superintendent, remember him, who wants his unqualified crony picked as principal over somebody who's actually qualified. There's a community leader who does not want this crony running the schools, and the community leader can conduct a publicity campaign, or not, for either type of choice here. I didn't write down which specific version of the problem this corresponds to, but I'm pretty sure it's the very first one you did, where the superintendent's values were 5 from the unqualified principal and 0 from the qualified one. Five is the value of getting my buddy a job, compared to 0 for some person with a PhD in education (what do I care about them?), and mine is minus 7 for the publicity campaign: I have to run for office, and I don't like people campaigning against me. The community leader's preferences were a 0 for this crony running the schools... that doesn't look right; can't be. Does anybody have that problem handy? Somebody is looking up that number for me. That's what it is; thank you, Tiffany, that makes sense: negative 8 for the unqualified one and 0 for the qualified one. This is the scenario where the community leader wants a qualified principal and the superintendent doesn't. And the last part is that the publicity campaign costs the community leader negative 4. So if we have the unqualified candidate and a publicity campaign, the superintendent gets 5 from the kind of principal he likes but minus 7 from the publicity campaign. Several of you have your homework out, so follow along and make sure I'm getting this right: the superintendent's payoff here is negative 2, and the community leader's payoff is negative 8 from the unqualified guy and negative 4 from the publicity campaign, which comes to negative 12.
If we have the unqualified guy here and no publicity campaign, that's a payoff of 5 for the superintendent (I get my guy, no campaign) and negative 8 for the community leader. Over here, if the superintendent picks a qualified principal and there's a publicity campaign, it's terrible for me as the superintendent: I didn't get to hire my friend, so he's mad at me, and there's a publicity campaign against me too; that's negative 7. From the community leader's point of view it's: okay, at least we have a decent principal, but we've got the cost of a publicity campaign to bear, so negative 4. And here, the qualified candidate with no publicity campaign gives the superintendent a 0 (no plus, but no minus) and a 0 for the community leader as well. So that's the version of the game, and, just strolling down memory lane, solving it through rollback the way we did before: if we get to this branch, the community leader would rather have negative 8 than negative 12, so she'll choose not; if we get to this branch, the community leader would rather have 0 than negative 4, so she'll choose not. The strategic equivalents here are (0, 0) and (5, -8). The superintendent says: well, I can get 5 from my unqualified crony or 0 from the qualified guy; I'll go with my crony. So the rollback equilibrium, written out in detail, is: the superintendent chooses unqualified; the community leader chooses not if unqualified, not if qualified. This version of the game had a grim moral to it, though not an unrealistic one. This is one where the community leader was not able to exercise any kind of meaningful threat over the superintendent, because the superintendent knew darn well that the community leader was not going to conduct the publicity campaign: the community leader in this scenario didn't get any extra benefit from the campaign, she just had to bear the effort of it. So there's no deterrence going on here, unlike in other versions of the game. We're going to stick with this simple version. All right, so what I want to do now is represent this scenario in normal form, and what's going to be weird about it is that we have to get the strategies right. In a lot of ways sequential games are easier than simultaneous games (they don't have multiple equilibria, they don't have mixed strategies, at least not the ones we're looking at), but they do have these complicated strategies, whereas in simultaneous games strategies have been really simple. Remember the problems we did at the beginning of the quarter that involved strategy counting, where I asked you to list all of a player's strategies? Here's something where we actually make use of that. If we list all the strategies (not equilibrium strategies, all the strategies), the superintendent has two, and the community leader has four: PC if unqualified, PC if qualified; PC if unqualified, not if qualified; not if unqualified, PC if qualified; and not if unqualified, not if qualified. Remember that the strategies in a sequential game have to tell the community leader what to do not just at the node that we expect to happen, but at every decision node; the equilibrium strategies have to tell the community leader what to do on the equilibrium path and off the equilibrium path. So we have four total strategies, and if we want to represent this game as a matrix, we have to have all four strategies in there. Here's how we're going to do it: the superintendent, let's make him the row player. His strategies are easy: I'm going to put qualified on the top, just to make it match what I drew over here, qualified here and unqualified there. Two rows, two strategies. But I'm going to need four columns, one column for each of the community leader's four different strategies.
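Since a complete contingent plan is just one action per decision node, the four strategies can be enumerated mechanically; a small sketch of my own, not the problem set's:

from itertools import product

actions = ("PC", "Not")                      # campaign or don't
nodes = ("if qualified", "if unqualified")   # where the leader might move
strategies = list(product(actions, repeat=len(nodes)))
# 2 actions ** 2 nodes = 4 plans, each read as
# (action if qualified, action if unqualified):
# (PC, PC), (PC, Not), (Not, PC), (Not, Not)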
So here are the community leader's strategies, one per column, and I'm actually going to write them out; I've trained you all too well to mess this up. Here we have PC if qualified, PC if unqualified. Here we have PC if qualified, not if unqualified. Here we have not if qualified, PC if unqualified. And here we have not if qualified, not if unqualified. Now, I realized as I was writing this that I switched the order: over there I was using the unqualified branch as the first part of the strategy, and here I'm listing the qualified part first. Do you see that what I'm doing doesn't change what I'm representing? These are the same four strategies; I'm just listing the qualified part first because I've got my qualified row on top, and that's my convention. It's an example of the kind of thing where you need to be organized; you need to do it one way, and it's not so good to switch mid-representation, which I guess I'm sort of doing here. How many people see what I'm doing and think I'm belaboring a point and should get on with it? Good. How many people don't see what I'm doing and wish I would rewrite this to match the other one? You guys are okay? Good; glad to hear it. The point is that these are four different strategies, and now we need to put in the payoffs that go with each of them. If the superintendent plays this strategy and the community leader plays this one, the superintendent picks a qualified principal and the community leader's strategy tells her to run a publicity campaign no matter what. What are the payoffs going to be? That's right: negative 7 and negative 4. The superintendent is picking a qualified candidate, and the superintendent's choice tells us which part of the community leader's strategy actually matters: because we're in the qualified row, the community leader does the part of her strategy that corresponds to the superintendent's choice of a qualified principal. Down here the community leader's strategy is the same, but the superintendent makes a different choice, though in this case it won't matter, because this is one of the strategies where both parts are the same. What are the payoffs in this case? Negative 2 and negative 12, right: this is the unqualified guy with a publicity campaign, with this part of the strategy operating. Here the superintendent chooses a qualified principal and the community leader is playing this second strategy; what are the payoffs? They're the same as before, negative 7 and negative 4, because this part of the strategy, what the community leader does when the superintendent chooses a qualified principal, is the same in each of these two columns. What about down here? Five and negative 8; very good. Down here I'm choosing the unqualified one, and the community leader's strategy now says: we're in the unqualified case, so no publicity campaign. There's my negative 8.
Marching along: here we're choosing a qualified principal, and the community leader's strategy says no campaign if qualified, yes if unqualified. What are the payoffs? The superintendent isn't trying to hire any cronies; the community leader sees that and says, when the superintendent walks the straight and narrow, I don't need to campaign against him. It's 0, 0. What about down here, where I choose the unqualified one and the community leader is playing this strategy? Negative 2 and negative 12, right. I think I heard more than one thing, but I did hear the right answer in there. These are the same payoffs as in column 1, because the superintendent is making the same choice, and even though the community leader's strategy in column 3 has a different action for a qualified principal, it has the same action for an unqualified one, so the payoffs have to be the same. Here we have a qualified principal and no publicity campaign either way; what are the payoffs? 0 and 0, good. And what about here? Five and negative 8; very good. Now I'm seeing that you guys are getting not only how to represent the strategies correctly, but also how to figure out the payoffs. The fact that this game we're representing in normal form has a sequence still matters here: it matters in the number of strategies we have to consider for the second mover, and it matters for figuring out what happens in each of these cells. I presume you're seeing, as we go along, why we don't normally put sequential games in normal form: it's harder to set up this way, and the tree is much, much easier. So outside of this one example, don't do it this way. But in this one example, my goal is to show you the difference between Nash equilibria and rollback, because there is a difference.
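Here is that matrix-building step as a sketch in code (my translation of the board, with the payoff numbers above); the key line is the one where the superintendent's choice selects which half of the leader's plan actually binds:

S_VALUE = {"qualified": 0, "unqualified": 5}    # superintendent's value of the hire
L_VALUE = {"qualified": 0, "unqualified": -8}   # community leader's value of the hire
PC_COST_S, PC_COST_L = -7, -4                   # a publicity campaign hurts both

def cell(s_choice, plan):
    # plan = (action if qualified, action if unqualified)
    action = plan[0] if s_choice == "qualified" else plan[1]
    ran_campaign = (action == "PC")
    return (S_VALUE[s_choice] + (PC_COST_S if ran_campaign else 0),
            L_VALUE[s_choice] + (PC_COST_L if ran_campaign else 0))

matrix = {(s, plan): cell(s, plan) for s in S_VALUE for plan in strategies}
# e.g. matrix[("unqualified", ("PC", "PC"))] == (-2, -12)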
So let's look for Nash equilibria in this game. Is this cell a Nash equilibrium? No, right, because if we're in this cell and I'm the superintendent, I'd rather be down here; no Nash equilibrium here. Is this cell a Nash equilibrium? Well, no: here, if I'm the community leader, I can get a higher payoff over there. This one similarly (that's a really bad cell; let me swap out this dying pen): that cell is not a Nash equilibrium because, as the superintendent, I can get a higher payoff here. What about this cell? A Nash equilibrium, right. So here's one thing to think about: we know that in these games we can have more than one Nash equilibrium, so we're not done here. People are raising their eyebrows that this corresponds to a Nash equilibrium, but let's just look at it. If we end up in this cell and I'm the row player, do I have regrets? No: I'd rather have the 5 than the negative 7. If I'm the column player, do I have regrets? Not really: I can do worse, but I can't do better. Was that what you were going to ask about? When I'm comparing for the column player, I'm only looking at the cells in this row, because the column player can't control the rows. Now moving up to this one: is this a Nash equilibrium? Well, let's see: 0 is better than negative 2, so it's looking good for the row player, and for the column player it's looking pretty good too. I could do worse, I could do the same, but I can't do better. So this is another Nash equilibrium. This one we know isn't, by the same logic that told us the 0 is better than the negative 2. Let's keep going. This one? No, right: the superintendent would rather have the 5. But what about this one? Yes: that's the one we found over here. So right here we've got one equilibrium (and throw away the booby-trapped pen), this one where the superintendent chooses unqualified and the community leader chooses not, not. This is one of our Nash equilibria, and I'm going to call it number one, because this one is the rollback equilibrium. That's what we predicted out of this game with the rollback analysis: the superintendent chooses unqualified, and the community leader's best strategy, her equilibrium strategy, is not, not. But over here we have these two weird equilibria, and they are weird. Let's think about what they mean in words, and you will see just how weird they are. Take this one first; it's the somewhat less weird one. In this equilibrium the community leader says: superintendent, if you pick a qualified principal, you're off the hook, I'm not going to be in your face; but if you pick your crony, I'm going after you. It's an equilibrium because the community leader's bluff is never called. The reason this didn't look like the right choice in the sequential game is that this part of the strategy (I'm going to label it) is a non-credible threat. It's a threat the community leader might say she's going to make, but if we got to the point where the superintendent actually chose the unqualified principal, the community leader would not want to stick to her strategy. It's a Nash equilibrium because, in equilibrium, if the superintendent believes the non-credible threat, the community leader is not going to have regrets. So here is a limitation of Nash equilibrium. I think this is the first bad thing I've said about Nash equilibrium all quarter, and it is the biggest limitation of the idea: you can get Nash equilibria that rely on threats that would never be exercised if we got to that point. If the superintendent believes that the community leader is going to play that strategy, if the community leader were a computer program that isn't strategic and just executes whatever strategy she's programmed to do, then the superintendent will indeed hire the qualified principal; the superintendent will respond to the threat. But a lot of superintendents, a lot of people in politics, a lot of people in all sorts of scenarios in life, will say: she's not going to do that. She might say she's going to run the publicity campaign against me, but it would hurt her, and she'd get nothing from it. So: the reason why we used rollback in the sequential games, and the reason why in all sequential games you use rollback instead of the broad idea of Nash equilibria, is that rollback does not rely on non-credible threats. I'll write that over here: in rollback, your equilibrium strategy cannot contain non-credible threats, even off the equilibrium path. The reason you get Nash equilibria with these non-credible threats in them is that the threats live off the equilibrium path. In a lot of cases, we think non-credible threats don't affect strategic behavior. I might tell my kids that I'm going to ground them until they're 40, but they don't really believe me, no matter how horrible a thing they do, because they know I don't really want them living with me until they're 40. There are lots of other punishments that most parents the world over threaten their children with in the heat of the moment, and, strategic creatures that they are, the kids typically see right past them.
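The cell-by-cell regret check we just walked through is easy to mechanize; continuing the sketch above:

def pure_nash(matrix, rows, cols):
    found = []
    for r in rows:
        for c in cols:
            u_r, u_c = matrix[(r, c)]
            # Row player: is there a better row in this column?
            if any(matrix[(r2, c)][0] > u_r for r2 in rows):
                continue
            # Column player: is there a better column in this row?
            if any(matrix[(r, c2)][1] > u_c for c2 in cols):
                continue
            found.append((r, c))
    return found

print(pure_nash(matrix, list(S_VALUE), strategies))
# [('qualified', ('Not', 'PC')),
#  ('unqualified', ('PC', 'Not')),
#  ('unqualified', ('Not', 'Not'))]
# Three pure equilibria: the last is the rollback one; the other two are
# propped up by parts of the leader's plan she would never carry out.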
This other equilibrium also relies on a non-credible threat, and it's even weirder. Usually some people will say, well, maybe the first one might really happen; maybe your kids don't believe you when you bluff, but sometimes people can get away with non-credible threats. I think what I'm going to do is say the last thing I have to say on this (I'm actually going to finish it today, and then we'll be done with this and start repeated games). This is the weirdest equilibrium of all. This is an equilibrium where what the community leader says is: if you hire the kind of principal I want, I'm going to run a publicity campaign against you; but if you hire the kind I don't want, I'm not going to touch you. Why would the community leader ever do that? It seems like a very strange thing for the community leader to threaten to punish the kind of behavior she wants and reward the kind of behavior she doesn't want. But if she's playing that strategy, then the superintendent will go ahead and make the choice she doesn't want, and she'll never be called on to carry out the punishment; she'll never be asked to campaign against the principal that she actually likes. So this is not only a non-credible threat; it leads to a bad outcome. If you consider this kind of equilibrium, the argument for using rollback in sequential games, instead of the full set of Nash equilibria, is really strong. Last thing: just some language that I will use; it won't take long, so give me a second here, guys. The more technical name for rollback is that it's a subgame perfect Nash equilibrium, and I just want to emphasize that the rollback equilibrium, the subgame perfect equilibrium, is a subset of the Nash equilibria. So what we find when we do rollback is always something that is a Nash equilibrium, and a reasonable kind of Nash equilibrium. The kind of Nash equilibria that rollback won't show us are the kind that are supported by non-credible threats, and most of the time we think those would not be good predictions anyway. So there we are; we took care of that, and we're going to start repeated interaction on Thursday.
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_5_UCLA.txt
Hey. So today we're going to ratchet the level of abstraction up a little bit. I've said a couple of times, as we've been going through the fundraising game, that when political scientists and other social scientists use game theory in their work, we generally don't put specific numbers in our payoffs. Those specific numbers can be helpful because they're very concrete, very easy to look at and interpret, but there's always this nagging feeling in the back of our heads, a reminder that we just made those numbers up. And as I've said, we make up the numbers in a way that reflects our ideas about what people want, our ideas about the players' preferences: the order of the payoffs should match the ordering from most preferred outcome to least preferred outcome, and they also represent our assumptions about the players' intensity of preference. If two different outcomes have payoffs that are close together numerically, then we think the player doesn't care a lot about the difference between those outcomes: a difference of one unit of utility seems small in comparison to a difference of ten units of utility. This reflects something that we do all the time in ordinary life. We think about whether we care about something a lot or a little. When we're negotiating with other people in our lives, that's something we're frequently trying to communicate to each other. If you're trying to work something out with somebody you're in a strategic situation with, one of the things you might be trying to communicate about yourself is what issues you're willing to compromise on, what things are really important to you, what your relative intensity of preference is. And at the same time, you're probably trying to figure out those same things about the person you're bargaining with. Do they care a lot or a little about one facet of an issue? Is there some way each of you could, for example, get the outcome you care most about and give up something you care least about? We represent those kinds of ideas in game theory with our payoff numbers, but again, it seems a little bit hokey to make the kind of assumption I've been making all along in the fundraising game, that the difference between winning with fundraising and winning without fundraising is these two units, when we don't even know what the units would really be. So what I'm going to do now is something much closer to the way game theory is used both by academics and, I think, by people who use decision science in their day-to-day jobs: people who do strategy in firms, in the military, in campaigns; certain medical settings are another situation where you might see these kinds of variable payoffs being used. We're still on the fundraising game; this is really the last topic I'm going to do with it, and by the time we get to the next topic we'll be using a different example. But because the variables introduce an extra level of abstraction, what they're basically allowing us to do is talk about a whole bunch of games at the same time. We're expanding our analysis from one particular example that we could imagine to a whole bunch of possible examples. And because that's a pretty big leap, I'm going to keep the specific scenario that we're working with the same. So it's still the same story: the incumbent raises funds or not; the challenger raises funds or not.
Same assumptions about who the players are and what their sets of moves are, and the same assumptions about the outcomes: I'm still going to assume that the incumbent wins here, here, and here, and the only way the challenger can win is if the challenger raises funds and the incumbent doesn't. What I'm going to change is the payoffs, and I'm going to use variables to represent the different aspects of the payoffs. So how am I going to do that? I try to use variable names that will remind me of what they stand for. V sub I is the value to the incumbent of winning the election. There are other things that affect the incumbent's payoff out there too, but regardless of what those other things are (did he have to campaign or not, did his party win the election, what's going on in his family life, whatever), if he wins the election, we add V_I to his payoff. So we add this variable to the incumbent's payoff whenever the incumbent wins, and similarly we'll let V_C represent the value to the challenger of winning. I'm going to use this kind of system a lot; you'll find it used throughout social science. The V stands for the value of winning (I'm trying to make it something you can remember), and I'm using the subscripts here to denote that this is the same sort of thing applied to different players. We often do that: we often use subscripts or superscripts, little letters or numbers up here or down there, to denote differences among related things. If we thought these were going to be the same, or we didn't care about the differences between them, we could eliminate the subscripts. But you might be thinking (I think this was one of the points that you raised last week about the game) that maybe the incumbent cares more about winning than the challenger would. So by having these subscripts, what I'm doing is reminding us that these values might be different: just because the incumbent places a very high value on winning the election, that doesn't have to be true for the challenger, and vice versa. So what else do I need? Well, I need to think about the costs of raising funds, and I'm going to represent those the same way: C sub I is the cost to the incumbent of raising funds, and C sub C here is the cost to the challenger of raising funds. What I'm doing here is a more general form of the table that I put in this same spot on the board, I think about a week and a half ago, where the payoffs involved the 10 and the 3 and the 1. Those were my numbers; these are my new assumptions about preferences, though right now they're less assumptions than they are definitions. I'm saying that the payoffs in this game are composed of two variables: one that represents how much they care about the election, and another that represents how much they dislike fundraising. The rest of the assumptions I'm going to make: I'm going to assume that all of these variables are positive. I don't have to do it that way (you may be seeing another way I could do it, which I'll talk about in a minute), but in general it's easier to assume that all of our variables are positive and then use negative signs when we put them into the payoffs. It's just easier to manage if we keep all our variables in the positive range.
Sometimes the other convention might be reasonable, in which case we wouldn't write down what I'm about to write down; but in this case we're going to say that the value of winning the election is greater than zero: V_r > 0, where lowercase r is either I or C, either the incumbent or the challenger. I'm deliberately introducing some notation that you're going to see in the book, see in your homework problems, and use in your homework problems. What I'm doing in the subscript: I've got this capital V that denotes a similar thing for the incumbent and the challenger, and what I'm saying is that for both of these players, when the lowercase r is either I or C, the value of winning is a positive thing. These guys want to be in office. So this is an assumption, and I just want to draw your attention to this placeholder variable: the little lowercase r means that with this one inequality we're making a statement about a set of values that the placeholder can take. We'll do it down here too. Okay, so that part I think you'd probably do on your own, and probably without the level of self-consciousness that I'm using here. The other thing that I'm going to assume (and I'd urge you to do it this way too) is that the costs are represented by positive variables that we then subtract from the payoffs. The alternative would be to say that raising funds is a negative number, in which case, for example, over here (I'll get ahead of myself and write it the other way, and then write it the way that makes more sense), the payoff to the incumbent would be V_I plus C_I, where C_I is negative: if we think of C_I as having a negative sign built into it, that's how we would denote it. I'm saying: don't do it that way. I'm saying it's easier to think of the cost as a positive number and to say that the payoff to the incumbent here is the value of winning minus the disutility of raising funds, V_I - C_I. The reason why it's easier is that it more naturally matches the way we talk about costs in ordinary language. When we say that something has high costs, what we mean is that those costs are high in absolute value, that what they take away from our utility is a large number. It's easier to capture that by keeping the variable itself in the positive range and using the arithmetic, the minus sign, to denote whether it's a good thing or a bad thing. So don't write costs as negative variables; think about your costs as being represented by positive variables, and subtract them when you combine them with the other aspects of the problem. So we're going to think of these costs as positive for both the incumbent and the challenger: C_r > 0. One other point worth making right here: whenever we talk about costs, we're talking much more broadly than about monetary costs, or costs that would show up on a budget sheet or something like that. Maybe that's part of what we're thinking about, because fundraising for the incumbent and the challenger is truly costly in monetary terms: there are foregone wages, there's money to put on events. So monetary costs are part of this, but it's meant to include anything that diminishes their payoffs. Part of why you don't like fundraising is that it takes some of your money, but also that it takes some of your time, that it's a drag, that you just don't like it. All of those negative aspects of fundraising are meant to be captured here, and that's going to be true throughout PS 30 and throughout applications of game theory: costs are meant to be broadly construed, not just as monetary costs.
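If you like seeing assumptions written as code, here is one way to record this family of payoff parameters; the sketch and the names are mine, not the book's:

from dataclasses import dataclass

@dataclass
class Params:
    V_I: float   # value to the incumbent of winning
    V_C: float   # value to the challenger of winning
    C_I: float   # cost to the incumbent of raising funds
    C_C: float   # cost to the challenger of raising funds

    def __post_init__(self):
        # The maintained assumptions: V_r > 0 and C_r > 0 for r in {I, C}.
        # Costs stay positive; they enter the payoffs with a minus sign.
        assert min(self.V_I, self.V_C, self.C_I, self.C_C) > 0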
So now let's fill out the payoffs in the game using these variables instead of the numbers. Going down this path in the tree, the path where they both raise funds, what happens, you'll recall, is that the incumbent wins. So what is the challenger's payoff in this case? It's got C sub C in it. Is it V_C minus C_C? Let me put up right here a couple of the other thoughts that were expressed about the challenger's payoff: could it be V_C - C_C? Could it be V_I - C_C? Think about why they're not correct. Can somebody tell me why this one is not correct? Yeah: V_C is what she gets when she wins; sorry, challenger, no V_C for you this time. What about this one? Does the challenger care about the incumbent's utility? No. The challenger does not win, and it's just like Monopoly: do not pass go, do not collect 200 dollars. You don't win, so you don't get your V_C. And this one is even worse: the challenger is never going to get V sub I. V_I only goes to the incumbent. If you look over here, each aspect of the payoffs belongs to only one player. It's true that the incumbent wins here and gets some value from that, fundraising aside, but the challenger gets no increment to her utility from the incumbent winning. Yes: how do we know the challenger doesn't win, is your question. That's a good question; that part is carried over from the game that we've been working with all along. So let me just use blue, as I did before, and remind you of the outcomes. The outcome assumptions are exactly what they've been all along. Here, if they both raise funds, the incumbent wins: incumbency advantage. Here, the incumbent raises funds and the challenger doesn't, so the incumbent wins even more comfortably: incumbency advantage plus a challenger who can't really campaign. Over here, where the incumbent doesn't raise funds, is the only way the challenger can win. And over here we have the low-profile election: nobody really tries to win, and it's all about incumbency advantage. Okay, so still looking at this set of payoffs over here: what is the value to the challenger of the incumbent winning? I think I heard it over here: zero, right. If I'm the challenger, what I care about in this situation is whether I win or not; the incumbent winning doesn't affect my utility. I'm emphasizing that because of this question, which I'll put up here: the baseline. You need to establish a baseline whenever you're translating a scenario into a game. You needed to do it on your homework last week, and you'll need to do it again this week and all the weeks of this quarter. The baseline here, not winning the election yourself and not fundraising, is zero. Why am I picking zero? Because it's easy. If I wanted to pick another number, say X, the value of the rest of their lives, I could: this payoff would be X + V_I - C_I, that one X - C_C, and I could add X's to all of the payoffs. But in solving the game and interpreting the game, I'm only looking at differences between payoffs, so the baseline from which all the other payoffs are defined is just going to subtract out; it cancels across scenarios.
So let's fill in the rest of the payoffs. What's the incumbent's payoff in this scenario? V_I - C_I; very good. The incumbent's utility is not affected by whether the challenger raises funds (I'm just going to clean up some board while I talk), and just as when we set this game up with the numbers, the incumbent's payoff on these two branches is the same. From the incumbent's point of view, whether the challenger raises funds or not doesn't matter when she raises funds herself, because her fundraising is going to blow the challenger out of the water no matter what. What's the challenger's payoff over here? Zero. Coming down here: what's the incumbent's payoff? Zero. The challenger's payoff? V_C - C_C. You guys are going to town now. What's the incumbent's payoff over here (I'll get to your question in one second)? V_I, by itself: no fundraising. And what's the challenger's payoff? Zero. Yes: why wouldn't the incumbent's payoff have negative C_I in it, is the question. I'm going to do the same thing I did before and write some other possibilities in green. Why not negative C_I here? Because he's not raising funds, because of this choice over here. So you'll notice that at both of the nodes on this side of the tree, the incumbent does not pay the cost of fundraising, and that's precisely because he decided not to do that. And no V_I here, because the incumbent doesn't win. All right, so one thing I want to say about this game; and I'm pausing to see if anybody else wants to ask more of these why-not questions, because those why-not questions are extremely helpful. In general, if you find yourself getting stuck setting up variable payoffs, one of the things to ask yourself is: what could the payoff be, and why would this be wrong, why would that be wrong? It's a frame you can use effectively in a study group: ask each other why not this, why not that, and go to town on it. While you guys are thinking about that, let me draw your attention again to what we have done here. What we now have on the board, with these variable payoffs, is not just one game: it's a whole family of games, an infinite number of games if you want to think of it that way, because even though we've said these V and C variables all have to be positive, there are still infinitely many values they could take. Strictly speaking, one individual game corresponds to each set of possible values for the payoffs: once you change the payoffs, you change the game. So what we've got up here is a whole family of games whose payoffs are represented by these variables. I think I've also used the word parameters; in this context that's almost synonymous with the variables. The parameters in this game correspond to different specific payoffs, different specific strategic situations. The cool thing is that I'm going to solve this game right now, and in doing so I'm going to solve that whole family of games. We're going to get the equilibrium not only for one set of values, which is what we've done over the last two weeks, but for every member of the family of games represented here.
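As a sketch (mine, continuing the Params class above), the whole family of games fits in a few lines: one payoff pair per leaf, built exactly the way we just reasoned, starting from the baseline of zero, adding V on a win, and subtracting C when you fundraise:

def leaf_payoffs(p, inc_raises, ch_raises):
    # Outcome assumption: the challenger wins only at (not raise, raise).
    inc_wins = inc_raises or not ch_raises
    u_inc = (p.V_I if inc_wins else 0) - (p.C_I if inc_raises else 0)
    u_ch = (p.V_C if not inc_wins else 0) - (p.C_C if ch_raises else 0)
    return u_inc, u_ch

# e.g. leaf_payoffs(Params(10, 10, 3, 3), False, True) == (0, 7)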
All right, so that is what we're going to do. I'm going to get myself a red marker (that's been my solving color), and one thing that's going to help me in solving a game with variable payoffs, and I recommend it to you (you'll see Dixit and Skeath do it as well), is to number my nodes: this is just node 1, node 2, and node 3. It's going to help me walk through the logic. So, in solving the family of games represented here, I use the same rollback procedure. I start at the bottom-most row of nodes, I figure out what's going to happen at each of the nodes, I replace that node with its strategic equivalent, and then I go up to the next level, until I've figured out each player's optimal choice at each node; from that I'll be able to find the equilibrium and the equilibrium path. So if we get to node 2: node 2 is a challenger node, right, so in solving it I'm looking at the challenger's payoffs. If we get to node 2, what is the challenger going to do? Not raise funds, same as before. The reason why we know this is that, by assumption, we've said that C_C is a positive number: those costs are real costs, we really don't like fundraising. So given a choice between a baseline of 0 and a negative number, we'd rather have the 0. Zero doesn't sound like much, but at least it's not negative. So far this looks pretty familiar. Copying the payoff associated with the branch that we expect to be chosen up here: what's this called, guys? It's the strategic equivalent. So I'm replacing this node with its strategic equivalent. What about over here: what do you think the challenger is going to do at node 3? It depends on whether V_C is greater than C_C, right; that was exactly your point. What happens at this node depends on the specific values these variables take. This is different from what happened before, and to deal with it we need to think about two possible cases. This is something you'll have to do whenever you solve the more interesting games with variables: the solution is going to involve dividing those variables up into cases. So, proceeding from that observation that what happens here depends: there are two possibilities at node 3 (this is why I want to have the node numbers in here). There are two cases. Case 1 is the case where V_C is greater than C_C, and case 2 is where V_C is less than C_C. For the present I'm going to ignore the case where they're exactly equal; you might ask about it, but we're not going to be able to say much in the case where they're exactly equal. Once you've defined the cases, you just go forward and analyze each case separately. So suppose we're in case 1; suppose V_C is greater than C_C. Then what happens here? Raise funds: we're in the case that we were in with the game with numbers. At node 3, the challenger should raise funds.
strategic equivalent here is 0, V_C minus C_C. What happens at node 1? It depends again. What I'm doing here is indenting: we're in case 1, supposing that V_C is greater than C_C, and now at node 1 there are sub-cases, parallel to before, which I'll denote with letters. Sub-case 1a: it's case 1, so V_C is greater than C_C, and in addition V_I is greater than C_I. But we also have to think about sub-case 1b, where the opposite is true for the incumbent.

If we're in sub-case 1a, what happens at node 1? Does it still depend? No. This is the case we've been in all along: V_I is greater than C_I, so V_I minus C_I is a positive number, and as the incumbent I like the positive number better than zero. So in sub-case 1a the incumbent raises funds at node 1 (I'm abbreviating on the board; I hope the abbreviations are clear). So for one set of cases in this broad family of games, we know what the equilibrium is. For sub-case 1a the rollback equilibrium (and it really is still rollback, even though we're using variables) is: the incumbent raises funds, and the challenger does not raise funds if the incumbent does, and does raise funds if the incumbent does not. We're done with sub-case 1a, but we've got loose ends: the other sub-case 1b, and all of case 2. So we're not done with the whole family of games.

Yes? Yes, that's possible. For this sequential-game part of the course I'm just going to leave you with: we can't predict what happens there. When we get past the midterm and start talking about simultaneous games, the way we'll think about it is that the challenger could make either choice and wouldn't have regrets from either. I'm speaking here of the case where these two parameters are exactly equal, and that's going to limit our ability to predict. So in this part of the class, and on your homework too, you can play by the same rules I'm playing by: you can ignore what we call these knife-edge cases. And I think he was hesitating to ask because it feels like, gosh, there are so many values these things can take, why worry about the single one where they're the same? Well, in that single case where V_C is exactly equal to C_C, we don't know what's going to happen. So it's fine with me (more than fine, it's what you should do) not to try to make a prediction about that case. What would be wrong would be to say that in this case maybe something else makes the decision. That may be true in reality, but it's not true in the game: in the game we just don't know what the challenger will do, and we can't get to a unique equilibrium through this process. So throughout this part, when we define cases, there's going to be that feeling of incompleteness: there's one case we're not considering, and it's the one we can't do anything about right now. But we can do things about sub-case 1b and case 2.
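By the way, if you wanted the comparison that splits case 1 into its sub-cases written as code, a one-line sketch (names my own) would be:

    # Case 1 (v_c > c_c): the challenger will fundraise at node 3, so at
    # node 1 the incumbent weighs winning-with-costs against losing for free.
    def node1_choice_case1(v_i, c_i):
        return "raise" if v_i - c_i > 0 else "not"   # sub-case 1a vs. 1b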
Okay, I'm going to keep my colors nice here. I've been using the blue color to remind you of the outcomes, but I think by now you remember what happens there, so I'm erasing the blue, because what I want to do now is think about sub-case 1b and remind you of the relevant aspects of the case, since we went through one sub-case fully and are picking this one up partway through. Sub-case 1b: we're still in case 1, so it's the case where the challenger cares more about winning the election than she dislikes fundraising, and the incumbent is the opposite. In sub-case 1b the incumbent is burnt out: office is just not worth the hassle anymore, the happiness I get from being in office is not worth the grind of fundraising, my V_I is now less than C_I. Because the case-1 condition still holds, the challenger makes the same choices as in 1a: she has a higher value of winning than distaste for campaigning, so her choices are unchanged. But now, when the incumbent compares V_I minus C_I to zero, zero is looking pretty good. So in sub-case 1b, because of this inequality, this set of parameter values, what the incumbent prefers is to not raise funds. You guys with me on that? Negative number versus zero: we'd rather have the zero. The rollback equilibrium for sub-case 1b is: the incumbent does not raise funds, and the challenger does not raise funds if the incumbent does, and raises funds if not. So you may be noticing some interesting similarities and differences between the equilibria associated with different subsets of this family of games.

Yes? William's question is: when we write the equilibrium, does it matter in what order we write the two actions that compose the challenger's strategy? This is a nice point at which to recap the nature of that strategy. The challenger's equilibrium strategy has to tell her what to do in both possible situations: even though we know only one of them will happen within the game, the strategy has to tell her what to do on and off the equilibrium path. What's really going on is that up above we happened to write first the challenger's action corresponding to the incumbent's equilibrium choice, and here we wrote that action second. Here's how I would do it: always write a second (or third) mover's strategy in the order the nodes occur in the tree. Up above that happened to give us the on-the-path part of the strategy first, but that was a coincidence, and it would be confusing to keep changing the order. It's less of a problem here because I'm not abbreviating the challenger's strategy as tersely as I might: I'm actually writing that the challenger chooses "not" if the incumbent raises funds, so we know which node it corresponds to. If I were using the most abbreviated notation, the way Dixit and Skeath do, changing the order would get genuinely confusing. Good question.

All right: we've got the rollback equilibrium for sub-case 1a and sub-case 1b. Case 1 had just these two sub-cases, so give case 1 a little check; we still have case 2.
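One way to make that ordering worry bite less, if you're sketching this in code for yourself, is to key the challenger's strategy by node rather than by position. This is my own convention, not the Dixit and Skeath abbreviation:

    challenger_strategy = {
        "node 2 (incumbent raised funds)":  "not",    # on the path in sub-case 1a
        "node 3 (incumbent did not raise)": "raise",  # on the path in sub-case 1b
    }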
Case 2 was the one where we had to think about what would happen at node 3. Since we've spent some time analyzing case 1, we're diving back down into the game; it's never wrong, when you come back to another case, to go back through the whole thing. So, in case 2, at node 2, what's the challenger going to do? Not raise funds, right: what happens at that node is completely determined by the fact that minus C_C is a worse payoff than 0, so in all cases the challenger's choice there is the same. Node 3 is where we're now in the case where the challenger's value of being in office is less than her cost of campaigning. So in case 2, what does the challenger do at node 3? Not raise funds. I'll write that the challenger chooses "not" at node 3, and the strategic equivalent is now different: the one I wrote before was right for case 1 (sub-cases 1a and 1b) but wrong for case 2, because in case 2, if we get to this path, the strategic equivalent is V_I, 0. The strategic equivalent of node 2 is the same in both cases, so I'll make that thread green. Doesn't that look pretty? Maybe it makes it hard to read; sorry about that.

In case 2, what's the incumbent going to do? Not raise funds, exactly. In case 2, I'm the incumbent and I can win office without raising funds, because the challenger doesn't care; she's not here to challenge me. I can stay in Washington, work on policy, hang out with the glamorous Beltway bandits, whatever I want; I don't need to raise funds, and that's what I'd rather do. So in case 2 the incumbent chooses "not," and the rollback equilibrium is: incumbent not; challenger not if the incumbent raises funds, not if not.

So now, with the exception of the knife-edge cases (the ones where a player is exactly indifferent, the rare cases where V_I equals C_I or V_C equals C_C), we've got a prediction for what's going to happen. Well, maybe not quite: I should say we've got an equilibrium, the rollback equilibrium, for each of the three possible cases that apply in this game. We went from an infinite number of games to three possible things that can happen. That's pretty cool: collapsing a lot of possibilities into three cases. And what I want to emphasize is that the decisions about how to divide things into cases were revealed to us by the rollback process. You might wonder, how do I know what the cases are? Rollback tells you: you're trying to figure out what happens at a node, and whenever the answer is "it depends" (which was the answer from the middle back there), you think about the two cases. When is it this way, when is it that way? These games with variable payoffs are another example of how we solve complicated problems by dividing them into simple components. And this is a point where I'll remind you of something I said early on: what we do at each step is very easy, and you shouldn't feel nervous about that. It really is easy to say that a negative number is less than zero; each individual step can be trivially easy. What can make game theory hard to learn, especially at the stage you're at right now, is that there are just a lot of steps.
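To pull the whole case analysis together, here's a minimal sketch of the rollback for this family of games in code. The function name, node labels, and strategy format are mine, not the course's, and I'm being conservative about knife-edge values:

    def rollback(v_i, c_i, v_c, c_c):
        """Solve one member of the family for specific positive parameters."""
        if v_c == c_c or v_i == c_i:
            raise ValueError("knife-edge case: no unique rollback prediction")
        # Node 2: the incumbent has fundraised, so the challenger loses either
        # way and compares -c_c (raise) with 0 (not); with c_c > 0 she sits out.
        node2 = "not"
        if v_c > c_c:                    # case 1: the challenger would enter
            node3 = "raise"
            # Node 1: v_i - c_i (fundraise and win) vs. 0 (sit out and lose).
            node1 = "raise" if v_i > c_i else "not"   # sub-case 1a vs. 1b
        else:                            # case 2: the challenger never enters
            node3 = "not"
            node1 = "not"                # v_i beats v_i - c_i when c_i > 0
        return {"incumbent": node1, "challenger (node 2, node 3)": (node2, node3)}

    # One illustrative parameter set per case (numbers are my own examples):
    print(rollback(10, 2, 10, 2))    # sub-case 1a: incumbent fundraises and wins
    print(rollback(10, 12, 10, 2))   # sub-case 1b: burnt-out incumbent
    print(rollback(10, 2, 2, 10))    # case 2: low-profile election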
The rollback process is one that helps us organize a lot of steps. No individual step is hard; everybody here can do the individual steps. It's just a matter of remembering how to put them together and not being freaked out by how many there are. What sometimes happens, especially when people get to the second division (they've divided things into cases, and oh my god, now there are sub-cases too; where will it end?): don't panic. It's completely normal, and it's manageable, even if another round of sub-cases leaves you with six possible alternatives. I'm not going to let you get to the point where, like The Sorcerer's Apprentice, you've got sub-cases coming out of your ears. You can do a lot with a small number of sub-cases.

So what I want to do now, back in black here, is think about what the predictions actually are for these sub-cases. When I'm looking for the prediction, one thing I'm looking for is the equilibrium path: what do I really think is going to happen? Sub-case 1a is familiar: it corresponds to the game with specific numbers that we used before, and the equilibrium path is the red one, the one we found then. In equilibrium in this case, what we observe is the incumbent fundraising and the challenger not. Off the equilibrium path, the counterfactual part of sub-case 1a that we don't see, is that if the incumbent had not raised funds, the challenger would have; but we're never going to see that happen. So for sub-case 1a, here is our equilibrium path, and I'll also circle the equilibrium-path portion of the equilibrium: the incumbent raising funds and the challenger not. The prediction is a one-sided campaign with the incumbent winning. It's the outcome that motivated this puzzle to begin with: the incumbent raising money like crazy, walking away with the election, and the rest of us bystanders wondering whether all that fundraising was really necessary.

Now let's think about the equilibrium path and predicted outcome in the other cases; first the blue case, sub-case 1b. Something you might notice in comparing 1a and 1b is that the challenger's equilibrium strategy is the same in both. See, I didn't even put blue here: this action was optimal for all parameter values, and the blue and red choices are identical for the challenger. What's different is the equilibrium path. The challenger is doing the same thing, but when we move to sub-case 1b, because we have a different kind of incumbent, we see different behavior on the part of the challenger. That's an important point: the challenger's strategy is the same; which part of the strategy we actually see depends on what the incumbent does first. So the equilibrium path here is the incumbent not raising funds and the challenger raising funds. What's the headline? "Challenger deposes tired incumbent." Not the catchiest headline; I guess I made a good choice not to go into journalism. But that's what we'd see. In this case, even though we've got the same challenger we had in sub-case 1a, we're
seeing the challenger act differently, not because her preferences are different but because her strategic environment is different: the preferences of her opponent differ.

The question is: when I ask on a homework or an exam what the equilibrium path is, do I want you to write it out like that? Good question. Yes, I want you to write out the equilibrium path in English. It's getting late and I'm being a little silly with my "vigorous challenger deposes tired incumbent," but what I do want (and if doing it pushes you to be silly, that's okay) is for you to think about what the equilibrium path would look like in reality. Get a clear picture in your mind: if the challenger and the incumbent were playing this pair of strategies, what would I observe? What would be written about in the newspaper? What would the world look like? This final part, looking for the equilibrium path and thinking about the predicted outcome, is the step where we translate our analysis back into ordinary language. We transferred the problem into game theory, we analyzed it, we thought about what could happen; now let's think about what that teaches us about the real world.

Final case: the one where the incumbent does not fundraise and the challenger does not fundraise under either circumstance. This is the case where the challenger's strategy is different from the other two cases. The thing I emphasized about sub-case 1b was that the challenger's strategy didn't change there, yet the equilibrium path changed anyway; here the challenger's strategy itself is different, and you might also observe that the incumbent's equilibrium strategy is the same as in sub-case 1b. So the equilibrium path here is the third possible thing that can happen. What does it look like in the world? Not like much, right? Compared with the others, this is a very low-profile election, and lots and lots of elections fit this profile: we just don't see much campaigning on either side, nothing much happens, and one person wins because of something about her, in this case incumbency status. Think of local elections, or of groups and clubs that elect officers: it's often the case that people don't value being president of the marching band enough to campaign, and elections involving non-politicians often have this form. Indeed, we might be surprised to find campaigning going on at all: in a political science department, you'd be surprised if somebody actually campaigned to be elected chair, because in general we think it's not that great a job to begin with, so why go to the extra work of campaigning for it?

That's a good question; let me elaborate on it a little. First, she's wondering whether, in writing out the equilibrium path, we can just say something like: in this case the incumbent does not raise funds and the challenger does not raise funds. Let me emphasize: that is enough for the equilibrium path, because the equilibrium path only asks for the part of each player's strategy that we see played. Technically speaking, that is the equilibrium path. But on your homework, if I ask something like "what do you expect to happen," then what I'm wanting you to
do is elaborate a little more: as Brandon was suggesting, think about what it would actually mean. So that's my code. "Equilibrium path" has a formulaic answer, the part of each player's equilibrium strategy that we expect to see played; "what do you expect to happen" is your cue to get out of the lingo of game theory and start interpreting. Any other questions around here? Okay.

So now let's step back. What we've done today is use variable payoffs to represent, as I said, a huge family of games, an infinite number of games that are the same in players, strategies, and outcomes but differ in payoffs. For this family we've now solved for the equilibria of the whole infinite set, and they fall into what's basically three cases; the way I wrote it here, two cases that depend on the challenger's payoffs, one of which divides into sub-cases. For all the infinitely many parameter values this game can represent, there are always just three possible things that can happen. That's reassuring, and what the analysis has done is tell us where the boundaries between the cases are. What the cases are is something that comes out of the solution of the game: how do we know that the difference between case 1 and case 2 turns on the relationship between V_C and C_C? We found it out through the rollback process, when we reached a place where we couldn't solve a node without dividing into cases. The payoff from solving a family of games with variable payoffs is precisely an understanding of what the relevant cases are. Without game theory, we might look at this scenario of incumbents and challengers raising funds or not and say: what happens will depend on how both players feel about winning the election and about fundraising. Common sense gets us that far, maybe even far enough to see that the relative valuations matter. But if we want a precise sense of when to expect the one-sided campaign with the incumbent winning, when to expect challengers overthrowing incumbents, and when to expect the low-profile elections, solving the family of games by rollback is what gives it to us.

Another issue I want to emphasize: there are three equilibria on the board, but each individual game has only a single equilibrium. Each individual game corresponds to specific values of V_C, C_C, V_I, and C_I, and each specific game falls into only one of these cases. So the family of games has several equilibria, but each member of the family has exactly one. You might be nonplussed by that remark now, but after the midterm we will have to deal with situations where a single game can have more than one equilibrium; that's not happening here. Another way to think about it: if we know the specific values of V_I, C_I, C_C, and V_C, then as long as we're not in a knife-edge case where two of them are equal, we know what we predict will happen; we have exactly one prediction. Once numbers are attached to the variables, we know which of these cases occurs. That won't be true with simultaneous games, but that's just looking ahead.
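Since we'll keep coming back to these, here's the whole solved family in one compact table; the shorthand (RF = raise funds, N = not, with the challenger's pair listing her node-2 action first) is mine:

    \[
    \begin{array}{lll}
    \textbf{Case} & \textbf{Rollback equilibrium} & \textbf{Equilibrium path} \\
    \text{1a: } V_C > C_C,\ V_I > C_I & (\mathrm{RF};\ (\mathrm{N},\mathrm{RF})) & \text{one-sided campaign; incumbent wins} \\
    \text{1b: } V_C > C_C,\ V_I < C_I & (\mathrm{N};\ (\mathrm{N},\mathrm{RF})) & \text{challenger campaigns and wins} \\
    \text{2: } V_C < C_C & (\mathrm{N};\ (\mathrm{N},\mathrm{N})) & \text{nobody campaigns; incumbent wins} \\
    \end{array}
    \]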
There are a couple more things I want to say about this, but what I think I'll actually do is stop working now: you've seen me do enough that you can get right to work on the problem set. I've shown you enough that you could do the whole problem set now if you wanted; you've got a good weekend coming up and you're equipped for it. Even if you aren't inclined to finish your homework a week early, I'd recommend trying something with the variable payoffs before Thursday, because just trying will let you come back on Thursday and throw questions at me if it looks a little harder when you do it by yourself. So we'll do that on Thursday, and then we'll get into evaluating outcomes.
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_6_UCLA.txt
Okay, I've got a couple more points I want to make about using variable payoffs. One thing I need you to be able to do for your homework this week is to set games up in a middle ground between the two ways we've done it. We first did the fundraising game with all the payoffs expressed as specific numbers; then on Tuesday we did it with all the payoffs expressed as variables; and sometimes you'll want a mix. Sometimes it makes sense to have some payoffs be specific numbers, either because they fall naturally out of a situation or because it's an exam and you're given some of the numbers (those are the two broad sets of reasons). So I want to do one example where some payoffs are numbers and some are left as variables, and give you a sense of the kinds of questions we can ask with that mixture.

Same players, same story as before: incumbent and challenger, raise funds or not. This time, as on Tuesday, I'll let the value of winning office be represented by a variable, and I'll just use V: V is the value of being elected, for both the incumbent and the challenger. If I'm the incumbent and I get elected, I get V and you get zero; if you, the challenger, get elected, you get V and I get zero. We're assuming the same value of office for both, and we'll assume they have the same cost of fundraising, C, as well. For this example I'm going to put a number on V: I'll let V be 5, but I'll leave C as a variable. That's enough to do the payoffs, with the same assumptions about outcomes as all along: the incumbent wins here and here, the challenger wins there, the incumbent wins there. What's the incumbent's payoff here? 5 minus C. The challenger's payoff? Negative C, thank you, good job, guys. Over here: the incumbent's payoff is 5 minus C, same as it's been; the challenger's is 0. The incumbent's payoff here is 0, the challenger's 5 minus C; the incumbent's 5, the challenger's 0. I know that when it gets this simple you can make a mistake just by going too fast.

So these are the payoffs: we've got a game with variables in some of them, and just as we found on Tuesday, solving it will produce cases. Let's start solving. If we get to node 2 (same numbering as before; I'll put my numbers back in just to be organized), the challenger compares negative C to zero, decides zero is better than a negative number, and does not raise funds. The strategic equivalent is the payoff pair associated with the optimal choice: the challenger's optimal choice is not to raise funds, so from this node the strategic equivalent is 5 minus C, 0. Over here we look at the challenger's choice, and here we have cases: what the challenger does depends on whether C is greater than or less than 5. So there are two cases, which I'll abbreviate a little. Case 1 is where 5 is greater than C. In that case 5 minus C is greater than zero, so the strategic equivalent here is 0, 5 minus C: in the case where 5 is greater than C, the challenger is
going to raise funds, and here's my strategic equivalent, as before. I'll switch colors for the other case. Case 2 is where 5 is less than C. In that case 5 minus C is a negative number, the challenger would rather have the zero, and the strategic equivalent is 5, 0. So now we've got the two strategic equivalents. I'm writing this in a more compressed way than on Tuesday, but I very much hope you're seeing that it's the same logic: when we get to a node where the choice depends on parameter values, we divide that node into cases, and then we have to solve whatever part of the tree sits above that node separately for case 1 and case 2. There's not much above it here, so that's not too demanding.

Up here in case 1, reminding ourselves, the strategic equivalent facing the incumbent is 5 minus C over here and 0 over here, because we're in case 1. This is a little simpler than last time: we know 5 is greater than C, so we know 5 minus C is a number greater than zero, and the incumbent raises funds. We're not done, because we still have to take care of case 2, 5 less than C. There we're comparing 5 minus C to 5, and no matter what, as long as C is a positive number, we'd rather have 5 by itself than 5 with something taken away; so in case 2 the incumbent's optimal choice at the first node is not to raise funds.

So in this overall game we have the rollback equilibria. For case 1, when 5 is greater than C (just reminding myself what case 1 is): the incumbent raises funds, and the challenger does not raise funds if the incumbent does, but raises funds if not. Anybody not with me here? And the rollback equilibrium for the other case, case 2, where 5 is less than C, is that the incumbent does not fundraise, and the challenger does not fundraise if the incumbent does, and does not if not, either. What's the outcome in case 2? Who wins? The incumbent. And in case 1, which corresponds to the case we started with, the incumbent also wins.

Another point I want to amplify: somebody asked last time about the case where 5 is exactly equal to C, and I said there's not much we can do. You are going to encounter this kind of case on your homework, so what do I want you to say? Something that would be perfectly fine: "when V equals C, we can't tell; we don't know." I don't want you to make up some complicated story about what they'll do when V equals C: there, the rollback process is not giving us a single optimal choice. So for this knife-edge case we're just going to be satisfied, right now, with saying we can't tell. Later in the class we'll still not be able to tell what happens, although we'll go to a lot more work to realize that we can't; for now we'll stick with the punchline.

So: we got two cases here, and three cases on Tuesday. Why? Yes, we gave V a number; that is one thing we did. But there's another part to it: there are numbers we could give to the value of being elected that would still leave us with three cases.
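While you think about that, here's the rollback we just did, collected into one sketch; the function name and the strategy format are my own:

    # Both candidates value office at 5 and share fundraising cost c (c > 0).
    def rollback_mixed(c):
        if c == 5:
            raise ValueError("knife-edge case c == 5: we can't tell")
        node2 = "not"                           # -c < 0, so never fundraise here
        if c < 5:                               # case 1: the challenger would enter
            return ("raise", (node2, "raise"))  # incumbent: 5 - c > 0 beats 0
        return ("not", (node2, "not"))          # case 2: 5 beats 5 - c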
Yes, and that hits it on the head. Paul's point is that in this scenario I snuck in an important assumption: that the incumbent and the challenger have the same values for winning and for fundraising. What's missing from these cases is the one where the challenger cares more about winning than about fundraising but the incumbent doesn't; that was case 1b in Tuesday's notes. So rather than just cutting to the chase, let me quickly compare our cases here with the details of what we found on Tuesday. I'm belaboring this because, in my experience, this is where people get insecure on Sunday night, when they're working on the homework and aren't sure how many cases they should have; I'm hoping that seeing the logic worked through helps you paddle a little more independently on your own.

Some jargon here (part of learning game theory is learning the jargon that goes with it): the fully parameterized model had four parameters. "Parameter," you'll recall, is just a synonym for "variable": any time we represent a payoff by a variable, we speak of it as a parameter of the family of games, the thing that varies across the members of the family represented by the tree. The fully parameterized model had four parameters, V_I, V_C, C_I, and C_C, and our cases were: case 1a, where the challenger values office more than she dislikes fundraising, and the same for the incumbent; in the most terse abbreviation, the rollback equilibrium is raise funds; not if the incumbent does, yes if the incumbent does not. Case 1b, where V_C was greater than C_C but V_I was less than C_I; there the rollback equilibrium has the incumbent not raising funds. And finally case 2, where V_C is less than C_C; in case 2 it actually didn't matter whether the incumbent valued office more than fundraising or not. When you've got a challenger who doesn't care, the rollback equilibrium is that nobody ever raises funds: the incumbent does not, and the challenger's strategy is not, not.

So in this game, by saying that the V's are the same and the C's are the same, we didn't allow for that asymmetry, and we got rid of one of our possibilities. The interesting thing is that we got rid of the only possibility of the challenger winning. By making the symmetry assumption we kept the case where the incumbent wins and raises funds, and the case where the incumbent wins and nobody raises funds, but we lost the case where the challenger wins, the burnt-out incumbent I was describing on Tuesday.

Where you might find yourself using this mix of numbers and variables is on this week's homework, where I give you some numbers, or refer you to a scenario with specific payoff numbers, and then ask you to let one part of the payoffs be a variable. That's a nice way to frame a problem, because then you can ask questions like: how big does the cost of fundraising have to be before we see fundraising disappear? (The answer in this case would be 5.) When you see questions like "how does the outcome of the game depend on the value of being elected, or on the cost of fundraising," what that's asking you to do is treat that part of the payoffs as a variable. So something you could say here is that what we expect to happen in terms of fundraising depends on how high the cost of fundraising is.
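If you wanted to see that dependence directly, you could sweep the cost through some illustrative values (I'm redefining the sketch from above so this runs on its own):

    def rollback_mixed(c):
        # Compressed form of the earlier sketch: case 1 is c < 5, case 2 is c > 5.
        if c == 5:
            raise ValueError("knife-edge case c == 5")
        return ("raise", ("not", "raise")) if c < 5 else ("not", ("not", "not"))

    for c in (1, 3, 4.5, 6, 8):
        print(c, rollback_mixed(c))   # fundraising disappears once c passes 5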
If the cost is less than 5, we expect the incumbent to fundraise; if it's greater than 5, we expect the incumbent not to. Some things don't depend on it: in equilibrium we never expect the challenger to raise funds, and we never expect the challenger to win. So given a specific set of assumptions about payoffs, we can answer questions about how the answer depends on some part of those payoffs. Whenever you're asked to analyze the effect of one part of the payoffs, or one part of the structure of the game, that's a tip that that part needs to be treated as a variable, and you need to figure out the cases involving it: where the boundaries are, how many cases there are, and how different they are. There are no hard and fast rules about how many cases you'll get, about whether the equilibrium strategies will differ for both players (as they did over here) or for one player but not the other (as here), or about whether every aspect of the situation differs across cases or important parts of it don't depend on the cost of fundraising at all. You might think at first that this is a boring example, because no matter what, the incumbent wins. But that's actually an interesting thing to know: if this assumption holds that all politicians are basically the same, with the same basic value for being elected and the same basic dislike of fundraising, then however obnoxious fundraising is, it's not going to affect who wins the election.

All right, one other thing I want to point out about the game. I'm going to bring the full game tree back down here; I'm always tempted to partly erase, but that's the wrong idea, since I can just write the whole thing again faster. Erasing my red and blue here... Okay, the fully parameterized game. I won't ask you to recite it, but audit me on this: make sure I get the payoffs correct. Do those look good? These are our correct payoffs, letting everything vary, without that sneaky assumption that the two politicians are the same. Now it's up there nice and clean and uncolored, and I'm going to color onto it right away the three equilibria we found for this family of games. Again, I'm emphasizing that each particular version of the game, corresponding to particular values of V_I, V_C, C_C, and C_I, has a single equilibrium, but the particular versions fall into these cases, each with its own type of equilibrium, so there's more than one equilibrium for the family as a whole. Within the family, the equilibrium paths we found were these. This was our case 1a, our initial case; all I'm marking here is the equilibrium path, emphasizing not the full equilibrium but what happens in equilibrium. In the case where both players value office more than they dislike fundraising, this is what we expect to happen. And in blue (I've lost my blue marker under the eraser so quickly; oh, bless your heart, thank you; not that I'm neurotic about my paths or anything, but it was blue on Tuesday, so it has to be blue today), the case where the incumbent was burnt out: the incumbent didn't like fundraising, didn't think it was worth it to win office, but the challenger did, and our equilibrium path was this one.
And then the third case, where nobody likes fundraising and this elected office really isn't that great a job anyway: this was our equilibrium path. So we had this possibility and this possibility, and I've put the tree back up to emphasize that three different terminal nodes are equilibrium outcomes for some parameter values. We can also look at it from the other point of view and notice that this node over here never is. We were about as general as we could be, allowing the value of office to be anything, the cost of fundraising to be anything, and allowing them to differ between incumbent and challenger, and there was nothing we could do to make this outcome an equilibrium path. That by itself tells us something important, in particular about the narrative that would go along with this path: vigorous competition from both the incumbent and the challenger. That's the high-school-civics portrait of elections, right? Everybody campaigns really hard and there's a serious contest. Well, no: in this game, that does not happen. One interpretation is that we shouldn't wonder why it doesn't happen; what we've been doing all along is giving ourselves an answer for why we don't see challengers raising funds and vigorously challenging incumbents.

Does that sound like a bit of an overstatement, though? I hope it does. It's true that there are many races, especially for the House of Representatives and for state legislative offices, where incumbents don't face very strong challenges; but it's an overstatement to say it never happens. It does sometimes happen, and sometimes the challenges are even so strong that they defeat the incumbent. So what I'm trying to do here is illustrate something I talked about on the very first day of class: the role game theory plays in social science. What I've been belaboring over the past couple of weeks is a theory of incumbency advantage and the role of fundraising, spelled out in a lot of detail. The detail was worth it, I think, because we got a very good understanding of what the theory predicts in different cases, and of some things the theory says should never occur. That by itself is very helpful. The fact that the theory says something should never occur which does occur sometimes (not the typical thing, but it happens) is a springboard for progress. So far we've got a logic that helps us understand part of what goes on in elections, and an understanding of its limits. If we want to go on and have a better theory, this is the accumulation of knowledge that's supposed to make social science better than everybody working by themselves on their own theory instead of building off each other; the reason we can accumulate here is that we've got a very focused sense of where this theory doesn't appropriately apply.

So: what are some things we could put in the model, some assumptions we could change, that might actually get this outcome to happen in equilibrium? Any thoughts? Yeah; one change, William's suggestion, is that fundraising is not just yes or no. That would help make this more realistic, and there are aspects of fundraising that
that change would help us with, but I actually don't think it would get us out of the problem of predicting that challengers never raise funds here. Nev? Okay, Nev says the incumbent might have a bad reputation, might be associated with a scandal, or with a party that's got a problem. I'm going to put that in a broader category: different assumptions about outcomes. In fact, if I take off my game theory hat and put on my American politics hat, what I would criticize this game for is going overboard on incumbency advantage. This might have been a little bit in Lillian's mind when she was thinking about fundraising as a continuous choice: incumbency advantage matters, it is important, but this idea that if the incumbent raises funds at all he's going to win with probability one is over the top. So I'm going to say: the incumbent might lose. And looking ahead to the thing we're mainly going to do next week, I really want to underscore the "might." One way to get this outcome to occur on the equilibrium path would be a scenario where the incumbent is in such bad shape that you just know he's going to lose; that by itself would definitely give the challenger an incentive to fundraise. But even the possibility that he might lose, if the probability of the incumbent losing is high enough, could by itself turn the game around, and next week the main thing we'll do is add that aspect to our games.

Any other thoughts on how to get challengers sometimes raising funds? Yeah, an overzealous challenger. What would that mean? They want to raise funds, yes. Okay, so just as I put Nev's idea in a broader category, I'll put Joel's in one too: a different set of assumptions about payoffs. Now you might say, well, I was being so general here, I had it all parameterized; but remember those assumptions I made. One way to think about an overzealous fundraiser is: oh, you guys think fundraising is a drag? I love fundraising, bring it on, let me do more. That's the cost turning into a benefit, which would mean allowing C to be negative, so we'd be subtracting a negative number here; that would also be enough to do it. And I don't think that's a fanciful example. My fundraising-bunny story might be a little bit silly, but more likely is a scenario I think we talked about last week: a challenger who's looking beyond just this election. Maybe I'll lose this time, but it's not just about this election: I'm going to demonstrate my ability as a candidate, demonstrate what kind of politician I am (a uniter, not a divider), and next time around, when there's an open seat, perhaps the party will stand behind me. I think that's actually a very important part of the scenario in reality.
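A quick check of how a negative cost changes things, with illustrative numbers of my own: if the challenger actually enjoys fundraising, so c_c < 0, the node-2 comparison flips, and the vigorous-campaign node is reached whenever v_i > c_i:

    v_i, c_i, v_c, c_c = 10, 2, 10, -1
    node2 = "raise" if -c_c > 0 else "not"        # "raise": fundraising is fun now
    node3 = "raise" if v_c - c_c > 0 else "not"   # "raise"
    node1 = "raise" if v_i - c_i > 0 else "not"   # "raise"
    print(node1, node2, node3)                    # both sides campaign on the path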
So this little exercise, what we've been doing here, really is what social scientists use game theory for. You start with a set of assumptions that more or less makes sense, that seems to describe a case which, if not ubiquitous, is pretty widespread; you put those assumptions into the game and figure out exactly what they imply. Often that gives you more predictions than you expect, but it will often also say that something shouldn't happen. When you get to the kind of situation where something never happens in the game but does happen in reality, people in the philosophy of science call that an anomaly: something your theory can't explain. And anomalies, if you take the short view, as a researcher trying to get an article published, can seem like a real pain; you don't want them to happen. But in the big view, anomalies are what we learn from. Anomalies tell us what's wrong with our theory, what's left out of our model, which of our initial assumptions, that sounded so good when we made them, might be inadequate. The response to anomalies is to expand or change your assumptions, and for most of this class we'll be thinking about ways to change assumptions about outcomes, about payoffs, about the nature of the interaction. We're not, in this class, going to implement the idea Lillian raised of choices that can take a large number of values; we call that a continuous choice. If you're interested, there are sections in the book you can read about it. It's a part of game theory that's not all that hard, but it is something you need calculus for. Most of you probably did take calculus, and if you did, you'll remember that a big part of a calculus class is finding an optimal value, the value of x that makes y highest or lowest; these continuous choices are that kind of calculus problem. We're not going to do it in this class, but if you're interested in game theory and haven't taken a calculus class, that's something you should do, because then you can approach this whole set of interesting problems.

All right, a little bit of cleanup here. I don't think I need to erase my tree, but I want to bring the colors back to the payoffs we were initially motivated by, the payoffs I started the game with at the very beginning: the 10 for winning office and not fundraising, 8 for winning office and fundraising, 3 for losing and not fundraising, 1 for losing and fundraising; this is just from week one. Putting those payoffs back in, we had 8, 1 here; 8, 3 here; 3, 8; and 10, 3. Do those payoffs look right to you? Three cases of somebody winning office while fundraising, who get the second-highest payoff of 8; three cases of people losing the election but not fundraising, two challengers and one incumbent, who get 3; one really horrible case for the challenger, losing the election after fundraising, a 1; and one really terrific case for the incumbent, winning the election without having to do any work, a 10. With those numbers we're in the red equilibrium case right here. And when we got these payoffs a week ago today (I actually think we first made the observation in the context of that left-middle-right game we were playing), we noted as a side point that this seemed like a mediocre result. What's annoying about the rollback equilibrium in this form of the game is that the outcome is inferior, in the sense that we can see a better world: a world no worse for the challenger, who gets a payoff of 3 either way, and better for the incumbent. Nobody's worse off in that scenario, somebody's better off, and we're not there. When we observe that kind of thing, it should bother us.
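For reference, here are those four terminal payoffs as (incumbent, challenger) pairs. I'm labeling the nodes a through d, reading left to right across the tree, to match the letters that get attached to them in a moment:

    outcomes = {
        "a": (8, 1),   # incumbent fundraises, challenger fundraises
        "b": (8, 3),   # incumbent fundraises, challenger does not (the equilibrium)
        "c": (3, 8),   # challenger fundraises and wins
        "d": (10, 3),  # nobody fundraises, incumbent wins
    }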
The sense in which this outcome is inferior is that the incumbent could be better off, could have a higher payoff, without the challenger being worse off. So what we're doing now is segueing into evaluating the outcome of this game, and in evaluating it I'm saying it's not very good, for that specific reason. This kind of thought experiment, evaluating a situation by asking whether we could make things better for some of the people in our group without hurting anybody, seems like a pretty innocuous thing to do, and when we use it we're drawing on ideas associated with the nineteenth-century social scientist Pareto. I guess sociology has claimed Pareto as one of the founders of its discipline, but his ideas are very influential in economics and political science as well, because this kind of thought experiment is a Pareto comparison. When I say the incumbent could be better off without the challenger being worse off, other ways to say exactly the same thing are that this equilibrium outcome is not Pareto efficient (I'll write it with a negative); or, and I'll write these over here, that the equilibrium outcome is Pareto dominated; or that there is a possible Pareto improvement. A whole set of terms that all involve the same idea: asking, can I improve things for at least one person without making things worse for anybody? If that's true, if you're in a situation where you could make things better for somebody without making them worse for any of the other players, then the situation you're starting in is bad. You could help somebody without hurting anybody else, and you haven't done it. That's what makes it bad.

One thing that's maybe a little confusing about the idea of Pareto efficiency is that when you first hear it, you can easily get the good and the bad mixed up. You might hear me say, look, we can make one person better off without making somebody else worse off, and it sounds like: look at all these opportunities we have, isn't that a good thing? That's not quite right. It means: look at all these opportunities we have, and we haven't taken them yet. That's the bad thing.

So here's an illustration, one that is unfortunately part of my life and might be part of yours too: driving in LA. It happens when I take one of my shortcuts, none of which are very short. Going from UCLA to where I live in south Santa Monica, I go west on Sunset and eventually turn south. At the end of the day that's not too bad: the westbound traffic on Sunset is not terrible, but the eastbound is really bad, all those people who work in Santa Monica and live all over the place heading the other way. So here's me, driving west; you guys are all the eastbound cars, and you're stuck, you're not going anywhere. At some point I'm going to need to go south, I'm going to need to make my left turn. And one thing that can happen is (Lillian, sorry, I'm going to pick on you here and make you the bad guy, though she's not a bad guy at all) that Lillian is sitting right here, blocking Bundy, which is where I want to turn south. Now, Lillian could just let me in; she's not going to be any worse off. What's your name? Darrin. Darrin is right here, blocking her way; she's not going any farther, and Darrin's way is blocked in
all of this too. So if she lets me turn, she's no worse off, and I'm way better off: I've made my turn, I'm heading south, I'm picking up my kids, it's all good. If she lets me drive through here, a Pareto improvement has been made. All of you are still stuck, and that's too bad; we'd like to make you better off, but there's no way to do it. What's bad is the situation beforehand, where I could be made better off without hurting any of these folks and it isn't happening: they're not letting me turn, they're just sitting right there on each other's bumpers. That is a Pareto inefficient situation: somebody could be made better off, and we're not doing the thing that would make them better off, even though it doesn't hurt anybody else.

We like these Pareto criteria in the world of game theory, and in any kind of political-economic analysis where we represent payoffs with numbers, because, as I've said a couple of times, we don't want to pretend the numbers tell us more than they do; in particular, we don't like comparing one person's utility to another's. If something hurts one person and helps another, it's often very hard to measure whether the increase in one person's utility is truly offset by the other's decrease. This is a problem we have in my household, where both parents actually have PhDs in economics, so we talk in these terms a lot. As far as I can tell, my husband's utility is very, very sensitive: every issue matters a lot to him. Economics actually has a word for somebody like that, a utility monster: he can't lose on anything. And I'm just not like that; there are actually a lot of things I'm indifferent between, and my utility is not measured in the same very, very fine units as my husband's. So in one sense, if you were really going to be utilitarian, the world would be a better place if he won on every issue. But guess what: I don't really like that. So I don't like those kinds of interpersonal comparisons (we're getting some laughter here; maybe other people have found themselves in similar situations). We try to stay away from comparisons of utility where you not only don't really know whether my husband is that much more sensitive than I am, you don't even know whether he's telling the truth: oh, come on, do you really care that much about raisins in the oatmeal?

If we want to avoid interpersonal comparisons, the Pareto criterion lets us do that. It's not saying anything about how much anybody is helped; it's just looking for cases where somebody could be made better off (a little better off, a lot better off, it doesn't matter for the Pareto criterion) without hurting anybody else. Since nobody else's utility is changing, we don't have to worry about weighing it. So the problem here, the general version of what might be disturbing about this outcome, is that it is Pareto inefficient. Here we are in the situation of the incumbent winning and fundraising; the incumbent would be better off, and the challenger would be no worse off, if fundraising were banned. The election would come out the same, and nobody would have to raise funds. Wouldn't that be good? From the incumbent's and the challenger's point of view, this Pareto inefficient outcome should be annoying, and one of the main uses of game theory is to identify situations that are going to systematically
give Pareto inefficient outcomes, so that we can think about ways out: ways to do something that would improve things for one or some of the people in the interaction without hurting anybody. Yeah? So Kiara's question is whether moving from the 10, 3 node to the 8, 3 node would be worse. Yes, it would. So what I'm going to do now is expand on that question by writing a few more things. Let's put letters on these terminal nodes, node a, node b, node c, node d; it'll be easier to talk about them.

Let me repeat Elaine's observation, which is very good: in one sense the 10, 3 outcome, outcome d, is possible; it's better, it's something that could happen. But if you think about people being strategic, it's not really reachable. Usually in Pareto analysis we take behavior out of it: when we're evaluating whether an outcome is Pareto efficient or inefficient, we don't just think about what is possible along the equilibrium path, or even as part of an equilibrium strategy; we really think about all the possible outcomes in the game. That's an important point. On your homeworks you're going to be asked to decide whether things are Pareto inefficient or not (and I'm going to belabor this in a minute), and when you're asking that, you want to be sure to compare the outcome you're interested in, usually the equilibrium outcome, to all the other things that could happen, even if those things aren't anybody's optimal choices. Another way to approach it: yes, if everyone is being strategic and rational, we'll never get to this Pareto dominating outcome, and we might think, well, maybe they shouldn't be so strategic; they'd be better off if they weren't. That's another way to see why we include this outcome even though we think we'd never go there (sure, with these payoffs, we don't think we'd ever reach any of the other nodes).

Okay, so wrapping our brains around the lingo here, let's ask first: is outcome b Pareto efficient? I've actually already said it's not, and by implication I think I've said what it means to be Pareto efficient, but I've mostly talked about what it means not to be. So over here let me write down the definition of Pareto efficiency. An outcome is Pareto efficient if there is no other outcome in the game for which one player's payoff is higher and no other player's is lower. The reason I first defined Pareto efficiency in terms of the negative is that it's actually easier to show that something is not Pareto efficient than to show that it is. To show that it's not, all I had to do was find one other outcome somewhere in the game (it just had to be in the game; it didn't have to be along any kind of equilibrium path) where one player's payoff was higher and the other player's payoff was no worse. To find out whether an outcome is Pareto efficient, we have to do all the possible comparisons and conclude that no, there is no other outcome for which that's true. That's why we need such a big vocabulary of Pareto words: Pareto efficiency is the main thing we want to get to, but because checking whether something is Pareto efficient can involve making a lot of complicated comparisons, we have a couple of other phrases that help us keep
Okay, so here's the shorter definition to go with the Pareto efficiency definition: outcome X Pareto dominates outcome Y if at least one player's payoff in X is higher than in Y, and no player's payoff in X is lower. So in order to say that we're at a place that's Pareto efficient — in order to say that we're at a pretty good place, X — we have to compare that outcome to all the other ones and see if any of them Pareto dominates it. If we get to the end, having checked whether any of them are Pareto dominating, and none of them are, then we can say: yes, it truly is Pareto efficient. Let me add a couple of other forms of the phrase down here. The synonym for "Pareto dominates" is that X is a Pareto improvement over Y. So, two phrases — X Pareto dominates Y, or X is a Pareto improvement over Y — and these are phrases that allow us to compare two specific alternatives. Again, let me emphasize: these ideas allow us to compare one outcome with another. "Pareto efficient" — and the other term you'll hear that really is synonymous, "Pareto optimal" — these allow us to evaluate one outcome. The way we evaluate one outcome is that we pick the one we're interested in, and then we use the Pareto dominance / Pareto improvement idea to compare it to the other outcomes. Evaluating one outcome; comparing two outcomes. Going back to these definitions: in the Pareto improvement / Pareto dominance relationship, you look at the payoffs associated with each outcome to see if one of them is higher for one player and not lower for the others. Once you've done that, putting the Pareto dominance / Pareto improvement idea into this definition, we can compress it: an outcome is Pareto efficient if there is no possible Pareto improvement — that would be one way to restate the whole definition. Another way to say exactly the same thing: there is no outcome that Pareto dominates it. Okay, so the way I got into this business of writing out all the definitions and the close synonyms you'll hear was in response to Chiara's question. What are we asking when we ask whether outcome B here is Pareto efficient? When I say it's not, the "why" of my answer is: because D is a Pareto improvement over it, or because D Pareto dominates it. How do I know it's a Pareto improvement? What level of explanation would I like to see you use if I were asking you about Pareto efficiency? I'd like you to say what the Pareto improvement is. So let's say that: in D, the incumbent has a higher payoff and the Challenger has the same. Now, what I think Chiara might have been wondering about — I'm not sure — is: have we said anything about whether outcome D is Pareto efficient or not? That was what you were wondering, right? And the answer is, we have not, and that's one of the slippery things. We've said that D is a Pareto improvement over B, and that's not enough for us to say that D is Pareto efficient. It's actually going to turn out that it is — and that was your insight too, that there wasn't any way to do better for one player and not worse for another here — but to do that part of the logic, we have to start with D and compare it to all the others. So the last thing we'll do in class today is that precise question. It's a different question: when we ask about one outcome being Pareto efficient, we're not necessarily getting the full answer about the others.
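To make the pairwise comparison concrete, here is a minimal Python sketch of the definition just stated — hypothetical code, not from the lecture; outcomes are written as (incumbent, challenger) payoff tuples:

```python
def pareto_dominates(x, y):
    """Outcome x Pareto dominates outcome y: at least one player's
    payoff in x is higher, and no player's payoff in x is lower."""
    no_one_worse = all(xi >= yi for xi, yi in zip(x, y))
    someone_better = any(xi > yi for xi, yi in zip(x, y))
    return no_one_worse and someone_better

# D = (10, 3) is a Pareto improvement over B = (8, 3):
print(pareto_dominates((10, 3), (8, 3)))  # True
# Neither C = (3, 8) nor D = (10, 3) dominates the other
# (one player is better off, the other worse -- a zero-sum conflict):
print(pareto_dominates((3, 8), (10, 3)))  # False
print(pareto_dominates((10, 3), (3, 8)))  # False
```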
So: is D Pareto efficient? Well, if it's Pareto efficient, what I have to be able to show you is that there's no Pareto improvement — I have to look at all the possible places there could be a Pareto improvement and show you that there's not one. There are three places I have to look. One: does A Pareto dominate D? Certainly not — A, the terrible outcome, is worse for both players. Two: does B Pareto dominate D? No; we just found out above that the incumbent is worse off in outcome B than in outcome D. Three: does C Pareto dominate D? No. C is an interesting outcome: it's one where the Challenger has a higher payoff and the incumbent has a lower payoff. But when one player has a higher payoff and the other has a lower payoff, neither can Pareto dominate the other — it means there is a zero-sum conflict there. Only after I've done this — and if it were a bigger tree with more nodes, I would have had to do it for every terminal node — once I've gone through all the other nodes in the game and said no, this is not a Pareto improvement, this is not a Pareto improvement, this is not either — only after doing this whole thing can I conclude yes. This was the intuition that Chiara had; this is the logic we need to say that yes, D is Pareto efficient. So the thing I want to leave you with is the juxtaposition: it took a lengthy set of comparisons to conclude that something was Pareto efficient, whereas to conclude that something was Pareto inefficient, all I had to do was find one comparison where one other outcome Pareto dominated it. We'll pick up the concept of Pareto efficiency in the context of a new game next week.
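Putting the two definitions together, here is a sketch of the full efficiency check the lecture just walked through, reusing pareto_dominates from the sketch above and assuming the four terminal payoffs implied in the discussion — A=(8,1), B=(8,3), C=(3,8), D=(10,3):

```python
def is_pareto_efficient(target, outcomes):
    """An outcome is Pareto efficient if no outcome anywhere in the
    game Pareto dominates it -- so compare against every outcome."""
    return not any(pareto_dominates(other, target) for other in outcomes)

nodes = {"A": (8, 1), "B": (8, 3), "C": (3, 8), "D": (10, 3)}
for name, payoff in nodes.items():
    print(name, is_pareto_efficient(payoff, nodes.values()))
# A: False (B dominates it), B: False (D dominates it), C: True, D: True.
# One dominating outcome is enough to call something inefficient, but
# calling it efficient requires checking against every other node.
```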
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_2_UCLA.txt
okay I have, um, yeah, three announcements to make. First of all, sections are going to meet this week, except for June's sections — June is out of the country until next week, I believe. If you're in his section you've already heard from him; his sections will not meet. Those sections are 1F, 1J, and 1L: the Friday 9 a.m. one, one of the 11 a.m. ones, and the 1 p.m. one on Tuesday. The TAs' names were not yet posted by the sections on the course web page; now they are. So if you're not sure which TA is yours, you just need to look to see what section number you're in, go to the course web page, and you'll see all the TA names. So, June's students, wait till next week; everybody else — students of Adam, Florie, and Emily — go to your sections this afternoon or tomorrow. The second announcement is some updates about enrollment — two pieces of information there, one expected, one a little bit new. As expected, I was able to let in the waitlist, so if you were on the waitlist as of nine o'clock this morning or so, you're in the class; you're fine. What I didn't expect is that in letting in the waitlist, I completely closed the class. And that is how it is — it's actually fine; letting in the waitlist has put us way over the limit for both the room and the number of TAs we have. It means that if you are not on the waiting list, your chance of getting in the class went from very low to zero. So if you're in that situation now, there's really no point in continuing to stay — go look for another class, and I hope to see you in some future Poli Sci 30. The last announcement concerns those of you who are enrolled in the AAP program — the Academic Advancement Program is, I think, what AAP stands for. Anyway, it's a tutoring program; I think if you're eligible for it, you know. The sign-ups will have started today, and they'll continue through Friday of third week. Let me put up the contact information for the AAP tutor. His name is Chris — Chris Jau. Chris has a class himself at this time, so that's why he's not coming to make the announcement, but he's got a sign-up sheet in Campbell Hall. I'll put his email up here, and his cell phone: six two six, five three three, two oh five eight.

Okay, so as I said on Tuesday, I'm just going to jump right in and set up a game and solve it for you. In doing so, I'm not going to expect you to necessarily follow every little step or see the logic of why I'm doing everything; but I think if I go through a complete example and then go back and say, okay, why did we do it this way and not that way, it'll just be a lot clearer. Rather than talking in the abstract, we'll have a concrete example to work from. So that is what I'm going to do, and in doing it, over here I'm going to put the Holy Trinity of game theory. Every time you set up a game — every time you take a situation in the world that you're wondering about and try to put it into game-theoretic form to analyze it — you are looking for three things, these three ingredients. First: what are the preferences? Preferences really have two components. When we say we're identifying preferences, we're really identifying who the players are — who are the different people with different goals who are each able to affect the outcome — and what they want. The second thing we want to identify, specifically, are the actions or the strategies. Those words are not the same thing, and in the course of today I hope you'll get to see the differences between them and the relationship that they have.
Right now, what we're wondering about is: what can each player do? What choices do they have? Notice there's an order here: first we decide who the players are, then for each player we think about what they want, and for each player, what they can do, what choices they can make. The third ingredient, which is often the most work in setting up a game, is to figure out what the outcomes are — what can happen in this situation, not just what we think is going to happen, but the other possible things that don't happen. Something that I hope you'll see very clearly this week and next is how important the things that don't occur are for our understanding of what really does occur. If we want to understand why sometimes there's a war, we have to think about what else could happen; if we want to understand why there's peace, we have to understand why sometimes there's war. Whatever we're seeing in the world, if we want to understand why it happens, we have to think about what else could happen. So the key with outcomes is all the possible things that can happen, not just what really does happen. The other possibilities, the roads not taken, the dogs that don't bark — those are going to loom large in our understanding of strategy and of what happens in strategic situations.

All right, still pretty abstract here; let's switch gears and start talking about a specific case with specific players, preferences, actions, and outcomes. So what we're going to do now is — did I leave point B off my outline? Yeah, I did. Well, I'll add it right now; I think there's space. I'll add it in a different color, and it's fine. Point B is this nice example — we'll do an asterisk here, I'll send that up skyward for a while. Point B is our example. This example is about incumbents — incumbent politicians — and challengers, the people who are running against the politicians already in office. What I'll say here is campaign strategy. Campaign strategy is pretty broad; really what I have in mind is fundraising strategy. How many people here have been involved in any kind of campaign? A good number. Those of you who were involved: was there more or less fundraising than you expected? Was it a bigger or a smaller deal? Way bigger, right? That's normally the experience people have when they first get involved in politics: they didn't believe how much time they were going to spend asking other people for money, and it's not something that most people like to do very much, including politicians. Even if you talk to very seasoned, very professional politicians — people who've been in the US Congress for years, been through lots and lots of campaigns — and ask them what they like and don't like about their job, well, they're politicians, they're pretty upbeat people, they'll have something positive to say about just about everything, but they are hard pressed to say that they like fundraising. It's a lot of work; it's tedious; it's thankless in many respects; but it's a huge, huge part of politics. Well, there are several puzzles about this, and one particular puzzle looms especially large in the context of congressional elections. So in this example — and I hope you see that this example is becoming more and more specific as I go; that's often going to be a feature of this class — the incumbents we're thinking of are members of Congress. It applies pretty well to members of state legislatures as well,
in particular, state legislatures like California's, where the legislature has a lot of power and it's a pretty good job to be a member of it. So: these incumbent legislators, and the challengers who run against a sitting incumbent. I'm really emphasizing that the story I'm telling here pertains to elections in which there is an incumbent — not to open seats. Campaign strategy and fundraising strategy in open seats is different; what we're going to look at here is just fundraising strategy for incumbents. And what I'm going to do is talk to you a little bit about a puzzle, and the puzzle is the fact — you guys just told me — that there's so much fundraising that goes on by incumbent members of Congress. They really spend a lot of time doing it, even outside of the campaign season: two or three weeknights every week, if you're a member of Congress, you're going to some kind of fundraising dinner, because that's what you need to do. And what I hope you're sort of wondering now is: do they really need to do that? In some of the other classes you've taken — if you've taken courses in American politics, and Congress especially — you might have heard about the incumbency advantage. The idea is that once these guys get into office, we usually think they've got an advantage over their challenger just by being a member of Congress: they get more media coverage, they get much more exposure to their constituents, and the fact that they were elected from the district maybe says something about their general fit to the district — that they are demographically and politically appropriate for it. So do they really need to raise all those funds, especially when they say they don't like doing it? What's up with that? Well, here's a case where — thinking back to what I was talking about at the end of class on Tuesday, economic versus psychological explanations — the economic, game-theoretic explanation of why incumbents are such insatiable fundraisers is going to be different from the psychological one. A psychologist might look at that situation and invoke ideas like habit: they start fundraising early on, they get in the groove, and they can't get out. Linked to that idea, you might have run across terms like "learned helplessness": you learn to do it one way and then you're helpless to switch. As I indicated on Tuesday, I think there's something to it — that might be the answer — but it's not the only possible one. What I want to show you today is that the mere existence of more fundraising than would seem to be necessary should not, by itself, be evidence for the psychological model. We don't have to immediately conclude that the only explanation for all the fundraising we see by incumbents who don't seem to need to do it is blinders, habit, the inability to see a better way. I'm not saying it's not that, but it doesn't have to be. The way I'm going to make that argument now is by making some specific assumptions on these three points — what are the preferences, what are the actions, and what are the outcomes — and in doing that, I'm going to set up a game. Something I want to point out right now is that in my little narrative right here, I've already, very informally, said some things about who the players are: the incumbent and the Challenger. What do they want? They want to win office — that's what I'm assuming, and I bet that's
what you guys are assuming too. What can they do? Well, they can raise funds or not. They could raise funds to a greater or lesser extent as well, but the way we're going to think about it here is yes/no. And what are the things that could happen? Well, one person could win, or the other person could. That's it in a nutshell. Whenever we talk about politics, we are making assumptions like that: assumptions about who's in the game, what they want, what they can do, and how the actions they take affect the outcome. We always make assumptions — I really want to be clear about that. What game theory forces us to do, though, is to be very conscious of our assumptions. We always make assumptions, but we don't always make a big deal out of it. By forcing us to make a big deal out of our assumptions, game theory keeps us honest about them. Game theory might make us feel embarrassed if we assume that the people who agree with me are smart and considerate and thinking about the good of society as a whole, while the people who are running against me are evil and stupid. That's not an assumption we necessarily want to make; it's an easy kind of assumption — I'm caricaturing it a bit — to make when we're not self-conscious about it. I'm emphasizing this point now because of some of the resistance people have to learning game theory — resistance to learning a style of analysis that isn't our natural way of thinking; again, remember what I was saying on Tuesday about natural versus deductive ways of thinking. You might find yourself saying, "all she does is make assumptions all the time." If you say that to yourself, here's my response — say back: but don't we all make assumptions all the time, and she's just telling us? That's what I hope I'm doing, and that's what I hope I'm teaching you to do. Not to make more assumptions than you otherwise would — actually, I think the more self-conscious you become about making assumptions, the less you'll do it, or the more careful you'll be about doing it. But again, in setting up the story, my story had a lot of assumptions in it, and what I'm going to try to do now is organize those assumptions and make them very clear. So let's get right to them. Actually, before I do assumptions, what I'm going to do is the game tree form, which is a way of helping us organize our assumptions. This scenario that I've been talking about, I'm going to argue, fits a sequential game, and everything we're going to do for the first two or three weeks is going to have a sequence to it: first one player does something, then the other player sees what they do and reacts to it. So this part of the class is going to be about strategy in the sense of anticipating how the other player is going to react to your own choice. When there's a sequence, the player that moves first should think not just about what they want to do — do I like this choice or that choice — but also about how the other player will react to it. To not think about the other player's reaction is to fail to be strategic. When there is a sequence to the real-world situation we're trying to understand, the way we depict strategy — the picture that will help us organize all of our assumptions and figure out what they imply — is a game tree. So what I'm going to do is just draw one.
The reason why this scenario has a sequence is that incumbents move first in this story. If you're an incumbent, you become an incumbent in early November of the election year. In most cases you don't know who your challenger — two years, or if you're a senator, six years in the future — is going to be. You can start doing a lot of your fundraising, and indeed incumbents do start doing a lot of their fundraising, before your challenger is even chosen. So incumbents make their decision about how much fundraising to do before the Challenger is even in the game. That means I'm going to set up a tree that looks like this — I'm going to draw the whole thing and then I'll talk about it. The incumbent has a choice: to raise funds or not. We're going to do colors on this: the green is the game tree, the red is commentary on the game tree. This is a decision node — nodes are these things that branches come out of. So the incumbent decides to raise funds or not; then the Challenger decides whether she is going to raise funds — I'm abbreviating here, RF, raise funds — and here, same thing. I'm about halfway to setting up a game tree here: 90% of the way in terms of the amount of board space I'm going to cover (paper space for you guys), but barely short of halfway in terms of the amount of work I need to do. But let's talk about what I've done so far. What I've done is represent each player with a decision node. The decision that happens first comes higher in the tree; the decision that comes second happens later. The incumbent has the choice of raising funds or not, and then the Challenger has that same choice. The fact that the Challenger has two decision nodes means that the Challenger gets to make her fundraising decision regardless of which choice the incumbent makes. So as the game gets played out, the Challenger is either going to make her fundraising decision knowing that the incumbent has raised funds, or she's going to make that same decision knowing that the incumbent has not. The Challenger won't end up at both of these places — just one — and where she ends up depends on the choice of the incumbent. We will later see games where the choice the first mover makes early on affects whether the second mover has a choice at all, and sometimes what kinds of choices they have; but in this game it does not. So again, let me emphasize: the Challenger has two nodes in the game, but the Challenger will only make one choice. And again, that's the idea that when we want to understand what happens, we have to think about what doesn't happen. If what does happen is that the incumbent raises funds, we have to think about what the Challenger would do if she didn't; and if it worked out the other way and the incumbent did not raise funds, we would still have to think about what would happen if the incumbent had gone the other way. So far, what we've got is a game tree with nodes that represent players and branches that represent the actions they can take. Throughout this class I'm going to write game trees top to bottom; that's just the easiest way for me to do it — how I learned to do it. The book does it from left to right. Same idea either way: it's the natural way you would read, and what is at the top of the diagram, or to the left in the book, is what happens first in the sequential game. You'll see that it's very important in many games to know who moves first and who moves second.
All right, so some of our assumptions, as I said, are actually in the game tree. Who the players are is in the game tree. Next Tuesday I'll be giving you a homework problem: it's going to be a little story — not about incumbents and challengers, but a political story — and you're going to have to figure out who the players are. One of the things you want to ask yourself when you've got your game tree maybe written out just to this extent, not finished yet, is: do I have all the players? If a player can make a choice that affects the outcome, they have to have at least one decision node. Some of you may find it helpful to compile a little checklist of things to ask yourself to know whether you set up a game right, and one thing that should be prominent on that list is: does every player have at least one decision node? If a player doesn't have a decision node and they can affect the outcome, you've left something out of the game. Conversely, if somebody's got a decision node and their choice doesn't really affect the outcome, that would be a problem too. All right, so now all I need to do is add what we call terminal nodes. The other phrase we'll use for what I place at the bottom of the tree — today they're going to be numbers; soon they'll be variables — is payoffs. What we want to do is add something at the bottom of each path in the tree. There are one, two, three, four possible paths this interaction could take: the incumbent could raise funds and the Challenger could; the incumbent could raise funds and the Challenger could not; the incumbent could not and the Challenger could; the incumbent could not and the Challenger could not. Four possible paths we could go down. What we want to get to is our assumptions about the preferences of each player over each of these four possible outcomes. So another checklist item you can remember: when you're thinking about all the possible things that could happen, the number of possibilities you're considering — the breadth of your analysis, if you will — is given by the number of terminal nodes, the number of different ways to get from the top of the tree to the bottom row; we've got four here, four possible outcomes (there's a small counting sketch below). The funny thing about outcomes in setting up a game is that figuring out our assumptions about them is the most work. It's where we really do have to think about counterfactuals, and think about them in a lot of detail. Again, this is something we do in ordinary life too — we do it sort of intuitively, but we usually don't do it completely, and we're often not really explicit, even with ourselves, about what we're assuming about the counterfactuals. What we observe, we can pay a lot of attention to; what we think might have happened is harder to think about, harder to communicate with somebody else about — and game theory can be very, very useful in communicating what your ideas about a counterfactual are versus somebody else's. So outcomes — thinking about the possible outcomes associated with each set of choices by the players — is where most of the hard thinking in setting up a game comes in. The funny thing about it is that the outcomes themselves never show up in the game. I'm going to make some notes about them here, but what we want to get to are numbers that indicate not the outcomes, not necessarily who wins, but how the players feel about it — do they like the outcome, or do they not?
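Since each path through this tree is just one action per player in move order, the terminal nodes can be enumerated mechanically. A hypothetical sketch, just to illustrate the counting (it works here because the Challenger has the same choices at both of her nodes; as noted above, later games won't always be like that):

```python
from itertools import product

choices = ["raise funds", "not raise funds"]

# One path per (incumbent's move, challenger's move) combination:
paths = list(product(choices, repeat=2))
for incumbent_move, challenger_move in paths:
    print("incumbent:", incumbent_move, "| challenger:", challenger_move)
print(len(paths), "terminal nodes")  # 4
```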
Okay, so that's where we're headed. Let's start thinking about what assumptions we might make about outcomes. Now, here's a place where I'm going to tell you a set of assumptions that I have. I think they're pretty realistic; you might not think they're all realistic, or that they all apply everywhere, and that's good — hold those thoughts. Today is not a good day for us to talk about whether these assumptions are the right ones or not; Tuesday will be. So while I'm doing this and you're thinking, "no, I don't think that's right," write it down and bring it up Tuesday. Right now it's Professor Bawn's game, and she's going to make her assumptions. All right, the assumptions that I have — I think I'm going to do them over here. These are my assumptions about outcomes. Assumption number one: if the incumbent raises funds, she wins — whether the challenger raises funds or not. I'm making this assumption based on what I said a few minutes ago, based on what people who've studied congressional elections for years think: that a well-funded incumbent, an incumbent who has enough money to mount a serious campaign, has a large advantage and is probably going to beat a well-funded or a poorly funded challenger. So — getting one more color up on the board; I'm writing this in blue because, as I said a minute ago, it's not really part of the game tree, but right now it's going to help us get to what is at the bottom of the game tree, which is the preferences over these different outcomes — if I'm the incumbent and I raise funds and you, the challenger, raise funds, I beat you; if I'm the incumbent and I raise funds and you don't, I still beat you. "I, comma, wins" in both places. If the incumbent does not (back to my abbreviating) raise funds, the Challenger wins — oh, excuse me, not necessarily; I wouldn't want to defend that assumption. If the incumbent does not raise funds and the Challenger does, then the Challenger wins. The third assumption in this category: if neither candidate raises funds, we're back to the incumbent winning. So — I said it's my game, I get to make my assumptions, but if I'm using this game to persuade you that I know something about congressional elections, I should persuade you that these assumptions are reasonable to make. The idea here is that an incumbent who doesn't raise funds — an incumbent who's not in a position to send out mailers, have people go door-to-door, put on TV ads, if it's going to be that kind of campaign — is going to be at a disadvantage if the Challenger is doing those things. If I'm an incumbent who has failed to raise money, I can't put my TV ads out there, and you, the Challenger, have got my two years of votes to go over and spin in the worst possible way and get them out there, in the mailers and on the airwaves, and that's going to be real good for you. A disadvantage that incumbents may have, even though we don't really see it a lot, is that incumbents have records to run on, and records are an easy thing for a challenger to use against an incumbent: choices that the incumbent could carefully explain to their constituents — why they voted one way, why they failed to show up for a vote, why their office works this way and not a different way — they won't be able to explain if they don't have the money to get their message out.
If the Challenger has money to get out the message about what's wrong with the way the incumbent has behaved, and the incumbent does not have the money to respond, this is a case where the challenger could overcome the advantages of name recognition, fit to the district, whatever talents the incumbent used to originally get elected. If neither candidate has money, though — neither has the funds to really mount a big campaign — then we're probably back in the situation of name recognition mattering; it's probably going to be a low-turnout vote, and the incumbency advantage will probably prevail. So let me put those outcomes in — again in blue, part of my assumptions, part of the process of setting up a game, even though they would not normally appear in the game tree itself. This path here, the one where the incumbent does not raise funds and the Challenger does, is the one where the Challenger wins. Over here — maybe another way to justify that assumption — this is another path where the two candidates are on sort of even ground: two candidates who maybe are not funded to the same degree, but who both have the funds to mount a campaign. And here again there's sort of even ground, with neither of them having any money. When the money situation is even, we think incumbency advantage is what's going to matter; so over here we've got the incumbent winning. Okay, let's go through our checklist. We've got sort of half of what we need. Who are the players? Check. What can each player do? We've got our assumptions about that — check: they can raise funds or not; this is just a restatement. What are all the possible things that can happen? Well, we've got them: the incumbent is going to win, or the Challenger is going to win. What is left is the preferences of the players. And you might be thinking that that's kind of a big term for me to be using, a big set of assumptions to make — and it would be, except that when we're analyzing a game, all we need to know is the preferences over the outcomes in the game. Not, in general, what these guys want — you know, peace on earth, a new car, also winning the election and no more fundraising — but just their preferences over these four possible outcomes. That makes it a little bit simpler; it makes it a more realistic thing that we can actually talk about. And we'll put those assumptions, I think, over on the other board — no, actually, I do have board space here. Is this good board space for everybody? Is everybody able to see this? Okay, good. All right, so I'm still in assumption land over here. I've got my assumptions about what the outcomes are — and again, let me emphasize what I mean by that: what I really want to know is what outcomes result from each combination of actions by the players. Check on that. Next is going to be assumptions about the preferences of candidate I — that's the incumbent, the Honorable Representative I. And what I'm going to do is represent the incumbent's preferences with numbers, in the most natural way to do it: a higher number means you like it better. This is the idea behind payoffs. When people use game theory to set up experiments in laboratories — and this is increasingly done in political science, economics, psychology, throughout social science — laboratory methods are kind of permeating out from psychology, where they initially were.
When people set up little lab versions of these strategic situations in a laboratory, the payoffs are often real money — occasionally they're M&Ms or something like that, but in every experiment that I've seen done in recent years, the payoff numbers are represented by money. Have any of you guys participated in any of the experiments at CASSEL? Do you even know what that is? I'm not sure exactly what CASSEL stands for — C-A-S-S-E-L, Social Science Experimental Laboratory; I don't know what the "CA" is. It's in the public policy building, a kind of famous laboratory set up to do game-theoretic experiments, and they do pay you — they're always looking for subjects, so if you find yourself in public policy, you might see if there's an experiment to sign up for and get some payoffs that way. But that's a digression. All I'm saying here is that when we analyze a game, we represent preferences with numbers: a high number is good, a low number is bad. So let's represent my assumptions about the incumbent's payoffs with some numbers. Again, I just have to represent assumptions about these four possible outcomes, and here's how they're going to go. These are the incumbent's preferences — and let me emphasize again: whenever we talk about preferences, we are always talking about the players separately. Always, always, always. Game theory has this kind of divide-and-conquer approach; that's how we understand a complicated situation — we break it down into components, and one way we break it down is to think about preferences as belonging to the individual, not to the players as a group. So, the incumbent's preferences over the following possible things. "I wins, I does not raise funds": that's the best. I get to be in office, I get to make the world a better place and give all those big speeches, and no more chicken dinners for me. That's going to be the biggest number here. Next: "I wins, I raises funds." Well, that's not so good — like I tell those political scientists who call me up and give me their surveys: I really don't like raising funds — but this payoff number, eight, is better than the next outcome, which is "Challenger wins, I does not raise funds." These two numbers are key in saying that I, being the incumbent, have a higher payoff for winning-and-fundraising than for losing-and-not-fundraising. "Challenger wins" means the incumbent loses, and the comparison says that if I have to raise funds to win, I'll do it — I'll bite the bullet. The worst outcome here is — gosh, what could be worse? — "Challenger wins, and I raised funds." Oh my god, hate that. So those are my assumptions about the preferences of the incumbent, and I'm going to assume the challenger's preferences are symmetric. From the challenger's point of view, the Challenger winning without fundraising is the best — the Challenger wins and does not raise funds, that's the best. Next, the Challenger wins with raising funds: that's eight. It's just symmetric; I'm replacing the role of the incumbent with the Challenger, because these are my assumptions about the challenger's preferences: the best case is I win without raising funds; next best, I win but have to raise funds; next best, my opponent wins — I lose, the incumbent wins — but at least I didn't raise funds, three; and the worst case would be the incumbent wins and I spent all that time fundraising. That's my lowest payoff.
Okay — so where do these numbers come from? I made them up. I made them up to reflect my assumptions about what these guys want, to reflect the story I was telling when I first developed this idea: that these guys both want to win the election, and that nobody likes fundraising. The assumption I didn't really make — what I was really not very clear about when I was just talking informally about the problem — was that I didn't commit myself to what I thought about the difference between these two intermediate outcomes. Do they prefer winning the election while raising funds — one thing they like plus one thing they don't — to losing the election without raising funds? I think this is a reasonable way to rank these two outcomes: people who are in politics, certainly at a certain level, have probably decided that winning elections is worth the trouble of raising funds. But the point I want to emphasize is that this was a part of the story that was very easy for me to glide over when I was talking in ordinary language, to not be clear about — and this aspect of my assumptions about preferences is going to turn out to be critical to understanding what's going to happen. So now I'm actually getting pretty close to being done setting up the game. I know what the outcomes are; they are in here in blue, to indicate they're not really part of the game. I set up those outcomes first because knowing what the outcomes are is necessary for knowing how many things I have to assign a payoff number to. So do this every time: when you get your homework and you're setting up the game, get yourself a list of the outcomes. While you're doing it, it's natural to think about what combinations of actions they would come from — but figure out what the set of outcomes is before you sit down and organize your thoughts about the preferences. You need to do the outcomes first because you need to know what set of things you need to assign numbers to. So now I've got my numbers; let's put them in the game. But first, let's take this question: "the Challenger winning and the incumbent..." — good point. That's right: that's not in there. That's very good, that's exactly right, and the parallel one is going to be in here; it was a good point for you to bring up. (Let me just remember that for the webcasting I'm supposed to repeat your comments, since I'm on the mic and you're not. So — what's your name? Brittany.) Brittany points out, thinking about where I'm going to put these numbers — these are numbers that are going to go with different outcomes — that there is no outcome in this tree where the Challenger wins and the incumbent raised funds. The only place where the Challenger wins is in the branch of the tree where the incumbent didn't fundraise. And that's actually right: if this had been a hard part of the preferences for me to think through and get my assumptions clear about, I could have just skipped it — I'm not going to need it. Now, it turns out that the parallel part of this is going to be in the tree, and an aspect of this set of assumptions that I like is the fact that I'm treating the incumbent and the Challenger the same, not presuming there's anything special about one or the other. So it wasn't too hard for me to put this extra preference in here, but you're right — I did a little more work than I needed to.
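For the record-keeping, here is one way these ranking assumptions could be written down — a hypothetical sketch, not the lecture's notation; the numbers are the ones from the board, and, per Brittany's and Veronica's points, some ranked outcomes never actually occur in the tree:

```python
# Each player's payoffs over (who wins, whether this player raised funds).
# Higher number = liked better; the numbers only encode the ranking.
incumbent_payoff = {
    ("I wins", "I no funds"): 10,  # win, and no more chicken dinners
    ("I wins", "I funds"):     8,  # win, but had to fundraise
    ("C wins", "I no funds"):  3,  # lose, at least kept my evenings
    ("C wins", "I funds"):     1,  # lose AND fundraised -- the worst
}
challenger_payoff = {              # symmetric assumptions
    ("C wins", "C no funds"): 10,
    ("C wins", "C funds"):     8,
    ("I wins", "C no funds"):  3,
    ("I wins", "C funds"):     1,
}
# Note: e.g. ("C wins", "I funds") and ("C wins", "C no funds") are ranked
# here but can never occur in this particular tree.
```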
Was there another one? Yes — you're... what's your name? Veronica. Veronica is also seeing another impossible case here: the Challenger winning without fundraising is not possible. That's exactly right. Okay — you guys are champing at the bit to put the numbers in, so let's put them in. What you'll see is that I've got all the assumptions I need — a little bit more than what I need for the game, and quite a bit less than what you might be thinking is going on in the real world. That's going to be a tension we'll have throughout the class: our games are much simpler than reality. That's how we understand reality — reality is such a hubbub of so many things going on that we have to focus our attention on some parts of it, and game theory just makes us conscious of which parts we're focusing on. So my main game has quite a narrow focus; I got outside that focus a little bit over here, really as part of the process of explaining the logic behind the assumptions. But now that I've got my numbers, let's put them in. And while I'm doing that, let me tip you off to something that is true about this class, this time, and this professor, which is: it's eight till twelve now, and my IQ starts to decay rapidly about fifty minutes into a class period. This is devastatingly embarrassing to me, but it can be fun for you, and good for you in terms of learning. So around 11:45, start being very sensitive to me putting numbers in the wrong places or saying things backwards. Keep me on a very short leash; make sure I put the numbers in the right places here. You'll have lots of rewarding chances to catch me doing something off. All right: I've got a payoff for each player for each possible outcome — and for a few other outcomes that actually aren't in the game — and what I need to do is put them at the terminal nodes. The convention — what everybody does, and you guys do it too, please — is to put the first mover's payoff first. Everybody does it this way. The incumbent has the first move, so I'm going to put the incumbent's payoff numbers first, the challenger's second; if there are three players, they just go in sequential order. You could do it the other way, just like you could drive on the other side of the road, but do it the way everybody else does and you'll be glad. So, what can happen here? The incumbent can raise funds, the Challenger can raise funds, and the incumbent wins. I'm switching back to green here, because the payoffs are part of the game tree — until you have payoffs, you haven't finished setting up the game. What is the incumbent's payoff from going down this path in the tree? Eight. This is the case where the incumbent wins — the little blue note, not really part of the game tree but part of our assumptions, tells us the incumbent wins with that combination of actions — and the incumbent had to raise funds. What's the challenger's payoff here? One. Poor challenger: did all that work, didn't get to win the election — terrible payoff. All right, what's the incumbent's payoff here? I'd say it's no different: eight. That's going to happen in many games — you might think, we're at a different node, so the incumbent's payoff might be different; but the incumbent doesn't care whether the Challenger raises funds or not. Now, you could make a game where they cared, but in this game: I'm the incumbent, that challenger's out there doing whatever
he's doing; as long as he doesn't beat me, I don't really care what he's doing. You might think that's unrealistic — you might think, if the incumbent is from another party, I don't want him marshaling support for that party, I don't want him gaining a good reputation and maybe beating me next time — but in this particular model, I'm assuming the incumbent only cares about their own fundraising and whether they win the election. A more realistic way to think about that would be to assume that the incumbent's feelings about their own fundraising and winning the election are just much, much more intense: maybe they care a little bit about the Challenger, but not enough to put in the game. What's the challenger's payoff here? Three. Think of three as the challenger's payoff for keeping their day job: I didn't win the election, but I didn't turn myself inside out with all that fundraising. It's a three; it's fine. All right, over here, what's the incumbent's payoff? Three — because here the roles are switched: now the incumbent is the one that didn't really work very hard and didn't win the election either. The challenger's payoff: eight. Very good. Over here, the incumbent's payoff: ten. Oh man, I'm the incumbent — this is the world I want to be in. And the challenger's payoff: three. So, just to re-emphasize the points that Brittany and Veronica made: this payoff never occurs — we never, in this game, see a challenger winning when the incumbent raised funds — so the worst thing that can happen in this game is just not going to happen to the incumbent. That seems kind of unfair; welcome to the world of politics. And the best thing that could happen never happens to the Challenger. The best possible outcome for the Challenger is a possibility in the game — we're not going to predict that it happens, at least not with these numbers, but it is allowed as a possibility — and the worst possible thing for the Challenger as well. Okay, so here's another thing people find odd when they first start doing game theory. One thing, which I talked about a few minutes ago, is that the outcome assumptions don't really appear in the game — yet that's where you have to do the most thinking about what your assumptions really are, how you think the world works. It's the knowledge of the world, the judgment, the creative thinking that you bring to game theory. Those are assumptions about outcomes, and they don't really appear in the game. At the beginning of the course I'll keep writing in some of the outcomes like this, and I encourage you to do it on your homework too — it's not wrong — but you'll very quickly find yourself, and you'll find the book, not putting the blue stuff in. When people say "write down a game tree," what they want you to write down is what's in green: decision nodes, at least one for each player in the game; actions, represented by branches coming out of each node, controlled by the player making the choice there; and payoffs at the terminal nodes. So that's one surprising thing. The other surprising thing is that with a lot of games, the work really goes into setting them up; once you set them up, solving them is not that hard. This game is not a hard game to solve.
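At this point the whole green part of the tree is specified, and as a side note, it can be written down as a nested structure — a hypothetical sketch, not the book's notation, with payoffs ordered (incumbent, challenger) per the first-mover-first convention:

```python
# Each decision node: (player, {action: subtree}); each terminal node: payoffs.
fundraising_game = ("Incumbent", {
    "RF":     ("Challenger", {"RF": (8, 1), "Not RF": (8, 3)}),
    "Not RF": ("Challenger", {"RF": (3, 8), "Not RF": (10, 3)}),
})
```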
I don't know if I'll have time to say everything there is to say about solving the game, but I can do the solution for you right now. The most important thing to remember about solving games is so important that — I can't erase that, that stuff's too important — I'll switch colors and put it over here, under "solving game trees," in a nice blue box: you always, always solve a game tree from the bottom to the top. You set up a game from top to bottom, or left to right — you set up a game in the order that things happen — but you solve the game backwards. You'll even hear people using that particular phrase, "solving it backwards." Dixit and Skeath, in your book, call the process "rolling back." The image they have is like a screen: I'm going to solve this game just by rolling up the branches — figuring out what would happen if we got to the very last set of nodes in the game, thinking about that, using what we learn about what would happen at the last set of nodes to figure out what will happen at the next-to-last set of nodes, and, in a more complicated game, just marching back up through. So, just as with setting up the game, there's this divide-and-conquer approach: take a complicated situation and break it down into components — preferences, actions, and outcomes. Solving the game has the same divide-it-up, go-step-by-step, one-node-at-a-time quality. That's going to be a feature of especially the early part of the course: none of the individual steps is that hard. That's something that I think sometimes sneaks past students — I'm talking away up here, and at some point you'll start seeing me ask you whether eight is a bigger number than three, and you're going to think, wow, I'm glad I did that math pretest for this. Every little step is really easy. The trick — and the power of game theory — comes from organizing all those steps: breaking a hard problem down into a medium-sized number of really easy problems. So that's what we're going to do. We are going to roll back the game; we're going to solve it from the bottom up; we're going to start with the things that could happen at the last stage of the game. So say we're here. Don't think about how we got here, or whether we'd get here — say we're here, and I'm the Challenger. Everything you need to know about what I want is captured by these two numbers. What am I going to do if we get to this stage? I'm not going to raise funds. If we get to this point, I have a choice between three and one — a choice between fighting a losing battle and keeping my life. I'm not going to raise funds. Now, watch how I'm doing this. There are some aspects of setting up games and solving them where everybody does it the same way, like putting the first mover's payoff first; there are others — they'll come up when we get to simultaneous games — where people do things differently. What I'm doing is trying to do it the way the book does, which is also the way I learned it: I'm highlighting what I think will happen. So I'm making this big, in a new color, and I'm highlighting the optimal choice. Optimal for whom? The Challenger — optimal for whoever controls the decision node. In any game, you solve a node — as part of the process of solving the game — by picking the branch that leads to the higher payoff for the player who controls that node. In this case, it's Not Raise Funds. I'm making a big deal of my style of highlighting because sometimes you will see people crossing out the path not taken; that's another way to do it, as long as you're consistent. What about over here? If we get to this point, I'm the Challenger — what am I going to do?
I'm going to raise funds. Again, we're looking at the Challenger, so we have to look at the Challenger's payoffs. I'm an ambitious politician: do I prefer winning an election by raising funds hard to my ordinary life as an intern in some other politician's office? You bet I do. That's what I choose. All right — the way you indicate the choice taken, or the choice not taken, is less important than how you take this information and transfer it to the next stage. The way you do that is by replacing the decision node with what's called its strategic equivalent. Let me just do it first, and then I'll talk about the logic of it. I'm going to replace the Challenger's decision nodes with the payoffs: (8, 3) here and (3, 8) here. The reason I'm doing this is that if the incumbent is strategic, what the incumbent is going to think is: if I raise funds — you don't like raising funds? believe me, I don't — it's the equivalent of getting a payoff of eight; if I don't, it's the equivalent of getting a payoff of three. This is what we mean by rolling back: with the strategic equivalent, we no longer need to look at anything below the first branch. If I am the incumbent, all I need to think about is: do I want a payoff of eight, or a payoff of three? I want eight. So what do I do? I raise funds. Okay — does one of you want to summarize the economic story here? Remember, I said the psychological explanation of why an incumbent would raise funds could be that they just do it out of habit — even though they don't like doing it, they're incapable of stopping. What's the alternative we've got here? Yes: for fear that, if the incumbent doesn't raise funds, the Challenger will win. That was — what's your name? Chiara — that was Chiara putting it in a nutshell, and that is exactly the game-theoretic, political-economic story. Let's put it here: why raise funds? Because of how the opponent would react — because of the counterfactual. It's not because we like raising funds: it's clear in these assumptions that, holding the election outcome constant, the incumbent's payoff is lower whenever they have to raise funds — lower when they win and lower when they lose. It's that they understand how their opponent would react. We never see the opponent's reaction, because the incumbent doesn't want it to happen; we never observe it, and the reason we never observe it is that the incumbent is strategic. Yes — what's your name? Elaine. (Probably more names than I can remember, but I might remember a few of them.) Elaine is wondering what the logic of solving backwards is, and that actually sets me up for another thing I want to say here — another phrase that will be helpful, a phrase I put on the syllabus for this part of the course. It's kind of an aphorism for what it means to be strategic: look forward, reason backward. The logic behind solving the game from the bottom up is a logic I bet most of you have heard from your parents at one time: think ahead; think about the consequences; if you say that to your brother, how is he going to react? If you do that, what is going to happen? That's part of what parents tell their children the world round, and part of what mentors tell their mentees in all sorts of situations. And that is the logic of being strategic.
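Rolling back — replacing each last decision node with its strategic equivalent until you reach the top — can be written as one recursive pass over the fundraising_game structure sketched above. Again a hypothetical sketch, not the book's code; player index 0 is the incumbent, 1 the challenger:

```python
PLAYER_INDEX = {"Incumbent": 0, "Challenger": 1}

def roll_back(node):
    """Solve a game tree bottom-up; return (payoffs, path of choices)."""
    if isinstance(node, tuple) and isinstance(node[1], dict):
        player, branches = node
        i = PLAYER_INDEX[player]
        best_action, best_payoffs, best_path = None, None, None
        for action, subtree in branches.items():
            payoffs, path = roll_back(subtree)  # strategic equivalent below
            if best_payoffs is None or payoffs[i] > best_payoffs[i]:
                best_action, best_payoffs, best_path = action, payoffs, path
        return best_payoffs, [(player, best_action)] + best_path
    return node, []  # terminal node: just the payoffs

print(roll_back(fundraising_game))
# ((8, 3), [('Incumbent', 'RF'), ('Challenger', 'Not RF')])
```

The recursion bottoming out at the terminal payoffs is the "reason backward" half; writing the tree down in the first place is the "look forward" half.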
But the way to think about what you should do when you're deciding whether to raise funds or not is not to just look at your choice. If we start at the top and the incumbent says, "well, look, here's my choice: I can raise funds or not; I don't like fundraising; forget about it" — that wouldn't be strategic. What the incumbent should do is think about the long-term consequences of what they're doing: what are those consequences, and how do they like them? That idea — thinking about all of the consequences and making your choice based on that, not just on the superficial, short-run aspects of the choice — would characterize rational decision-making even in contexts we wouldn't think of as strategic. What makes a situation strategic is that part of the consequences involve anticipating what another person is going to do. So another way to think about Elaine's question: consider the alternative. If I don't start at the bottom, I could start at the top — and the way I would go wrong by starting at the top is that I wouldn't be thinking about how my opponent would react. That wouldn't be being strategic. This phrase, "look forward, reason backward," was popularized in a book that came out probably in the late '80s — it's been out for a while now — called Thinking Strategically, and it's a good book; I recommend it to you. It's in the category of what I'd think of as a popular management book, written mainly for business people, for people thinking about professional decision-making — it's meant to be advice in that sense. That's not a style of book I'm particularly enamored with — I think most of them are pretty bad — but this is a pretty good one, and one of the authors of this book, Avinash Dixit, is also one of the authors of your textbook. So if you're interested in some background reading on game theory, and how it applies in a lot of situations that are political in one sense — it's about power, getting people to do things, and conflict being resolved, though not in the sense of winning elections or voting in legislatures; more in the sense of getting along in an office — that book would be something I'd recommend. It's not a hard read; it seems meant for people to read on airplanes when they're doing business travel. Okay. So: we've taken a situation, translated it into a game, solved the game, and done a little bit of interpreting of the solution. What we've done today is what I'm going to ask you to do on the homework problem I'll hand out on Tuesday. I haven't said all I have to say about this example, so we'll start on Tuesday by putting basically the same picture back on the board and talking about some of the other things we could have done differently, and why we did it this way. Over the weekend, you might want to read through all of chapter 3 — we're going to spend some more time on chapter 3 — just to get a sense of what's going on in these games.
Political_Science_30_Politics_and_Strategy_Lec_10_UCLA.txt
Okay, so today we are going to start talking about simultaneous games. Up to this point, all of the strategic situations we've been interested in have had a natural sequence: the real-world interactions we've been studying have been cases that evolve as first one person makes a choice and then the other person reacts to it. An important part of the way we studied sequential games was thinking about how the second mover would react to the first mover's choice, and then we took that further and thought about how the first mover would anticipate the second mover reacting. That, in a nutshell, is what we've been doing in this class. But there are lots of strategic situations where that can't happen. When decisions are truly simultaneous, when we're making our choices at the same time, I can't solve the tree backwards: if I'm one player, I can't think about how you're going to react to the choice I actually make, because you're going to make your decision at the same time as I make mine, or at any rate before you know what I did. One point I'll be emphasizing a lot in this early section is that while some of these games are truly simultaneous, into the category of simultaneous games we're also going to put situations where the players can't observe each other's choices. In the challenger-incumbent fundraising situation, if the challenger couldn't observe whether the incumbent had raised funds or not, that would change the situation. So that's where we're going. There's a lot more uncertainty in these kinds of strategic situations, and they're harder for players in the real world to deal with. Whether they're a harder part of game theory is less clear: some parts of simultaneous games seem more straightforward and intuitive than sequential games, and other parts are trickier.

I'm going to start talking about simultaneous games the way probably every professor who teaches them does, with the prisoner's dilemma, which is by a long shot game theory's greatest hit. Just out of curiosity, how many of you have heard about the prisoner's dilemma in some other course? Wow, really a lot. It's something that has bubbled out into popular culture, and I think with good reason; as we talk about it, I think you'll see it's a very fundamental parable we can learn a lot from. So, to refresh your memories (many of you know the story, and many of you who do may not remember all the details): there are two criminals, and they've been captured by the police. The police take them into separate rooms, question them separately, and offer each the same deal: you can help us by giving us evidence against your partner, and we'll go easier on you; or not, in which case we've got enough evidence to convict you of a lesser crime. Each prisoner is given the same set of options. What makes it strategic is that the value of each prisoner's evidence to the cops depends on what the other prisoner does. If one prisoner gives the cops evidence, that prisoner gets a good deal and his buddy gets a bad sentence. If they both give evidence, they get somewhat less punishment than they would if just one had ratted on the other. If they're both quiet, they get a very minor conviction. (This is the part where, in past years, I've asked the class to tell the prisoner's dilemma story themselves, and I'm not doing it this year because usually even very good students who've heard it recently have trouble remembering all the details.) That's the part that makes it a strategic situation: for each prisoner, how much punishment you get depends not only on your choice but on the choice the other prisoner makes.

What I'm going to do now is organize that information. In telling you the story, I told you what the strategies are: each player has the chance to cooperate with the cops or not (fink or not fink, as it's often put). The payoffs are the sentences the prisoners get, how much time they have to do in prison, and the payoffs depend on both choices. So we've got our holy trinity: strategies, payoffs, and outcomes. But we're going to organize that information differently than we did with sequential games. When we depicted sequential games, we used game trees; another word for that, which I didn't really emphasize, is the extensive form. For simultaneous games we're not going to use trees, we're going to use the normal form. The normal form is not a tree but a matrix of payoffs; not a matrix in the linear-algebra sense, so maybe "table" is a better word. So here's what I'll do: set up the normal form for the prisoner's dilemma as an example, and then we'll go through it more slowly. Prisoner 1 is choosing which row we're in; he's the row player. In sequential games we talked about our players as first mover, second mover, third mover if need be; in normal-form simultaneous games we talk about the row player and the column player. Prisoner 1 chooses whether we're in the row that corresponds to (let me do it in the same order I'll put on the website) confess, or deny the crime. I'm using the same words as Dixit and Skeath here: confessing is betraying your partner, giving evidence to the police; denying the crime is saying no, we were somewhere else, you've got the wrong guys, and sticking to that story. So prisoner 1 is the row player, and prisoner 2, guess what, is the column player: prisoner 2 decides which column we're in, the column where prisoner 2 confesses or the one where prisoner 2 denies the crime.

Now I'm going to put the payoffs in the cells, and pay attention to where I put them. In Dixit and Skeath they use color, and they use it effectively, so that the color of the row player's strategies matches the color of her payoffs in the matrix. A more common way of keeping the payoffs straight, which I think is useful visually because it draws your eye to the right place, is to put the row player's payoffs down in the bottom left corner (see how I've lined them up with the row player's strategies, at the bottom and on the left) and the column player's payoffs up in the upper right. Before I put them in, let me just note in green: this is the row player's payoff, and this will be the column player's.

So I need to add some numbers to depict how they feel about the various punishments, and I'm going to start with the situation that is best from prisoner 1's point of view. Hold on, I've done something different from my notes. Sorry about this, guys. I had this great idea of labeling things like the book, and I will come back to the book's labels in a minute, but right now I'm going to do this the way I would normally do it, the way you've probably seen the strategies labeled before. In your notes, go ahead and leave the confess and deny labels; they'll help you follow the book. But I'm going to put in the words I'm used to, because otherwise I'll be saying everything backwards and I don't want to mess you up. Denying the crime is cooperating. What makes this an issue, and what I think made the book do it a little differently, is that it does not mean cooperating with the police; it means cooperating with your partner. And confessing, here, we're going to call defecting. Same up top: confessing is defecting, denying is cooperating. If you've heard the prisoner's dilemma before, you've probably heard defect and cooperate as the names for the strategies.

So, back to where I left off, where I was getting the payoffs muddled in my own head. The best thing for prisoner 1 is for me to defect and for my partner to cooperate: for me to tell the cops what they want to hear and for my partner to stick to our original agreement; for me to rat on him and for him to remain true to me. If that's what happens, I get to go free and he gets a long sentence. I take advantage of the deal the cops gave me; he didn't, and the cops use the evidence I gave them against him. So the best payoff for me, in terms of the sentence I get, is zero: I go free. And it's the worst payoff for the person who cooperated, the one who did not betray the other player. Now, if the roles are reversed, the payoffs are reversed: I'm still the row player, my partner defects, he confesses to the cops, and he gets the zero payoff while I get the really low one. I cooperated with him, I remained true to our agreement, he didn't, and I get what's sometimes called the sucker payoff. Then there are the other two cells. One thing that can happen is we both confess to the cops. In both cases they use the evidence we gave against the other one: my evidence against you and your evidence against me. But they also take into account that we provided evidence, so we get lower sentences than we would have gotten if we hadn't betrayed our partners: payoffs of negative two and negative two. On the other hand, if we both stick to our original agreement, if we both deny any involvement and neither of us gives the cops any evidence, we get very minor sentences. They pick us up for running a red light or something like that, but they can't really convict us of the terrible crimes we committed.

Now, the interesting thing about the prisoner's dilemma: look at it from a real player's point of view. We're in separate rooms, us prisoners. I don't know what player 2 is going to do; I have to make my choice before I know what he's done, and I know he'll make his choice before he knows what I've done. So there's no rollback here, no sequence that can help us solve a tree backwards. But in this case I don't need one. If I'm a real player, this is the way I can think of it: either we're going to be in this column or in that column; either prisoner 2 defects or prisoner 2 cooperates. Say prisoner 2 defects, so we're in this column. Then I'd rather defect: I'd rather have negative two than negative five, so if we're in this column I'd rather be in the top row. The other thing that could happen is that player 2 chooses to cooperate, and if we're in that column, guess what: I'd still rather defect. So from the row player's point of view, I don't actually need to know what the other player is going to do. If he defects, I'm better off defecting; if he cooperates, I'm better off defecting. No matter what the other player does, I should choose to defect. Defect is what we're going to call a dominant strategy: a dominant strategy is one that gives me a higher payoff no matter what the other player does. Let me write that definition down in the verb form, "dominated": strategy A is dominated by strategy B if B's payoffs are higher no matter what the other player does. So if strategy A is dominated by strategy B, then strategy B is a dominant strategy; at least that's automatic in a case like this where we have just two strategies. When there are dominant strategies, it's not hard to solve simultaneous games: you don't need to know what the other player is doing, because whatever he does, you're going to make the same choice. So defect is a dominant strategy for the row player. Saying it the other way, defect dominates cooperate; or backwards, cooperate is dominated by defect. The dominance relationship implies that one strategy is better no matter what the other person does.

The row player has a dominant strategy, and so does the column player. That has to be true, since I set up the payoffs to be perfectly symmetric, but let's do the comparison anyway. From the column player's point of view: I get to choose which column we're in, or more accurately, my choices determine what column we're in. So I ask myself: suppose we're in the top row, is my payoff better from defecting or cooperating? I can't choose the row; I'm the column player, so given what row we're in, I just get to choose whether we're in this column or that column. From the column player's point of view, I'd rather have negative two than negative five, and if we're in the bottom row I'd rather have zero there than negative one. So defect is a dominant strategy for both players. When players have dominant strategies, it's pretty hard to argue that they wouldn't play them; that's why we're starting with the prisoner's dilemma. Another way to put it: there's a very strong incentive for both players to defect. So if we think about the strategic incentives, they predict that both players will defect, and this is the cell we'll end up in: given that defect is a dominant strategy for both players, the predicted outcome is defect, defect.

Now, some conventions for how we represent games in normal form. One convention, and I strongly recommend you use it, is putting the payoffs in the corners. Sometimes you'll see people do something like this instead: negative one, negative one in the middle of the cell. That's all right, it's not wrong, but when people use that convention you need to know that the row player's payoff comes first. What I like about the corner convention is that it's easy to remember that this payoff goes with the row player and that payoff goes with the column player; it's one less thing to remember if you set it up so the payoffs line up with the row player's strategies here and the column player's strategies there. So make it easy on yourself: put the row player's strategies at the bottom of the rows and her payoffs in the bottom left corner; line the column player's strategies up with the right side of each cell and put his payoffs in the upper right. And when that's not there to help you, just remember the convention: the row player comes first.

The reason the prisoner's dilemma is so famous is not just that it's an easy simultaneous game to solve; it's famous because it has this paradoxical aspect. The players in this game, as in sequential games, are trying to maximize their payoffs, and we predict they'll end up in that cell precisely because they're trying to maximize their payoffs. But what happens when they both try to maximize their payoffs? They get lower payoffs than they could both have had. The thing that attracts people's attention to the prisoner's dilemma is that this predicted outcome is Pareto inefficient, and Pareto inefficient in a pretty stark way: it's not just that one player could do better, it's that they could both do better. If they would both just stick to the promise they made when they decided to be partners in crime, they could both have had a higher payoff, and they don't do it; the logic that gets them not to do it is very strong.
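Since we'll be checking for dominant strategies constantly, here is the same argument run mechanically, as a Python sketch of my own (the helper names are not from the book). The payoff numbers 0, -1, -2, and -5 are the ones from the board.

```python
# The prisoner's dilemma in normal form: keys are (row strategy, column strategy),
# values are (row payoff, column payoff). Sentences are negative; 0 is going free.
pd = {
    ("defect",    "defect"):    (-2, -2),
    ("defect",    "cooperate"): ( 0, -5),
    ("cooperate", "defect"):    (-5,  0),
    ("cooperate", "cooperate"): (-1, -1),
}
moves = ["defect", "cooperate"]

def payoff(game, player, own, other):
    """Player 0 chooses the row, player 1 the column."""
    cell = (own, other) if player == 0 else (other, own)
    return game[cell][player]

def dominant_strategy(game, strategies, player):
    """Return a strategy that beats every alternative no matter what the
    other player does, or None if the player has no dominant strategy."""
    for s in strategies:
        if all(payoff(game, player, s, other) > payoff(game, player, alt, other)
               for alt in strategies if alt != s
               for other in strategies):
            return s
    return None

print(dominant_strategy(pd, moves, 0))  # defect
print(dominant_strategy(pd, moves, 1))  # defect
# The paradox in one spot: both play defect and get (-2, -2), even though
# (cooperate, cooperate) pays (-1, -1), better for both: Pareto inefficient.
```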
One way to think about prisoner's dilemmas, and why social scientists get so obsessed with them, is to contrast the prisoner's dilemma with Adam Smith's idea of how markets work. You probably know a little bit about Adam Smith; he coined the phrase "the invisible hand," and his idea is that in a market, if everything is working well, if people are able to choose among different sellers for the things they buy and among different employers for the services they sell, then individual incentives will be good for the group. Smith's famous phrase is that people will be led, as if by an invisible hand, to the choice that's good for society. If I'm producing something, bringing my vegetables to market, and they're full of bugs, and you guys don't like to eat bugs, you'll stop buying my vegetables, and I'm going to get an incentive to find better soil, use pesticides, wash the bugs off before I bring them to market, whatever. My individual incentive, the fact that I want to make a profit, will lead me to do something that's good for the whole group: you get the kind of vegetables you want. So there's this idea, going back to the late eighteenth century, that sometimes individual incentives can be good for the whole group, and sometimes that really does happen. I do not want you to think the message of game theory is that invisible hands never work. But a very important message, which I think the prisoner's dilemma really points out, is that invisible hands don't always work. This is the opposite of the invisible hand argument: a very stark case where, if each player makes the choice that is best for them as an individual, they end up at a collective outcome that is worse for both of them, worse for them as individuals. So there's the paradox.

All right, now I want to go back to the business where I had trouble labeling the strategies, and I guess I'm going to open that can of worms again. What I want to show you is what I was trying to accomplish the first time, and see if it helps you. As I said, cooperate and defect are the words people use in the prisoner's dilemma context ubiquitously. The most likely place you would have encountered the prisoner's dilemma is an international relations course; it's a huge part of international relations analysis, and they generally use these words. Dixit and Skeath use confess and deny, and I have never liked that choice of words, not only because it's non-standard but because I think they're being a little bit deliberately tricky. They picked words that do fit the specific prisoner's dilemma story, confess and deny, but they picked them so that the first letters are the same as cooperate and defect, reversed. Dixit and Skeath are doing that for a reason: they want you to stay on your toes, to not get used to always thinking that the word starting with D is the socially bad behavior. In this story, D can be good and C can be the socially bad behavior. As you saw, it even messed me up: when I had those words up there and went to put the payoffs where they belonged, I had a brain freeze. So I'd recommend, as you work through the early sections of chapter four, stay aware of these words. It's not that Dixit and Skeath are wrong to use non-standard words; they're being ambitious for you, they want you to be nimble. But don't let it trip you up.

So now I want to say a little bit more about what makes a game a prisoner's dilemma, and then go into other games. You can ask questions as we go, and if you don't have questions, let me ask you one. What do you think about that prediction? Is that what the prisoners would really do? Nathaniel says no, in real life they wouldn't do that; they'd cooperate, because A's brother is going to kill B for ratting. And I think that's right, actually. But what if A didn't have a brother? What if neither A nor B had any friends? Nathaniel says when A got out, he'd kill B himself. Okay, but you're both pushing forty, and even really ferocious guys get a little less fierce as they get older. What I think you're picking up on here is exactly right: the last thing we're going to do in this course is talk about what's called the shadow of the future. When people look at prisoner's dilemmas and wonder whether this really happens, I think the most compelling answer is that the prisoner's dilemma game doesn't seem to capture everything going on in a lot of situations: in general, the payoffs in the box might not be everything the players care about. One way, in game theory, that prisoner's dilemmas get solved, in the sense that we get to the good outcome, is by bringing in these future considerations. And that's not just game theory; I think it's a very important thing to recognize about real life. What keeps people on the straight and narrow, what keeps them cooperating with each other even when they have a strong incentive to defect (and it is a very strong incentive), is the idea that something in the future matters: that they will be punished later, whether in the sense that B is going to come beat me up, or just that I'll get a bad reputation. All of these aspects of society are really, really important. So that's one place we're going. That said, I do think there are lots of prisoner's dilemmas that don't get solved, and maybe I'll just get to one right now rather than say any more.

Here's one right here, one where there could be shadow-of-the-future considerations, but we're not going to think about them right now: the roommates' dilemma. There are two roommates, and here are their choices. I'm roommate A, the row player: I can leave a mess or I can clean up. My dear roommate, the column player, can also leave a mess or clean up. What I, the row player, really like is when my sweet roommate, bless her heart, cleans up, and I can leave a mess. Because I'm a very busy person: I have studying to do, I have phone calls to make, I have better things to do, but I sure do appreciate all the time you spend cleaning up our room. I love that; when I'm the row player, I get this nice big payoff of five. The column player is kind of in a snit about that outcome; she's not very happy with it. And guess what: if I go to all the trouble of cleaning up (I'm still the row player) and she just leaves a mess, then I'm the one in a snit, and why is she in such a good mood? What else can happen? We can both clean up, and that's pretty good: we have a nice clean apartment. Or we can both leave a mess, and I don't like that so much, although I can tolerate a mess better than I can tolerate that really, really annoyed feeling I get when I've cleaned up and my roommate has just left a mess.

This is still a prisoner's dilemma. How do we know? It's not about prisoners, and I don't have "cooperate" and "defect" on the board. Two things make it a PD (people will often abbreviate it that way): one, both players have a dominant strategy; two, both end up worse off when both play their dominant strategy than if both had done the opposite. Games with those features are prisoner's dilemmas. The dominant strategy is the strong, strong incentive, and when we both follow that strong incentive, we end up worse off than if we'd both done the other thing. So let's just double-check, mapping this into the prisoner's dilemma: cleaning up is the equivalent of cooperating, doing the thing that is good for the group, and leaving the mess is defecting. Is defecting a dominant strategy? Yes, it is. If my roommate leaves a mess, I'd rather leave a mess, because I find it so annoying to have done all that work when she didn't help at all. If my roommate cleans up, I'd rather leave a mess too, because I have other things I'd like to do with my life. So no matter which column we're in, the row player's payoff is higher from the mess strategy. Same story from the column player's point of view: no matter what I do, if I leave a mess, she'd rather leave a mess, because she doesn't want to be the sucker who does all the cleaning; and if I clean up, she'd rather leave a mess, because the apartment's already clean and there's nothing left for her to do. So we end up with this messy apartment, even though we'd both really feel better if there were clean dishes and not so many roaches running around the kitchen. Prisoners aside, I hope most of you aren't facing this kind of living situation, but probably at some time in your life you have or will. I've had this kind of roommate, and let me say, I've participated in this particular outcome. It's not that I'm by nature a slob, but on the other hand, I do like doing other things more than cleaning, and I don't like being the one taken for a sucker. So this is another case of a prisoner's dilemma.
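The same machinery (reusing the `payoff` and `dominant_strategy` helpers from the sketch above) verifies that the roommates' game is a PD. Only the payoff of five was stated out loud in lecture; the other numbers here are assumptions of mine that preserve the ordering of outcomes described on the board.

```python
# Roommates' dilemma: the 5 is from lecture; 3, 1, and -2 are assumed
# stand-ins that keep the story's ordering of outcomes.
roommates = {
    ("mess",  "mess"):  ( 1,  1),
    ("mess",  "clean"): ( 5, -2),
    ("clean", "mess"):  (-2,  5),
    ("clean", "clean"): ( 3,  3),
}
chores = ["mess", "clean"]

print(dominant_strategy(roommates, chores, 0))  # mess
print(dominant_strategy(roommates, chores, 1))  # mess
# Both leave a mess and get (1, 1) even though (3, 3) was available:
# dominant strategies plus a worse joint outcome, the two marks of a PD.
```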
So now I want to start changing the game. We're going to go from a prisoner's dilemma where both players have a dominant strategy, and over the course of the week we'll make more and more changes. (I'll get to your question in just a second.) When we think about prisoner's dilemmas and this really inferior outcome, one way of getting out of the outcome is to make future considerations matter more. Another way is just to change the scenario: we can ask ourselves, if players' preferences were a little bit different, would we be out of the prisoner's dilemma? We're going to see what differences make a difference and what don't.

Yes: there is a way of drawing this in the sequential format, but it's not very helpful. This is not on the test, not on the homework, and it's not really done much, but I don't want to be mysterious about it. You'd draw A choosing to clean or not, and then B choosing, and you'll see these circles drawn around nodes; there are some in Dixit and Skeath. What the circle is used for is to denote that B does not know which node she's at. What we'd be saying is: yes, A made a choice to clean or not, but B doesn't know what it is, so she can't look at the payoff at one node and see that it's different from the payoff at the other; she has to make the same choice at both nodes. There are a few cases where this is helpful, but it's mostly not. Depending on how time goes, I may also show you a way of representing sequential games in matrix form. How you draw it is a different question from what you're trying to represent, but almost all of the time, for simultaneous games the matrix is the easiest, most helpful way to do it, and for sequential games the trees are.

Chiara asks: what if there's more than two players? The matrix works best with two players. What you can do with three players (and if you really want to go to town, with more than three) is draw two matrices side by side. The way you'd talk about it is: A chooses the row, B chooses the column, and C chooses what's usually called the panel. I've occasionally seen people go to four, with row and column within the panels and row and column in the big picture; past four, people usually don't draw the pictures at all, they just start writing things as equations. But going from two to three isn't bad. A related question you might be wondering about is easier: what if there are more than two strategies? That's easy: however many strategies the row player has, we have that many rows, and however many the column player has, that many columns.

All right, so what I want to do now is change one payoff in the game, one person's payoffs. I'm going to change the column player's payoffs: I'm going to make this one a two. How is this new column player different from the old one? What's the interpretation? Speak louder. Right: no longer does the column player have a dominant strategy. That's what we had up before, why the other game was a prisoner's dilemma and this one is not: B does not have a dominant strategy. Now, depending on what row we're in, B's choice changes. In the top row, B, just as before, prefers leaving a mess to cleaning; but in the bottom row, B now actually prefers to clean. So B does not have a dominant strategy. How should we think about this? Maybe B is someone with a sense of shame: if my roommate has cleaned up, I feel embarrassed if I haven't cleaned up too. Or maybe B is just a neater person. Either way, the game is no longer a prisoner's dilemma, strictly speaking.

What do we think is going to happen? I'll just talk you through the reasoning. A still has a dominant strategy: no matter what B does, A still prefers to leave a mess. So we are going to be in that row. There is no way things can work out that would make A prefer to clean up; regardless of what B does, A is going to be happier with the mess. Now, B may be a more desirable roommate, I would say, after changing that one preference, but it's not going to change the outcome of the game. Yes, it's true that if A did clean up, B would prefer to clean up too, but guess what: A is not going to clean up. We're not going to end up in that cell. A will play her dominant strategy, and even though B doesn't have a dominant strategy, B will play the best response to A's dominant strategy. So what is B going to do? She's going to leave a mess. The reason she's going to leave a mess is that B lives with A and knows what she's like: A is not going to clean up no matter what. B knows her preferences. If A is going to leave a mess, B prefers to leave a mess; if we were in a situation where A would clean up, then B would prefer to clean too, but that's not going to happen in this game.

The point on the outline here is that I'm using this idea of dominant strategies to solve games. We're not going to be able to use dominant strategies to solve all games, but looking for dominant strategies is the quickest way to solve most simultaneous games, and over the course of this week I'll be showing you more and more ways to solve games.

Lillian is wondering why this would be a simultaneous game: they live in the same apartment, so how could they not see what the other one is doing? It's a good question. Actually, most of the time I think these things do happen in sequence: I leave my dishes in the sink, you respond to that, I respond to your response. If the players were seeing each other's messes, then we could really set it up as a tree. Two ways to think about it that might make it a simultaneous game: one is that we're thinking of this not on an hour-by-hour basis but on a week-by-week basis. We're all planning our calendars, how much time we're going to spend studying, how much time for other stuff, and when I make my decision about whether to leave some time in my life for cleaning up, I don't know what your decision is, and vice versa. The other way to think about it is that yes, in real life roommates do these things sequentially, but who gets to go first? This is one place where the difference between sequential and simultaneous games becomes very blurry. In some situations it's really clear who goes first, but if it becomes a matter of who chooses to go first, that choice is itself a simultaneous decision. So that's actually a hard question, and it's something I hope to show you how to cope with mainly through examples. There are some strategic situations where it's really, really clear whether the game is simultaneous or sequential, and there's also a very large number where it's not so clear. So what do we do then? You end up using judgment. There's a wide gray area where I can't give you a checklist of "if these things hold it's sequential, if those things hold it's simultaneous"; you use your judgment about how important the sequence is in the situation. And the really important thing, the thing I most want you to learn in game theory, is to stay aware of the fact that we're making an assumption about this. It's great that halfway through the course you knew to ask that question, because we want to always be asking: how much of this result depends on the assumption that the game is simultaneous?
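Going back to the modified roommate game for a moment, the "one player has a dominant strategy, the other best-responds to it" reasoning can also be sketched in code, again reusing the helpers from the prisoner's dilemma sketch. The 2 is the changed payoff from the board; exactly which cell it sits in follows my assumed numbers for the roommates' game.

```python
# Change one of B's payoffs to 2: B now prefers cleaning when A cleans,
# so B no longer has a dominant strategy.
modified = dict(roommates)
modified[("clean", "mess")] = (-2, 2)   # B's payoff in this cell was 5

def solve_by_one_sided_dominance(game, strategies):
    """If some player has a dominant strategy, they play it and the
    other player plays a best response to it."""
    for player, other in ((0, 1), (1, 0)):
        dom = dominant_strategy(game, strategies, player)
        if dom is not None:
            reply = max(strategies, key=lambda s: payoff(game, other, s, dom))
            return (dom, reply) if player == 0 else (reply, dom)
    return None  # neither player has a dominant strategy

print(dominant_strategy(modified, chores, 1))          # None
print(solve_by_one_sided_dominance(modified, chores))  # ('mess', 'mess')
```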
Once we're past the step Lillian was asking about, and we've decided to set the situation up as a simultaneous game, the very first thing to do when you want to know what's going to happen is look for those dominant strategies. If both players have dominant strategies, that is an easy game to solve: easy in the sense that you can just kind of look at it, but also in the sense that you can accept the answer and say, yes, this is really what they would do, because the incentive is so clear. When you have a dominant strategy, that's what you're going to do. The other possibility we've considered so far is that only one player has a dominant strategy, and that's still not too hard: you can do the common-sense reasoning that the player with the dominant strategy will play it and the other player will play the best response to it. On Thursday I'll finish up this material and we'll talk about bigger games where there's no dominant strategy but there are dominated ones, and that's going to allow us to solve a few more sets of games. But by the end of the week we'll still be left with games that aren't dominance-solvable. This is one of the reasons simultaneous games are a more complicated thing to work with than sequential games: sequential games were just rollback all the time, and that worked for every game. What we'll get to next week, the idea of Nash equilibrium, is a broad set of solutions that the dominant-strategy idea will be part of.

Can you answer that question without Nash equilibrium? Actually, you know, I think I do want to get the concept out there now and let you start thinking about it, because we have actually seen the concept already, under the name rollback. The idea of Nash equilibrium comes with some slogans; let me put some words up here. The outcome is a Nash equilibrium if the players are playing best responses to each other: given what you did, what I did was a best response, and given what I did, your choice was a best response. A Nash equilibrium is a behavior pattern that's self-reinforcing: the fact that you're leaving the mess reinforces the fact that I was right to leave the mess too, and vice versa. The other slogan that goes with it is "no regrets": when we're in equilibrium, nobody has any regrets. Given what I did, you don't regret what you've done, and given what you did, I don't regret what I've done. So, going back to this game that's not quite a prisoner's dilemma anymore: the outcome we're getting to is indeed a Nash equilibrium (NE is how I'll be abbreviating it), because given what the column player chose, the row player doesn't have any regrets, and given what the row player chose, the column player doesn't have any regrets. In lots of these situations, we might both regret that we couldn't get to the clean apartment; it really would be nice, even if we were both working a little bit harder for it. But given one player's choice, the other player doesn't have regrets. The Nash equilibrium idea breaks the collective outcome up into the choices of the individuals. So my choice: I'm the column player now, the one who kind of likes the clean apartment and would participate in cleaning if the other one did. I don't regret the fact that I left the mess. If I had a different roommate and we could get to the clean-clean cell, that would be better for me, but given the roommate I have, with the preferences she has, she's going to play her dominant strategy. Another way to think about a dominant strategy here: a dominant strategy is a best response to all of the other player's strategies.
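The "no regrets" slogan translates directly into a small checking function; this is a sketch of mine, not the book's notation. It looks at every cell and keeps the ones where each player's strategy is a best response to the other's.

```python
def pure_nash_equilibria(game, row_strats, col_strats):
    """Keep the cells where neither player has regrets: the row payoff is
    best in its column and the column payoff is best in its row."""
    found = []
    for (r, c), (u_row, u_col) in game.items():
        row_ok = all(u_row >= game[(alt, c)][0] for alt in row_strats)
        col_ok = all(u_col >= game[(r, alt)][1] for alt in col_strats)
        if row_ok and col_ok:
            found.append((r, c))
    return found

print(pure_nash_equilibria(modified, chores, chores))  # [('mess', 'mess')]
# Given B's choice A doesn't regret the mess, and given A's choice B doesn't
# either, even though both would prefer the (clean, clean) cell.
```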
Do I have a voice left? It's really gone. All right, other questions. It's coming back; you can keep asking questions, I'm not going to have a seizure up here or anything. Yes, okay: this was something I skipped earlier that I think is worthwhile to show you. We're going to be looking at lots and lots of games, drawing this picture lots and lots of times: lots of games that have a defect-like strategy and a cooperate-like strategy. What makes a game a prisoner's dilemma is not that the strategies are called defect and cooperate; throughout the next couple of weeks we're going to look at defect-and-cooperate-type situations that won't be prisoner's dilemmas. What makes it a prisoner's dilemma is the way the payoffs work out. So let me do the payoffs in a way you may have seen before and may see again, which will give you some mnemonics for the relationships among them. I'm going to abbreviate with letters, variables: T, R, P, and S for the row player, with little bars just telling me those payoffs belong to the row player, and the column player's T, R, P, and S over here.

What are these letters? T is for temptation. That's the highest payoff, and each player gets T when they defect and the other player cooperates. T is really what makes the prisoner's dilemma situation so frustrating when you're in it: there's the temptation to do something that is bad for your partner. The next highest is R, the reward payoff. The reward payoff is what happens when both players make the choice that is in their collective interest: they both cooperate, we get to that cell, and we're both rewarded for it. The problem in prisoner's dilemmas is that we just don't think we can get there, but if we could, we'd each get our reward of R. The next one is the punishment payoff, P. The punishment payoff is what happens when we both defect; it's what we think is going to happen in the prisoner's dilemma, without the shadow of the future or anything like that. When they both rat on each other, the cops use the evidence each of them gave against the other: punishment for selfish choices. And S is the sucker payoff. S is the payoff one player gets when they do the thing that is good for the group and the other player doesn't. The S player is the one saying: I didn't give any evidence to the cops, you betrayed me, now I'm in prison, and I wish I had a brother to send after you. Or: I cleaned the apartment, I picked up your dirty clothes, they were so gross, and you went out and had fun.

So what makes a game a prisoner's dilemma is the payoffs having this relationship, and I'll write it once because it's true for both the row player and the column player: the temptation payoff is greater than the reward, which is greater than the punishment, which is greater than the sucker payoff, for both row and column. Games with that form are prisoner's dilemmas. When we have this relationship among the variables, the strategy that corresponds to the payoffs T and P is always going to dominate the strategy that corresponds to the payoffs S and R. If this relationship holds, however the payoffs are put in the box, the strategy that leads to those payoffs will be the dominant strategy. I'm being a little bit loosey-goosey here, because these are payoffs, not really choices, but what I'm saying is that the dominant strategy is the one that will produce this pair of payoffs. And it's a dominant strategy precisely because T is better than R (that's one part of it) and P is better than S. Meanwhile, P is worse than R, and that's the part where, if we both play our dominant strategy, we end up at a worse outcome: we could both have had R, that reward, and we blew it; we didn't get it, we just have P.

Okay, so that's going to be it for today. Next time we'll start with dominated strategies, which is a little more complicated, in games with more than two options. No homework this week; you'll get a homework a week from today.
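As a quick addendum to the board work: the T > R > P > S condition is easy to check mechanically. A one-line sketch:

```python
def is_prisoners_dilemma(T, R, P, S):
    """A symmetric 2x2 game is a PD when temptation > reward > punishment > sucker."""
    return T > R > P > S

print(is_prisoners_dilemma(T=0, R=-1, P=-2, S=-5))  # True: the prisoners' numbers
print(is_prisoners_dilemma(T=5, R=3, P=1, S=-2))    # True: the roommates' numbers
                                                    # (three of those four assumed)
```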
Political_Science_30_Politics_and_Strategy_Lec_12_UCLA.txt
Okay, I've got a homework to send around: assignment number four. You can do question one right now; you could sit in lecture and do it if you want, although I don't think I'd recommend that, given what we're going to cover. By the end of lecture you'll probably be able to do question two, certainly by the end of Thursday, so get started. Two preliminary things before we begin on mixed-strategy Nash equilibria today. One announcement: I have to leave campus right after class today, so I will not be having my normal office hours. My office hours this week are just going to be Thursday, and I'll make them a little bit longer.

The other thing I wanted to do briefly at the beginning: several people have been asking about the distribution of midterm scores, so I'll put the distribution on the board and say a little about how the midterm grade relates to your final course grade, or at least how it's worked out in the past; I have a feeling it will work out the same way this year. The distribution: six people got perfect scores. Including those six, there were 44 tests at 70 and above, so a large number of people did very, very well on the exam. Scores of 60 to 69: 40 exams. 50 to 59: 28. 40 to 49: 21. 30 to 39: 11. Below that: 7 exams. They should add up to a total of 151, unless I'm missing an exam or two. The mean score was slightly over 58; the median was 62. This is a strange distribution of midterm scores. Overall, midterm score distributions don't look like this for most classes, but every year it does look like this for PS 30. The things I think are unusual are that there's a cluster of very high scores, a very large number of very good exams, and then what I'd call a very, very fat tail: more very low scores than you normally see on a political science midterm. I'm drawing your attention to that because this part of the distribution doesn't show up in the final; the final exam scores are much more normal. What happens is that a large number of people in the low categories (some of you are sitting out there right now, and I'm sure you're not feeling very happy about your midterm scores) have, in past years, improved really dramatically. One thing I think is unusual about PS 30 is that we tend to see a lot of improvement between the midterm and the final, and I think there are three reasons for that. One is the reason I mentioned before the midterm: there was a lot of time pressure. Even though most people did finish by the time time was called, most seemed just barely finished, and I'm sure some of you felt rushed. There will be less time pressure on the final: given the three hours you'll have, you'll be able to go at a slower pace and have more time to check your work, and I think you'll find that very helpful. The second reason there's usually such a big improvement is that, I'm guessing, many of you are not used to this kind of exam. It's not like a typical poli-sci exam: you're not writing essays, you're not demonstrating that you've mastered a large amount of material you were given to read, which is more typical in a political science class. If this was the first time you took an exam like this, where the task is translating a problem into an analytic framework, analyzing it, and then interpreting the answer, well, the first time is hard, and I bet you learned a lot from the midterm. You're just going to be better at this kind of test on the final. The third thing is that you're going to know a lot more game theory by the end, and even though the topics we're covering after the midterm are different, there's a lot of reinforcement, so you're going to feel more comfortable with it. Again, let me emphasize: in the past, people in this range have done much, much better on the final exam. I should also say (I hope this doesn't happen this year, but it has every year in the past) that there are always a couple of people who really nail the midterm and, I think, get the wrong signal from that: a couple of people who do really well on the midterm and surprisingly poorly on the final. I've never actually known what was going on in those cases, but I strongly suspect some false confidence from the midterm. Now, I'm not taking back what I said a minute ago; you will know a lot more game theory by the time the exam comes. But let me say something on the other side: the hardest game theory is the stuff after the midterm. It's a naturally cumulative subject, and in particular the topic we're starting this week, mixed strategies, is a topic most people find weird. This would be a particularly bad week to coast.

One other question that came up: will there be a curve? I don't really curve the exam; the numeric score goes into your final point score for the course. So the more relevant question is how I decide letter grades, and I'll tell you how I do that. I look at what the distribution of grades would be with the standard cut points: 97.5 and above is an A+, 92.5 and above is an A, then 90, 87, and so on, the standard non-curve assignment of letter grades. I also look at what a relative curve would give, the top 30 percent getting A's or something like that, and I choose the distribution that would advantage you. Most years there's not a lot of difference: the last few years, the standard cut points (97.5, 92.5, 90, 87, all of those) have given about the same as a relative curve, about 30 percent A's and 30 percent B's. What I do is, if I don't think there are enough A's based on the cut points, I'll take the cut points down; basically, I adjust for what I perceive to be an unusually hard final exam. The curve won't hurt you: if I mess up and give you too easy a final exam and forty percent of you score 95 and above, you will all get A's. I'd like to get to that happy outcome where large fractions of you are getting very high scores.

Okay, let's take a look at a game. We're going to start a new simultaneous game today: cops and robbers. It's a two-player, two-strategy game, just like the ones we were analyzing last week, and we're going to let the cops be the row player and the robbers be the column player. This game is going to be a little bit different from most of the ones we looked at last week. In most of those, the players' strategies and payoffs were symmetric: they both had the same choices to make. Here the choices are different. The cops' choices: they can be on the beat, driving around looking for criminals, doing their thing; or they can be in the donut shop, sitting on the bench, flirting with the waitress, eating donuts. The robbers can be at work (they could be out there robbing people) or they can be at home. There's some similarity between their strategies, but they're not exactly the same thing, and the payoffs are not going to be symmetric.

From the cops' point of view (I'm going to do all the cops' payoffs first, then all the robbers'), this is the best outcome: the robbers are at home, they're not creating any trouble, there is no trouble, and I'm in the donut shop, where it's comfortable and warm. I like that; it gives me a payoff of five. The worst thing, though, is if I'm in the donut shop while the robbers are at work: there's a lot of crime on my beat, and boy, am I in trouble with my supervisor. That's a payoff of negative five. In between: if I'm on the beat and the robbers are at work, well, you know, I became a cop because I like catching bad guys. That gives me a pretty good payoff: justice is being done, I'm apprehending those robbers while they're committing the crime, so I'm getting a payoff of two. If I'm on the beat and the robbers are not active, they're at home, there's no crime going on, it's kind of boring, I'm tired of my partner, we're just driving around. So that's not a very good payoff for me, a one, but it's still much better than slacking off eating donuts while there's crime going on on my watch. Those are the cops' payoffs. From the robbers' point of view: I'm a robber, I want to rob people, I want to get their stuff, I want to make some money out of it, and I don't want to have to deal with the cops. The robbers like the scenario where the cops are in the donut shop and the robbers are out there committing crimes: they're getting a high payoff. The worst thing, from my point of view as the robber, is that I go out trying to commit my crimes and the cops catch me. Bad payoff for me. And if I'm at home, I'm the robber, I don't care what the cops do: they can be on the beat, they can be in the donut shop, it doesn't affect my payoff at all. This is something else I haven't shown you, and I just want you to see that you can do this: this kind of strategic situation happens sometimes, where one player's payoffs are affected by the other player's decision in some cases but not in others. That doesn't affect how we set up the game at all; it's just a normal case, and we represent it by having those payoffs be the same.

So what's the interesting thing here? If we use the process we went through last week, the first thing we do is look to see whether either player has a dominant strategy. Well, the robbers obviously don't have one: depending on what the cops do, going out to work is either much worse for them or much better. And the same for the cops: if the robbers are at work, the cops would rather be on the beat; if the robbers are at home, the cops would rather be in the donut shop. So, thing one: no dominant strategy for either player. And because this is just a two-strategy game, there are no dominated strategies either, so none of the iterated-dominance ideas are going to work. The next step I taught you last week is to go through and look at each cell, and ask yourself, in each of the possible cells: is there a cell where neither player would have regrets? Well, if we're in this cell (the robbers at work, the cops on the beat), does either player have regrets? Yes. Who has regrets in this cell? The robbers. Given that the cops are on the beat, the robbers wish they'd stayed home: I don't want to be taken away in the paddy wagon. So this is not a Nash equilibrium. For a Nash equilibrium, both players have to have the highest payoff they can get given the other player's choice: the column player's payoff has to be the highest given the row we're in, and the row player's payoff has to be the highest given the column. If either player has regrets, it's not a Nash equilibrium. So maybe this one? Who's got regrets? The cops: the robbers are at home, and I could have been in the donut shop. And here? The cops are in the donut shop and the robbers are at home; you know, the robbers need to make a living, they could be out there breaking into cars, stealing things. The robbers have regrets. And finally we get to this cell: if the robbers are working and the cops are in the donut shop, again it's the cops who have regrets. That brings us to point A on the outline, and you'll notice I have a question mark there: it looks like there's no Nash equilibrium in the sense of the Nash equilibria we were looking at last week. What we're going to say is that there is no Nash equilibrium in pure strategies. What does that mean? No Nash equilibrium where both players predictably do one particular thing; no Nash equilibrium in which players make certain, predictable choices. That by itself is worth knowing: in this game, it is a mistake to be predictable. When you find a game that has no Nash equilibrium in pure strategies, that's what it's telling you: if the players play predictably, one of them is going to regret what they're doing.
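Running the cell-by-cell regret check mechanically, with the `pure_nash_equilibria` sketch from the earlier lecture: the cops' payoffs (5, 2, 1, -5) are the ones from the board, while the robbers' numbers are assumed stand-ins of mine that match the story (staying home pays the same either way, and getting caught is worst).

```python
# Cops and robbers: cops' payoffs from the board; robbers' 3, -4, and 0s assumed.
cops_robbers = {
    ("beat",  "work"): ( 2, -4),   # cops catch the robbers in the act
    ("beat",  "home"): ( 1,  0),   # boring patrol, robbers idle
    ("donut", "work"): (-5,  3),   # crime wave on the cops' watch
    ("donut", "home"): ( 5,  0),   # warm donut shop, quiet streets
}
print(pure_nash_equilibria(cops_robbers, ["beat", "donut"], ["work", "home"]))
# [] : in every cell somebody has regrets, so no pure-strategy equilibrium.
```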
A spinner shaded half and half would be a 50/50 probability; something shaded like this would be a 25% probability of one strategy and a 75% probability of the other. Thinking about the spinner being colored different ways lets us think of all the possible probabilities the cops could assign to being on the beat, or the robbers to being at work. So again, I'm hoping I'm doing one step back, two steps forward: some repeating, but also moving the idea forward. When we don't have a Nash equilibrium in pure strategies, that's telling us something about equilibrium behavior: equilibrium behavior cannot be predictable. If your behavior is predictable, you will have regrets. The other thing, and it's what we're going to be calculating when we find the mixed-strategy equilibrium, is that in order for both players to be in equilibrium, in order for neither player to regret making a choice they care about by spinning a spinner, the probabilities have to be specific values. Not just any probability will work, and the probabilities are going to depend on the payoffs in the game: given the payoffs, if you're putting too much weight, too much probability, on one strategy, you might not be in equilibrium. Let me step out of the basic narrative for a minute and make a comment on this. This is one of the reasons why mixed-strategy Nash equilibrium, as I said at the beginning of class, is kind of a hard topic: it's not very intuitive. Over the course of the last six weeks we've gone from the rollback process, which I think most of you found sensible; if you really care about the long-term consequences, you look ahead and reason backwards, and that sounds like a process somebody would actually go through in their head when making a strategic decision. Last week you may already have felt that sense starting to blur a little when we talked about multiple equilibria: battle of the sexes, the assurance games, where we could both be at the good equilibrium where we clean the apartment, but there's that other equilibrium where we both don't clean, and how do you know, and how do you figure it out? It's less clear there. Simultaneous games are less helpful in teaching us what to do in a specific situation. They can teach us some things not to do, but if you want to know what to do in a specific strategic situation that has multiple equilibria, you have to know something about which equilibrium is more focal, and that's something game theory by itself can't tell us. Now we're getting even further away from something that can be directly applied. The applied message of games without pure-strategy Nash equilibria is that you don't want to be predictable, and throughout this week and next week I'll spin out some lessons from that about how to be unpredictable. But our analysis of mixed-strategy equilibria is taking us as far as we're going to get from a process that actually goes on in people's heads. What I'm saying is, I don't really think people are spinning these spinners; in fact, you'll even see that they can't really gain by doing so. Mixed-strategy equilibrium is something that helps us understand situations; it's helpful for understanding how people make the choices they do. It's less helpful for telling you what to do in a specific situation, beyond telling you what not to do: don't be predictable.
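As a concrete stand-in for the spinner, here is a small sketch of randomizing in code; it's my own illustration, and the 4/11 shading anticipates the calculation we do below:

```python
import random

def spin(p_first, first, second):
    # One spin of a spinner whose shaded fraction p_first selects `first`.
    return first if random.random() < p_first else second

print(spin(0.5, "beat", "donut shop"))   # a 50/50 spinner
print(spin(0.25, "beat", "donut shop"))  # a 25/75 spinner
print(spin(4 / 11, "work", "home"))      # the robbers' spinner we derive later
```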
All right, so now back into the main narrative. What I'm going to do is go through the process of finding the mixed-strategy Nash equilibrium, and I'll be doing more than one example of this over the course of the week, but in this first example I'm going to try to give you the why of the explanation as well as the cookbook side. I'll show you the steps to go through, but I hope I'll also convey the reason behind them, and while I'm doing this I'm going to elaborate on the first point on the outline. In order for either player to be okay with this idea of making a choice by spinning a spinner, making a choice completely at random, the only way they are going to be willing to do that is if they are indifferent between their pure strategies. If I'm the cops, the only way I am willing to make a random choice between being on the beat and being in the donut shop is if the payoff I expect from being on the beat and the payoff I expect from being in the donut shop are exactly the same. If what I expect to get from being on the beat is different from what I expect to get in the donut shop, I should go with the pure strategy that gives me the higher payoff, right? What I expect from being on the beat balances the probability that the robbers are at work against the probability that they're at home; that's why it's an expected payoff, an expected utility. If those expectations differ, I should go with the pure strategy. The only way I'm willing to spin a spinner is if what I get from making either choice is exactly the same. That is the key insight for finding mixed-strategy equilibria, so let me put it in a different color: both players must be indifferent between their pure strategies. And you're thinking: well, how can I be indifferent between 2, 1, negative 5, and 5? They're different numbers. Well, you're the cops; you only get to choose between the combination of 2 and 1 and the combination of negative 5 and 5, and the way you could be indifferent is that there could be a probability of the robbers being in this column or that column that makes the expected values the same. That's where we're going. So: both players must be indifferent between their pure strategies, which means the expected payoffs from the pure strategies must be the same. If you're wondering what happened to point A: somehow I thought I was going to cover it first, but I'm going to cover it in a minute. I'm emphasizing this point; you'll see point A in just a second. So, if I'm the cops, how can I be indifferent between my pure strategies? I can be indifferent because both are expected payoffs that depend on the probability that the robbers are at work. That brings me to part A: if I'm the cops, my expected utility is a function of the robbers' probability. Let me write that in: the cops' expected utility is a function of the robbers' probability. What does that mean? Start over here: we don't know what the probability is that the robbers will be at work or at home. We know it's going to be between 0 and 1; we know that because if it's 0 the robbers are predictably at home, and if it's 1 they're predictably at work, and we know they don't want to be predictable, so a probability of 0 or 1 for the robbers is not going to be part of an equilibrium. But we don't know exactly what the probability is, so let's make it a variable: let Q equal the probability that the robbers are at work. Now go back to part A: Q is the robbers' probability, and it determines the cops' expected payoffs.
Let's see how that goes. The expected utility to the cops of being on the beat is an expected value, and we calculate it just like the expected values we calculated before: it's what I get from being on the beat when the robbers are at work, that's 2, times the probability that the robbers are at work, plus what I get from being on the beat when the robbers are at home, that's 1, times what? 1 minus Q. And here's point A: the expected utility of my pure strategy depends on the other player's probability. Let's pretty that up: 2Q plus 1 minus Q, that sounds like Q plus 1; check me on this. All right, so this is the expected utility to the cops of being on the beat, and what I'm saying is that it has to be the same as the expected utility to the cops of the donut shop. I'm calculating the expected utility of each pure strategy for one player, and it's a function of the other player's probability. I've done it for being on the beat; for the donut shop it's the same story. My expected payoff from being in the donut shop is the payoff I get from being in the donut shop when the robbers are at work, that's negative 5, times the probability that the robbers are at work, plus the payoff I get from being in the donut shop when the robbers are at home, times the probability that the robbers are at home. Just as when we used expected values to solve nature nodes, the probabilities always have to add up to 1: we're balancing the different outcomes, and if the probabilities don't add up to 1, you've made a mistake. Prettying this one up as well: negative 5Q plus 5 minus 5Q sounds like 5 minus 10Q. So where am I in my outline? Point B here is the main logical point that allows us to calculate mixed-strategy equilibria: both players have to be indifferent between their pure strategies; each pure strategy has to give me the same expected payoff. Point A reminds me how to calculate the expected payoffs from a pure strategy: it's going to be a function of the other player's probability. I've done that. The last thing in finding one half of the mixed-strategy Nash equilibrium is to put those two expressions together and find the robbers' mixing probability, the robbers' probability of going to work, that makes the cops indifferent between their two strategies. That means setting this expression equal to that one and solving for Q. The algebra is not hard; the windup, the logic, remembering what to do, is what can be tricky. So the cops are indifferent, getting the same payoff from either of the two pure strategies, from being on the beat or in the donut shop, when Q plus 1, what I expect to get from being on the beat, equals 5 minus 10Q. Let's solve that: it says 11Q equals 4, so Q equals 4/11. This is half of the mixed-strategy equilibrium, half of what we're looking for: it tells us the robbers' equilibrium probability.
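Here is the same indifference calculation as a sketch in code. Using sympy is my own choice (any algebra tool would do); the expressions are exactly the ones from the board:

```python
from sympy import Eq, solve, symbols

q = symbols("q")                      # probability the robbers are at work
eu_beat = 2 * q + 1 * (1 - q)         # on the beat: 2 against work, 1 against home
eu_donut = -5 * q + 5 * (1 - q)       # donut shop: -5 against work, 5 against home

# The cops are indifferent when the two expected utilities are equal.
print(solve(Eq(eu_beat, eu_donut), q))  # [4/11]
```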
Why do I call it the equilibrium probability? That's the question, and I'm pausing for a minute to see if I can give a better answer than "because that's what it's called." Here's a way of thinking about it. What we're doing right now is the process of finding a mixed-strategy equilibrium, and the broader question is what we are looking for when we look for a mixed-strategy Nash equilibrium. We're not looking for a pair of strategies anymore, such that neither player would regret their choice given the other player's choice, because we know there isn't one. What we are looking for is a probability distribution over each player's strategies such that neither player will have regrets given the other player's probability. To say it in more ordinary language: we are looking for a probability that the robbers are at work, which we just found, and a probability that the cops are on the beat, and those two probabilities will be in equilibrium if, given the other player's probability, neither player regrets their own probability, regrets making a random choice. So now I've talked myself around to a more coherent answer to Elaine's question: the equilibrium probability here is the analog of an equilibrium strategy in the pure-strategy Nash equilibria we've solved so far. In a very general sense you can think of those pure-strategy Nash equilibria as probabilities too; they just put probabilities of 0 and 1 on the different strategies. When the probabilities are between 0 and 1, that's when we really have to emphasize that we're looking for equilibria in mixed strategies. All right, so what I'm going to do is copy what we've got so far over here, and then I'll erase to find the other half of the mixed-strategy equilibrium. So: Q equals the probability that the robbers are at work. It's very important, when you're doing your homework and on exams, to be clear about which strategy you're representing with the variable and which is going to be 1 minus Q. You'll get the same answer in terms of interpretation if you let Q be the probability of being at home and 1 minus Q the probability of being at work; you'd find that the probability of being at home is 7/11 and the probability of being at work is 4/11, the same numbers. But in order to keep track of what goes with what, I highly recommend writing it out just like this. So we want that, and we want P, the probability that the cops are on the beat, and we want these probabilities to be the numbers that leave neither player with regrets. What we've found so far is that if Q equals 4/11, the cops are not going to regret making a random choice. What's strange about mixed-strategy Nash equilibrium, what's different from what we've done before, is that we're calculating one player's probability from the other player's payoffs. That is just a fact about mixed strategies, a fact we're going to be elaborating on during the week. With pure strategies, in order to find my equilibrium strategy, we looked at my payoffs; that seemed reasonable. With mixed strategies we're down the rabbit hole: in order to find my equilibrium probability, we have to look at your payoffs, and vice versa. So now we're going to do the vice versa. I'm looking for P, the equilibrium probability, the analog of the equilibrium strategy, for the cops: the probability that they're on the beat. To find it, I have to find the value of P that makes the robbers indifferent between their pure strategies. So what I have to do now is calculate the expected utility to the robbers of each of their pure strategies, and it's going to depend on P, just as the cops' expected utility depended on Q.
So if I'm the robber, what's my expected utility? My expected utility from being at work is the payoff I get when I'm at work and the cops are on the beat, that's negative 5, times P, which I've defined to be the probability that the cops are on the beat, plus the payoff I get when I'm at work and the cops are in the donut shop, times the probability that the cops are in the donut shop: because if they're on the beat with probability P, they have to be in the donut shop with probability 1 minus P. Making that look nice, it's 5 minus 10P. So that's what I expect to get from one of my pure strategies. The other pure strategy is actually easier in this case: the expected utility to the robber of being at home doesn't depend on anything. I'm the robber, I'm at home, I don't care what the cops are doing; I get a zero either way. If I were working on autopilot and didn't see the big picture and wrote 0 times P plus 0 times 1 minus P and just cranked it out, I'd still get the right answer; it wouldn't be a problem, but I don't have to do that. So the value of P that makes the robbers indifferent is the value that makes the expected utility from being at work exactly equal to the expected utility from being at home: 5 minus 10P equals 0, which sounds like P equals 1/2. Now we're done. This is the mixed-strategy Nash equilibrium: a probability distribution for each player over their strategies. I know that's a mouthful, but all it means is the probability that the cops choose one strategy and the probability that the robbers choose one strategy, and with these probabilities neither player is going to regret the choice they made. Neither player can do better by making a different choice, neither player can do better by playing a pure strategy, and neither player can do better by picking a different mixing probability. With these probabilities, we are in equilibrium. Concretely, the mixed-strategy equilibrium tells the cops and the robbers what spinners they should be spinning: a spinner for each player, the cops' spinner and the robbers' spinner. We've found that the cops have an easy spinner: it's half and half. The shaded side, you do donuts; the white side, you're on the beat. The cops could flip a coin, too, because their mixing probability worked out to be one half. For the robbers it's a little trickier; we'd have to have something like this, I guess: 4/11 of the circle for being at work and 7/11 for being at home. Now, I'm getting really efficient: last week I was surprised to get to the end of my outline, and today might be the first day that I go beyond it. So what I think I'm actually going to do is pause for questions for a minute. No questions; time will tell whether that's because it's so obvious or because it's too weird.
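A quick sketch verifying the equilibrium we just found (my own check, using exact fractions): at P = 1/2 and Q = 4/11, each player really is indifferent between their two pure strategies.

```python
from fractions import Fraction

P = Fraction(1, 2)    # probability the cops are on the beat
Q = Fraction(4, 11)   # probability the robbers are at work

# The cops' expected payoffs depend on Q ...
eu_cop_beat = 2 * Q + 1 * (1 - Q)
eu_cop_donut = -5 * Q + 5 * (1 - Q)
# ... and the robbers' expected payoffs depend on P.
eu_rob_work = -5 * P + 5 * (1 - P)
eu_rob_home = 0

print(eu_cop_beat, eu_cop_donut)  # 15/11 15/11 : cops indifferent
print(eu_rob_work, eu_rob_home)   # 0 0         : robbers indifferent
```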
Okay, let me actually say something about the mixed-strategy problem on the back of your homework. You've got one standard homework problem here, question 1, with five different scenarios, A through E, and at the beginning, numbered questions that I want you to answer for all the scenarios. That's all material from last week, so you should be fine with it; it's all asking about pure-strategy Nash equilibria. I think the only thing you wouldn't have known, if I'd handed it out last Thursday, is what a pure strategy was, because we hadn't made the distinction yet. But just let me emphasize again: a pure strategy is when you pick one thing for sure, the natural meaning of strategy that we've been using so far. The second part is a set of smaller drill problems: no stories, just matrices with numbers, to give you practice finding both the pure-strategy Nash equilibria and the mixed-strategy Nash equilibria. So you could start doing those right now and get into the groove of doing what I did here: using one player's probability as a variable to set the other player's expected payoffs equal, using that relationship to find the equilibrium probability for the first player, and then switching the roles and doing it again. When you're looking for a mixed-strategy equilibrium, you're looking for a probability for each player. Another thing to think about is what happens in equilibrium. Elaine asks: how can there be no mixed-strategy Nash equilibrium? I'll give a short and a long answer to that. In the context of that question, by "no mixed-strategy equilibrium" I mean no equilibrium where the probability that a strategy is played is strictly between 0 and 1; if you get a probability of 1 for one of the strategies, most people, and I, would not call that a mixed strategy; that would be a pure-strategy Nash equilibrium. But it raises a question worth considering right now. I don't want to start what I think of as the next big thing at this moment, so this is a good point to address it: how can you know whether a game has no mixed-strategy equilibrium? Let's look at one and see what happens, and lo and behold, here's one, and my friends in the back don't like it when I use this green for important stuff: it's going to be a prisoner's dilemma. So player A can defect or cooperate, player B can defect or cooperate, and I'll put in my same old payoffs: negative 2, negative 2; negative 5, 0; 0, negative 5; and negative 1, negative 1. Just reminding ourselves: is this a prisoner's dilemma? The row player's dominant strategy is to defect, right? Negative 2 is better than negative 5, and 0 is better than negative 1. Column player, same story. They both have a dominant strategy to defect, and they're both worse off than if they both played the other strategy. The highest payoff for each player is when they defect and their partner cooperates, and the lowest is the sucker payoff. So yes, it's a prisoner's dilemma. Let's just get to work on it using the mixed-strategy recipe. What we would be looking for, for a mixed-strategy Nash equilibrium, would be a probability Q that A defects and a probability P that B defects; you might suspect these will be related to each other, since the game is so symmetric. If I want to find Q, the probability that A defects, what I have to do is find the value that makes, who?, that makes B indifferent. Right: Q is A's probability, and Q is going to affect B's payoffs. So the expected utility to B of defecting is what I get, as B, from defecting when A defects, that's negative 2, times Q, which I just defined as the probability that A defects, plus what I get from defecting when A cooperates, that's 0, times 1 minus Q: so negative 2Q. Now, looking at the column player's payoffs again, the expected utility to B of cooperating is what I get if I
cooperate when A defects, that's negative 5, times the probability that A defects, Q, plus what I get if I cooperate and A cooperates, times the probability that A cooperates, 1 minus Q. So I've got negative 5Q minus 1 plus Q, which sounds like negative 4Q minus 1. We're getting past the witching hour, when my algebra skills, such as they are, deteriorate. So the value of Q that makes B indifferent is the value such that negative 2Q equals negative 4Q minus 1. Bring this over: 2Q equals negative 1, so Q equals negative 1/2. Hmm. Did I make a mistake? This time I don't think I did. This is a case where the unreasonable result is telling us that there is no value of Q that makes sense as a probability and that would make B indifferent. So if you're looking at those games on the homework and you get a result like this, where the probability doesn't make sense (why doesn't it make sense? it's not between 0 and 1, which a probability has to be), go back and check your work; you might have made a mistake, that's possible. But you might not have made a mistake, and it might be the game telling you something. This is a game where the players are never going to randomize, and actually, any game with a dominant strategy is never going to have a mixed-strategy equilibrium. If you have a dominant strategy, it's always better no matter what the other player does, so there's nothing the other player can do to make you indifferent between your dominant and your dominated strategy; that's what it means to be a dominant strategy. So games with dominant strategies are one example of games that never have mixed-strategy Nash equilibria. Let's see how things shake out here; let's do a table. Pure-strategy Nash equilibrium: yes or no. Mixed-strategy Nash equilibrium: yes or no. I'm drawing a two-by-two, but it's not a game matrix, just categories, the way you'd have a two-by-two table in statistics. There are games like the prisoner's dilemma that have a pure-strategy Nash equilibrium and no mixed-strategy Nash equilibrium. There are games like cops and robbers that have no pure-strategy equilibrium but do have a mixed-strategy one. There are games that have both: assurance, battle of the sexes; I might do an example along those lines for you. The question is: are there games that have neither? And the answer is that this cell is empty, as long as the set of strategies is finite. What am I saying here? As long as the set of things you can choose from has a finite number of things in it, you can't choose from an infinite set, you are guaranteed to have at least one type of Nash equilibrium: you might have both, you might have just mixed strategies, you might have just pure strategies. The paper John Nash published that earned him the right to put his name on Nash equilibria was the paper proving that this is true, and that proof is what made the idea of Nash equilibrium seem so appealing: as long as both players have a finite set of strategies, one type of Nash equilibrium will exist; we can always find one. The question is how big a qualification that finiteness is, and the way to understand these kinds of qualifications is to ask yourself: what would be a case where this wouldn't be true, where the set of strategies you're choosing from is infinite?
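Here is the same recipe applied to the prisoner's dilemma in code, a sketch of the calculation just done on the board (sympy again my own choice of tool):

```python
from sympy import Eq, solve, symbols

# B's payoffs from the board: -2 (defect vs defect), 0 (defect vs cooperate),
# -5 (cooperate vs defect, the sucker payoff), -1 (cooperate vs cooperate).
q = symbols("q")                            # probability that A defects
eu_b_defect = -2 * q + 0 * (1 - q)
eu_b_cooperate = -5 * q + (-1) * (1 - q)

print(solve(Eq(eu_b_defect, eu_b_cooperate), q))  # [-1/2]
# -1/2 is not a probability: no q in [0, 1] makes B indifferent, which is
# the game telling us there is no mixed-strategy equilibrium here.
```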
It's kind of a hard thing to think of on your own; I was really wondering if anybody in the class would have it. It happens if your strategies involve choosing a point on a line. We use these kinds of games in political science a lot: games where you're using the real number line to represent, say, a candidate choosing their position; am I going to be far left, far right, or middle-of-the-road? If you really think I could choose any point on that line, then my strategy space is infinite, and some of these games of political competition, where I pick my position on the line and my opponent picks theirs, will fail to have Nash equilibria. All of this is beyond PS 30, but I think it helps you see things in context; in PS 30 we are interested in the finite cases. If you are interested in this kind of issue, in big broad classes of games where you can or cannot find Nash equilibria, the math department's game theory class would make much of it; I think it's Math 165, so that would be the course to check out. All right, the other thing I want to do today: do I still have cops and robbers up here? Yes, I do, and I'm not done with it. I'm going to go back to cops and robbers and think about the prediction in this game, which has a different kind of equilibrium: different in some ways, the same in others. Mixed-strategy equilibria are different because of what we are looking for: a probability for each player, such that neither has regrets given what the other is doing. That part is the same; the Nash equilibrium idea always has this structure of me not having a better choice given what you did, and you not having a better choice given what I did. It's just that in this case we're allowing both of us to choose randomly. But randomly doesn't mean any old random choice. Suppose you're the robbers, and the cops are choosing randomly, but they're choosing a probability different from P equals 1/2. Let's go over here and think about something that would be out of equilibrium: suppose P equals one fourth. We've got some kind of lazy cops here; they really don't like being on the beat, they want to be in the donut shop, and let's say that's their mixing probability; they are still choosing at random. The question is: am I, the robbers, going to be willing to choose at random as well? Let's see. The expected utility to the robbers of being at work is now negative 5 times P (negative 5 is what I get from being at work when the cops are on the beat, and in this out-of-equilibrium scenario that probability is one fourth), plus 5 times three fourths (what I get when I'm at work and the cops are in the donut shop, multiplied by the probability that the cops are in the donut shop). So this is negative five fourths plus fifteen fourths: ten fourths, or five halves. That's my expected utility from work. Am I, the robbers, going to spin my spinner? No, I'm not going to choose randomly if I can get an expected payoff of five halves from going to work and a guaranteed zero from staying home; I'm going to work. And if I'm going to work for sure, we can't be in equilibrium, because wherever the cops' spinner lands, one of us is going to have regrets.
If I'm at work, which is what I'd prefer to do, and the cops' spinner tells them to be on the beat, well, I'll have made an ex-post mistake; and if the cops land in the donut shop, the cops are going to be unhappy. So I'm not indifferent: if the cops are playing that mixing probability, the robbers do better by playing a pure strategy. So again, to emphasize the interpretation: in a mixed-strategy Nash equilibrium, both players are playing randomly, and neither can do better by being predictable. But by randomly I mean a specific probability. In a mixed-strategy equilibrium there is a specific probability for each player that brings the system into equilibrium. What's weird about it, what's less intuitive than pure-strategy Nash equilibrium, is that it's the robbers' probability that makes the cops willing to randomize, and the cops' probability that makes the robbers willing. So if the cops change their probability, if instead of their equilibrium probability of 1/2 the cops pick some different probability, it actually doesn't affect the cops' own payoff. If the cops make that mistake, it doesn't hurt them, and that's what makes mixed-strategy equilibrium, to my mind, stranger than pure-strategy equilibrium. What the mistake does is bring the system out of equilibrium, by making the robbers no longer willing to play their mixed strategy; and vice versa. If the robbers get it wrong and mix with probability 9/11 instead of 4/11, so that with probability 9/11 they go to work, it doesn't affect their own payoff. The reason it doesn't is that if we change just the robbers' probability but leave the cops' probability the same, the robbers are still indifferent between their two pure strategies: as long as P equals one half, the robbers are getting the same payoff from being at work as from being at home. So if I change Q, I'm not changing the robbers' expected payoff at all, because what they're getting is the same in either case, as long as P is at its equilibrium value. All right, that point I'm ending with is one I'm going to say some more things about, and also approach from a different point of view.
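To close the loop on the out-of-equilibrium story, here is a sketch (my own) of the robbers' best response to various cop probabilities, including the lazy cops' P = 1/4:

```python
from fractions import Fraction

for P in [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]:
    eu_work = -5 * P + 5 * (1 - P)   # expected payoff from robbing
    eu_home = 0                      # staying home pays 0 regardless
    if eu_work > eu_home:
        best = "work, predictably"
    elif eu_work < eu_home:
        best = "home, predictably"
    else:
        best = "indifferent: willing to spin the spinner"
    print(f"P = {P}: EU(work) = {eu_work}, robbers' best response: {best}")

# Only at P = 1/2 are the robbers willing to randomize. At P = 1/4 they go
# to work for sure (EU = 5/2), and the system is out of equilibrium.
```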
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_7_UCLA.txt
Okay, I've got homework 3 here, I'm going to send that around, and I see homework 2 is coalescing up here at the front. Just to give you an overview of what we're going to be doing over the next two weeks: we've got the midterm coming up, and the midterm is February 7th, a week from Thursday. Here's how the rest of the class schedule works relative to that. You're getting your last pre-midterm homework today. You can actually do a fair amount of the homework right now, and if somebody will remind me at the end of class, I'll let you know whether you can do all of homework 3 or just part of it by the end of class today. On Thursday I will be giving you a study sheet for the midterm; that's this Thursday, and it will have some terms to review, some suggested problems in Dixit and Skeath, and some other problems for you to practice on. The midterm will cover the topics we cover up to the end of class on Thursday, so if I start new topics a week from today, those won't be on the midterm. It's quite possible that what's on the board right now is what will be covered on the midterm; that's my best guess. If we go slower on Thursday, I won't put questions on the midterm that refer to material we learn next week; that seems a little fast. Anything else on the homework: you'll hand in your homework next Tuesday to your TAs. You will not get your marked homework back before the midterm, but in contrast to past weeks, we will hand out the solutions for homework 3 in class on Tuesday. So even though you won't get your marked homework back, you will have the answer key to look at. Next week, if you haven't already been saving a copy of your homework to study from, I'd recommend doing that; the copy you turn in to the TAs is not going to be available to you to prepare for the midterm, so it's a good idea to make a copy so that you have one. Any questions on that stuff? Good. All right, so we're going to start in today with a completely new game, but a little bit more on an old topic: Pareto efficiency. The first version of this game is actually quite simple, simpler than the fundraising game. This is a foreign aid game, a game between the government of a rich country and the government of a poor country, and I want to emphasize that for the point I'm making, you should think about the game as between the governments of the countries, not the peoples of the countries. The rich government has a choice of sending aid or no aid to the poor country. If it sends no aid, there's no strategic interaction; this is a very simple game, and if the rich government doesn't send aid, nothing happens: both governments continue along with their baseline payoffs. If the rich government does send aid, then the poor government (again, the government of the poor country) has a choice about what to do with the aid money. The poor government in this scenario can spend the aid money on education for its people, or on limousines for its government officials. The payoffs that go with this: the rich government likes the idea of education in the poor country; it thinks that will bring the country out of poverty and improve living conditions there, which is what it has in mind by sending aid. So the rich government's payoff from sending aid, if the poor government spends it on education, is higher than its payoff from not sending aid.
So here's a very clear illustration of something I said at the beginning: the payoffs in a game don't have to reflect self-interest. In this case I'm saying the rich government really does want to help the people in this poor country, and that's being reflected in the higher payoff here. The poor government also thinks that education for its people is a good thing: a higher payoff here for both players. The other possibility, though, is that the poor government spends the aid money on limousines, and the rich government does not like that, for a variety of reasons. You could think about the rich government just being out the money, placing no value on limousines; I think it's more likely that a rich government that sent foreign aid to a country whose government used it for what seemed to be pretty corrupt purposes would pay a penalty. So: a bad payoff to the rich government if the aid gets spent on limousines. The poor government, however, likes limousines a lot; they think they look real good driving around in those Mercedes, so they get a high payoff from that. This is an easy game, right? What's the choice at the poor government's node? Limousines, very clearly; so the strategic equivalent of that node is negative 2, 5. I'm the rich government: what do I do? No aid. The rollback equilibrium here is that the rich government's equilibrium strategy is no aid, and the poor government's strategy is limousines-if-aid; but in equilibrium the poor government is never going to get to make that choice. The rich government isn't even going to send the aid, because it can't trust the poor government. Is the equilibrium outcome in this game Pareto efficient? No, it's not. It's an even starker case of Pareto inefficiency than we saw in last week's game. Just to repeat: in order to say it's Pareto inefficient, all we have to do is find some other outcome in the game, regardless of whether we think it's the outcome that's going to happen, where one player is better off and no other player is worse off. That's enough for us to conclude that this outcome is Pareto inefficient. In this case it's real easy: it's not just that one player is better off in the other outcome; both players are better off in this other outcome that doesn't occur. That outcome Pareto-dominates the equilibrium outcome. In order to say that something is Pareto-dominated, Pareto inefficient, the alternative doesn't have to be better for both players; but if it is better for both players, then we certainly know the one that's worse for both is Pareto inefficient. So this scenario is a very stark example of something that also showed up in the fundraising game: the poor government would actually be better off here if it didn't have a choice. This is an important issue in strategic situations. In non-strategic situations, in market situations especially, we think having more choices is good, that it's always better to have more choices. Not necessarily when we're interacting strategically. If the poor government didn't have the option of squandering the aid, the poor government could get to a higher payoff. So if the poor government could do something to commit itself to not buying those limousines, it could actually get a better payoff for itself. So I want to say a couple more things about Pareto efficiency, and then one more thing about this game.
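Here is a minimal rollback sketch of the foreign-aid tree (my own illustration; the payoffs, written as rich-government, poor-government pairs, are the ones from the board):

```python
# Terminal outcomes as (rich, poor) payoff pairs.
outcomes = {"no aid": (0, 0), "education": (2, 2), "limousines": (-2, 5)}

# Solve the poor government's node: it compares its own payoffs.
poor_choice = max(["education", "limousines"], key=lambda s: outcomes[s][1])

# Replace that node with its strategic equivalent, then solve the rich
# government's node: compare its payoff from "no aid" with what it gets
# if aid is sent and the poor government then does poor_choice.
rich_choice = "aid" if outcomes[poor_choice][0] > outcomes["no aid"][0] else "no aid"

print(poor_choice, rich_choice)  # limousines, no aid: the Pareto-inefficient path
```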
One thing about Pareto efficiency, just as a criterion. I'm moving away from the statement that this particular outcome is Pareto inefficient to the general concept of Pareto efficiency: evaluating a situation in terms of the Pareto criterion, in terms of this idea of whether we can make at least one person better off without making anyone worse off. One of the main punch lines of Thursday's lecture last week was that this is a minimally good criterion. If you can make a person better off without making anybody else worse off, you should do it; if you don't do it, that's bad in a very minimal way. So saying that something is Pareto inefficient is identifying what's usually a pretty uncontroversial problem with it. Another way to look at that point: if something is Pareto inefficient, it means there's some specific alternative that nobody would object to moving to. If we could somehow move from 0, 0 to 2, 2, nobody in this game would oppose it; making a Pareto improvement should be uncontroversial. The flip side of that, though, is that saying something is Pareto efficient isn't saying very much in its favor. Calling something Pareto inefficient identifies something unambiguously bad; saying that something is Pareto efficient just rules out one way it could be unambiguously bad, and there are lots of other ways an outcome can be bad that Pareto efficiency won't pick up. Suppose our society is this whole room, and I'm really rich: I've got the limousines and the houses and everything else, and you guys are all on the brink of starvation. Taking away one room of my house and giving it to one of you would not be a Pareto improvement, because it would make me worse off. So the situation where I have almost everything and you have basically nothing is Pareto efficient. It's not a good outcome, and it's not something most people would think is fair. So a limit of the Pareto criterion is that it's not going to help us make distinctions about fairness. A couple of people came up after class on Thursday and pointed out exactly that about the fundraising game: even the Pareto improvement over the outcome in that game was one that made the player who was already better off still better off, and that seems sort of unfair. That is a reasonable point about the limitation of the Pareto criterion. If you remember, in the fundraising game our predicted outcome was that the incumbent raises funds, the challenger doesn't, and the incumbent wins; the Pareto improvement there was that the incumbent wouldn't have to raise funds either. The incumbent would still win the election, the challenger would still not raise funds and be no worse off, and the incumbent would be spared the trouble of fundraising. What you might think about that is: yes, the situation where the incumbent does have to raise funds seems Pareto inefficient, but that's not a Pareto inefficiency I'd give a real high priority to, because the person who benefits by correcting it is the person who's already pretty well off. Notice, though, that to make that kind of value statement, I have to get to the point where I'm comparing the utility of one person to another. I'm making kind of a big deal of this now, because over the last two weeks I've been emphasizing that care needs to be taken when we compare payoffs across players.
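The Pareto comparison itself is mechanical, so here is a short sketch of it (my own, with the game's outcome pairs): does outcome x make at least one player better off and nobody worse off, relative to outcome y?

```python
def pareto_dominates(x, y):
    # x Pareto-dominates y if nobody is worse off and somebody is better off.
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

print(pareto_dominates((2, 2), (0, 0)))   # True: education beats no-aid for both
print(pareto_dominates((-2, 5), (0, 0)))  # False: the rich government is worse off

# Note what the criterion will not do: it is silent between (10, 0) and
# (5, 5) in either direction. It cannot rank distributions, which is
# exactly the fairness limitation discussed above.
print(pareto_dominates((10, 0), (5, 5)), pareto_dominates((5, 5), (10, 0)))
```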
In particular, it's one thing to say that the rich government's payoffs are 0, 2, and negative 2, and that this reflects some reasonable assumption about what we could know about a rich government; saying that the poor government's preferences are measured on a scale that we can compare to the rich government's is a much, much bigger assumption. The point I've emphasized up to now is that in game theory, in solving games, we never need to compare one person's payoff to another person's payoff. That said, sometimes when we're outside of game theory, when we're thinking about what's going on in the world and whether we think it's fair, whether we think it's just, whether we think it's a good outcome, in those cases we may actually be comparing the utility, the well-being, of one person to the well-being of another. If we say that we never do that at any time in our lives, we're basically ruling out our ability to make any kind of judgment about fairness, and I don't want you to think that I believe that, or that you should. I don't. I think judgments about fairness are completely in order; they just require more care, and I would say more honesty about assumptions, than what we need to solve games. All right: whenever we say something is Pareto inefficient, we've identified a problem. Maybe it's a big problem, maybe it's a little problem, but it's usually an uncontroversial problem. Saying that something is Pareto efficient is not saying it's a fine situation; it's just saying that one particular type of uncontroversial problem doesn't exist. Given its limitations, though, Pareto efficiency is still going to be a focus for us, because there are so many strategic situations where the outcomes are Pareto inefficient, and when you see those kinds of outcomes in real life, in a situation you care about, the Pareto inefficiency is probably going to bother you. It bothers people when it seems like there is an uncontroversial solution to a problem and it doesn't get implemented; that's what it means to be stuck in a Pareto-inefficient state. Game theory is very good at helping us think about ways out of these Pareto-inefficient states, because we can ask ourselves: how could we change the game tree, how could we change the structure of the interaction, in a way that would get us to a Pareto-efficient outcome? How could we do it in this particular scenario? I'm asking concretely, with rich governments and poor governments. No takers on that; then let me show you a different version of this game and see if that brings up any ideas. I'm going to leave this version of the game here, and right next to it I'm going to put a more general version of the same game, in blue. This kind of game, which I'm calling the foreign aid game, is a special case of what is sometimes called the honor-trust game, and it's related to the prisoner's dilemma, game theory's all-time greatest hit, which some of you have probably encountered in other classes and which we're going to talk about mostly after the midterm. In the honor-trust version, player 1's choice is to trust player 2 or not. If player 1 doesn't trust player 2, they steer clear of each other: there's no interaction, and they go their merry ways with payoffs of 0 and 0. If player 1 does trust player 2, then player 2 has the general version of these two choices.
Player 2 can honor the trust, keep the agreement, and both players get the higher outcome; or player 2 can betray player 1, in which case we get payoffs like these. Honor-trust games are a huge feature of lots of employment situations. Player 1 could be an employer who wants to hire somebody to help with a job: a contractor hiring a helper, a professor hiring a research assistant, a mom hiring a nanny. Whatever it is, player 1 has to trust player 2 to do the job she wants done, and in many cases player 2 would be better off doing the job, getting paid for it, and having that employment relationship work the way player 1 wants it to. The problem is that player 2 is tempted to betray player 1: the contractor's helper is tempted not to do a very good job, the research assistant is tempted not to double-check the figures, the nanny is tempted not to take good care of the kids. All of these things would betray the trust; they would make the first player sorry that she trusted the second one. And the problem arises when the second player gets a high payoff from the betrayal. That's a good answer to the question I posed at the beginning: in lots of these situations, the game doesn't end here. If player 2 betrays the trust of player 1, then rather than just getting the payoffs right there, we could be in a game where player 1 has another move: punish or not. The mom can fire the nanny, and then the question is whether the punishment is worse for player 2 than the fun of the betrayal. All of these situations are ones where the player could be fired, where negative references could be held out as a punishment; depending on the particular context, player 1 could retaliate against player 2 in a way that's effective, though it may lead to further retaliation down the road. So one way out of this kind of honor-trust relationship is to have a possible punishment. I'm not going to go through the whole logic here (there are versions of this in Dixit and Skeath), but one thing I think you can probably see even without me solving the game is that, depending on the value of the punishment, we could get to the good equilibrium here, and the punishment would never have to occur. The punishment would be off the equilibrium path; it would never have to be exercised. It's got that same irony this game has in it: you might think that giving player 1 the opportunity to punish player 2 would be bad for player 2, but actually, no. If player 1 has the opportunity to punish player 2, that could be enough to move us from this equilibrium path to one where we actually do see honor and trust. Any other thoughts on how to get out of the bad equilibrium here, other changes? Yes, you're wanting me to spin this out. What's your name? Kyra. Kyra is asking how exactly this works; I've got this node floating around here. You know what, let me think about that. I want to do an example that shows you what I want you to get out of it, and the example I've got going in my head has too many loose ends. If I get real confident I'll do an example by the end of class today; more likely I'll do it on Thursday, once I've come up with one that works and checked it twice before springing it on you guys.
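In the spirit of Kyra's question, here is a hedged sketch of the punishment idea. The punishment payoffs here are my own assumptions, not numbers from the board: I assume honoring gives (2, 2), an unpunished betrayal gives (-2, 5), and a punished betrayal leaves both players at (-1, -1).

```python
outcomes = {
    "no trust": (0, 0),
    "honor": (2, 2),
    ("betray", "punish"): (-1, -1),
    ("betray", "no punish"): (-2, 5),
}

# Roll back: player 1 punishes a betrayal iff it pays more than letting it go.
p1_after_betray = ("punish"
                   if outcomes[("betray", "punish")][0] > outcomes[("betray", "no punish")][0]
                   else "no punish")
betray_value = outcomes[("betray", p1_after_betray)]

# Player 2, anticipating that, honors iff honoring beats (punished) betrayal.
p2_choice = "honor" if outcomes["honor"][1] > betray_value[1] else "betray"

# Player 1 trusts iff the anticipated outcome beats staying out.
trust_value = outcomes["honor"] if p2_choice == "honor" else betray_value
p1_choice = "trust" if trust_value[0] > outcomes["no trust"][0] else "no trust"

print(p1_choice, p2_choice)  # trust, honor: the punishment never occurs
```

With these assumed numbers the punishment sits off the equilibrium path, which is the irony the lecture points to: the threat alone moves the players to honor and trust.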
Okay, so: a note to myself with the nice green marker. Any other thoughts on what we could do? Stephanie. Stephanie is suggesting that the rich government could somehow put a condition on the aid, and I'd say there are two flavors of that. One is the flip side of Kyra's point about punishing betrayal: an alternative is rewarding good behavior. You spend it on education, and guess what, we give you more money. That's something that will loom large throughout the class: the idea that future rewards and punishments can get us out of these bad situations. Another flavor of Stephanie's point, another way the rich government could take away the poor government's ability to squander the aid, would be not to give them money at all: send books and teachers and things like that, things that are hard to turn into limousines. I guess it's possible you could put them to work building cars, but it might not be quite the Mercedes you had in mind. If you just don't have the choice here, it sort of seems obvious, but the interesting thing is that by taking away some of the poor government's choices, I get the poor government itself to a better outcome. That can happen in the employment scenarios too: an employer who's worried about an employee betraying her might try to take away the opportunity for the employee to do the betraying. You're worried about the nanny talking on the phone to her friends all the time? In the age of cell phones you can't really take away the phone, but you can do something like monitoring the second player, the person to whom trust is extended. One other way that I can think of, one that players in the real world use to try to get out of this game, requires work mostly on the part of the first player: employers and rich governments can try to select whom to send aid to, whom to hire. Employers often try to screen their prospective employees, and prospective employees go to some lengths to show that they have a history of honoring trust, that they are people who don't want to betray trust to begin with. An awful lot of the communication that goes on between people or governments or firms that find themselves in this kind of situation involves the player who could betray the other one trying to give solid evidence to the first player that they wouldn't want to do it: I don't have a payoff of 5 here from betraying you, that's not my payoff; I have a conscience, my mom raised me right, my payoff here would be really, really low. I would feel so bad if I betrayed you that it would hurt me more than it would hurt you, and therefore my choice would put us on the good equilibrium path. So: screening to find a player with preferences that promote honoring trust rather than betraying it. Okay, what I'm going to do now is start talking about uncertainty in games. I've been building up to this over the last couple of weeks, and my example with uncertainty is actually going to partly address Kyra's question. What you're going to see in this version of the game is not so much the rich government punishing the poor government, but something bad happening nonetheless that works like a punishment. So in this version of the game, I'm going to clean it up and see if this changes the equilibrium.
So we've still got the basics: the rich government can choose not to send aid, and we get our baseline payoffs there; the poor government can choose education or limousines. But now I'm going to do something we haven't seen in a game yet, and I'm going to talk through the story behind it while I do it. If the poor government chooses limousines, I'm going to leave this node blank for now, but I'll tell you what can happen: there might be a riot. In this case it's not punishment by the rich government; think of it as punishment by the population of this country, which is really tired of seeing those guys riding around with their machine guns and their big cars while the kids don't even have textbooks in their schools. They can riot or not. I'm not putting the population in as a strategic player in the game, and there are reasons why we might not. The decision to riot may have some strategic elements to it, and though I am a fan of strategic explanations, I kind of think that when those riots get going there is something pretty non-rational, non-strategic about it: people just get excited and emotions take over. So what I'm going to do is regard the riot as an act of nature, which is what we always do in game theory when we want to think about uncertainty. The way we're thinking about it here: I'm the poor government, and when I get to the point of solving this node, I'm not just thinking about whether I like education or limousines; I'm thinking, do I like education, do I like limousines, and how do I feel if those guys are rioting in front of my palace? I don't like that, as a matter of fact. What's different about this nature node is that whether or not there's a riot is not any player's strategic choice; it's just random. It's an act of nature; it's not something we can solve through backward induction. We just have to accept that maybe it will happen and maybe it won't. That is the key difference. We're going to represent uncertainty with these nature nodes, which look a lot like decision nodes: it's a node in the tree with branches coming out of it, leading in this case to terminal nodes, and in another game we're going to see, to other decision nodes. So in one sense it looks like a decision node, but the differences are important enough that I'll write them on the outline. Differences between nature nodes and decision nodes: nature nodes are random, not solved by some player's optimal choice. That is, I'd say, the most fundamental difference. The difference that matters in setting up the game is that nature nodes have probabilities on the branches. So far this nature node looks just like a decision node: nature can choose riot or not. But if nature were a player, I'd have to put in payoffs for nature, and nature would be making a strategic choice; you'd have to be really superstitious to think that nature is a player that's out to get you or out to help you, and we avoid that here. Nature is just, in this case, flipping a coin. How likely do we think a riot is? Just as likely to have one as not. Often when we're really uncertain, that's exactly what we mean: one alternative is just as likely as the other.
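As a small sketch of the difference being drawn on the outline, a nature node in code carries no payoffs and no optimizing owner, just probabilities on its branches (the representation is my own):

```python
# A nature node: no payoffs, no player optimizing; only branch probabilities.
riot_node = {
    "type": "nature",
    "branches": {"riot": 0.5, "no riot": 0.5},
}

# Part of setting up the game: the probabilities must add up to 1.
assert abs(sum(riot_node["branches"].values()) - 1.0) < 1e-9, \
    "probabilities on a nature node must add up to 1"
```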
I'm going back to black here to really underscore that the probabilities attached to a nature node are part of setting up the game. The probability of a riot, I'm writing it out, equals 0.5, and the probability of not having a riot also has to equal 0.5. When you're setting up a game where some part of the outcome is uncertain, you write these probabilities down as part of the setup. From the poor government's point of view, and actually from the rich government's point of view too: if I choose those limousines, I don't know whether there's going to be a riot. I care whether there's going to be a riot, but I can't strategically anticipate it; I just have to accept that I think a riot is just as likely as not, and I accept that fact by putting these probabilities in here. So: I'm the poor government, I choose limousines, and there's no riot. The outcome is just like it was in the game before; that's what we were assuming before, when we weren't even worrying about riots. The rich government's payoff is negative 2, and the poor government's payoff is 5. It's a good payoff there: no riot, I've got my nice car, fine, very good. And the rich government: oh, those guys, we gave them money and they squandered it, so embarrassing; minus 2. If there is a riot, though, I'm going to assume the rich government isn't affected. The rich government is really annoyed about its aid money being spent on limousines, riot or not; it all washes out the same, so its payoff here is also negative 2. The poor government, though, is a little bit scared by this riot. Things are getting bad out there, they're burning things: my payoff is negative 2 here, a low payoff. So this, as I said, is pretty close to Kyra's question about adding something that would punish the poor government. What's new about it, the different twist, and what is actually going to be enough here, is that the punishment isn't certain. Nature, in the form of rioting people, could punish the poor government for the limousines, but it's not going to happen for sure. So what are we going to do now? I'm switching to blue, and let me draw your attention to how this game does and does not look like the games we've solved so far. It does look like the games without uncertainty in that all the events are represented by nodes, the things that can happen at the nodes are represented by branches, we've got the branches labeled so we know what they correspond to, and at the bottom we have payoffs. That looks mostly like what we've done so far. One thing that is different is that now we have three different labels on our nodes but only two payoffs at the bottom, and again I want to emphasize that when you add nature as, quote-unquote, a player, nature doesn't get a payoff. Nature is not happy or unhappy about the riots; nature is not solving things backward; nature is not strategizing. Nature is just, in this case, going to flip the coin about rioting or not. So what's missing from the game here is a payoff corresponding to the "decision maker" at one of the nodes, where "decision maker" is in scare quotes because it's not a decision, just an event that occurs. The other thing that's different, the thing that takes the place of a payoff for nature in our analysis, is these probabilities. We're going to use those probabilities now to solve the game the same way we've solved games without uncertainty: from the bottom
up we're going to start with the very lowest node and we're going to roll it back okay but starting with this node what are we going to do we don't have tie-ups for nature we can't figure out how nature would make the choice of nature was strategic what we do instead is the way we handle uncertainty in many many other situations we figure out what the expected value of the outcomes are okay so this is important to nothing i'm going to kind of add it to my list differences between nature nodes and decision nodes mitra nodes are not solved by optimal choice they have probabilities on the branches and last certainly not least probably most important of all we solve the nature node by replacing it with I'm seeing expected values but expected value of what expected value of the payoffs I'll write it out like that expected value lue of payoffs what we'll say for shorthand a lot of time is just the expected payoffs so what I'm going to do is instead of putting the strategic equivalent here you could think about the probabilistic equivalent what on average does the poor government expect to get from limousines hey what is the poor government's expected payoff that's what we want to replace this node with rather than the strategic equivalent right so in terms of strategic thinking in general the poor government is not superstitious it doesn't think that it can out strategize nature but it knows that it can incorporate the information contained in the probabilities and the payoffs to come up with an expectation and expect it outcome for this note and expected payoff for itself is the part of the outcome that it cares about okay and let's see I think what I'm going to do is calculate the expected value in the context of the problem first and then give you the formula let me just ask right now um so how many of you guys know how to find the expected payoff here okay that's pretty good how many of you really have no clue what I'm talking about okay that's okay there's the both cells filled that all right so this is what I'm going to do what I'm going to do is I'm going to say with probability point 5 hi I'm the poor government now in the limousine loving poor government here my payoff is going to be negative 2 and with probability point 5 my payoff is going to be 5 i'm going to multiply each possibility by its probability and add them up okay and this is my expected payoff respect to pay off here ok so just point 5 is one half so I'm going to get this whole thing over one-half negative two plus five see what I'm doing here ok just simplifying this so looks like three house huh this is what I expect to get one and a half okay a good way to think concretely about expected values in any context and statistics in finance any place that you encounter probabilities and gambling is to think about if the poor government played this game over and over again what would be the average winning okay that most people find it easier to think about expected values by running this thought experiment just a second of doing it over and over again yes of a poor the poor government okay so the way I got this expression was let me use a different color hi one probability here the payoff associated with it the other probability there and the payoff associated with that okay yes no it won't the next step is going to be to think about other probabilities here now point 5 is a nice probability and it's one where when you think you're kind of in the maximum amount of uncertainty you ask yourself what do I think is going to 
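To put that arithmetic in one place, here is a minimal sketch in Python of the calculation just described. The probabilities and payoffs are the ones on the board; the function name is mine.

```python
# Expected payoff at a nature node: weight each possible payoff by its
# probability and sum. Numbers from the riot example: with probability
# 0.5 there's a riot (poor government gets -2), with probability 0.5
# there isn't (poor government gets 5).
def expected_value(branches):
    """branches: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in branches)

print(expected_value([(0.5, -2), (0.5, 5)]))  # 1.5 -- the three halves
```

Playing the game "over and over again" in the thought-experiment sense is exactly what this weighted sum computes: the long-run average winnings.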
The next step is to think about other probabilities. Point five is a nice probability, the one for maximum uncertainty: you ask yourself what you think is going to happen, whether one outcome is more likely than the other, and you truly don't know. Point five captures that level of uncertainty because it means both outcomes are equally likely; you can't say "it's probably this" or "it's probably that." But we can use the same formula for cases where we think a riot is much more likely: ninety percent here, ten percent there. Elena is asking whether we're always going to calculate the expected payoff for the poor government. Yes, but we're going to do it for the rich government too. Before, we had strategic equivalents bubbling up here; now we'll have expected values that then feed into the strategic equivalent. So we'll do it for both players, but I'm starting with the poor government because that's the payoff that governs the behavior at the next node. It's also true that in this example the rich player's payoff doesn't depend on the uncertainty (they get negative 2 either way), but if the rich player had different payoffs here, we would calculate her expected value too. All right, let me do a slightly more abstract version of this. Sorry, I'm going to have to use a little extra jargon here; I don't see any way around it, and if you haven't seen it before, I guarantee you'll see it again. Learning how to think systematically about uncertainty, learning how to use probability, is one of the most important things you should learn before you graduate college. More of you are learning it now in high school; when I went to high school there was no thought of teaching high school kids probability, and that's changed, which is a really good thing. As big a fan as I am of you learning calculus, it's actually much more important to learn probability. Thinking about uncertainty just happens all the time, and it is a very hard thing to learn without a class. It's slippery; it's back to the stuff I talked about on the first day of class. It's not something our brains evolved to do very well: our brains didn't evolve to process uncertainty in an unbiased way, and many of you have taken psychology classes and know that, left to their own devices, people will be wildly wrong in the way they assess probabilities. An example: many more people are really afraid of flying than of driving on the 405, and the fact is that statistically you're in much more danger of a fatal accident on the 405 than of being in a plane wreck. Plane wrecks just really don't happen very often; the odds are very, very low. The 405? I'm on it twice a day most days, but the fact is people die there; it's the much more dangerous thing. The point I'm getting to is that when we're left to our natural way of thinking, we don't think about probabilities systematically; we don't think about them correctly. So the formula I'm about to give you for calculating expected value is going to look unnatural, and you'll have to spend some time with it if you're new to it, but it's really worth it. It's an important tool you can use to complement your intuition: something your intuition won't do well on its own, but that will help you put your intuitions to good use. All that windup is getting me to the formula for the expected value of a random variable. This is why I was hemming and hawing: I don't like having all the jargon here. I know some of you don't like it either, and the ones who don't like it the most are probably the ones who need to hear it the most. A random variable is just anything we're uncertain about: an unknown outcome with probabilities assigned to each possible event. The random variable we calculated the expected value of up here was the payoff of the poor government. That payoff wasn't known, but I had a probability assigned to each outcome. This is the divide-and-conquer approach to uncertainty: we don't know what's going to happen, so we make a list of the possible things that can happen and assign probabilities to them. Sometimes we have a good deal of precision in the probabilities we assign; other times we're just picking approximate numbers that represent what we think is going on, which I'd say is more the case in this point-five example. So right now we're thinking about any old random variable: a list of possible outcomes and probabilities. If a random variable, capital X, takes on values x1, x2, up to xn with probabilities p1, p2, all the way up to pn, then the expected value of X (this capital E is the expectations sign: take the expected value of random variable X, the thing whose value I don't know) is found by taking each possible value, each thing that could happen, and multiplying it by its probability: the expected value is x1 times p1, plus x2 times p2, plus x3 times p3, dot dot dot, until they're all used up. Now, this is more than we needed up here, where the random variable took on just two values. One possibility was negative 2; call that x1, and the probability that goes with it, point five, is p1. The other possible value is x2 (no riots, got my limousine), and its probability is p2. We took those numbers and put them into the formula. I actually switched the order of the multiplication, but you remember that p1 times x1 is the same as x1 times p1: p1 x1 plus p2 x2 equals the expected value of this random variable. This formula is the one many of you have seen in statistics and probability classes. Any social science that uses probability to help us understand what's going on, any social science that deals with uncertainty, calculates expected values, and we all do it the same way. It isn't some special political science way of doing it, and it's not special to game theory; it's used ubiquitously for dealing with uncertainty, and we're using it in game theory the same way. So that's the general formula. Let me point out one special case that covers a lot of our problems: a random variable with two possible outcomes, a nature node with two branches. When it's a nature node with two branches, the expected value is just the probability that one thing happens times that thing (the probability we get a payoff of negative 2, times negative 2) plus the probability that the other thing happens times that thing.
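Set in math, the board notation reads as follows; nothing here goes beyond what was just said:

```latex
% General formula: X takes values x_1,\dots,x_n with probabilities p_1,\dots,p_n
E[X] = p_1 x_1 + p_2 x_2 + \cdots + p_n x_n = \sum_{i=1}^{n} p_i x_i

% Two-branch nature node from the riot example:
E[X] = p_1 x_1 + p_2 x_2 = 0.5\,(-2) + 0.5\,(5) = \tfrac{3}{2}
```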
So that's the formula we have up there, and if you look at it, I think you can see why it does what we need when we're making a choice under uncertainty. When we're the poor government choosing whether we want those limousines, we need to think about how likely the riot is (is the bad thing really likely to happen or not so likely?) and also about how bad it is. We need to think about both how likely the different outcomes are and how good or bad they are, and I think that when we make choices under uncertainty in the real world we balance those two factors. If you're deciding today whether to take an umbrella (two days ago they were saying it might rain tonight; now they're not), you'll be thinking about how likely the rain is, an eighty percent chance or a twenty percent chance, but you'll also be thinking about how much you hate getting wet. Is it a day when you have to be outside a lot? Do you have hair that looks really bad when it rains? Are you carrying a lot of stuff? Do your shoes leak? All of those things are captured in the x's here. So you're thinking about both the values of the things that can happen to you and how likely they are. Yes, Brandon. Brandon's question is whether we need to do this for player 1 as well, and he appropriately qualified it: if this were something where it mattered for player 1. You can look at the game here, see that it's negative 2 either way for the rich government, and correctly infer the rich government's expected payoff without doing the calculation. If these weren't the same, we'd have to do the calculation. The even better news is that if we're solving away, kind of in a zone, and we calculate the expected value here even though we don't have to, we'll get the right answer. In fact, I think I'll do that right now. We've had this little digression, an important digression, on calculating expected values in general; now I'm going to erase all the analysis I've done so far and just solve the game, putting those things together. I just have to rewrite part of it: my choices at the nature node were riot, probability point five, and not riot, point five, and the payoffs (I just erased them) were, for riot, negative 2 and negative 2 (is that right?), and over here negative 2 and 5. Now we solve it from the bottom up, the way we've solved all games, except that we replace the nature node with the expected payoffs for both players. Doing that, I'll calculate the expected payoff for player 1. Again, I don't have to, but I'm showing you that if I put these numbers and probabilities into the formula I'll get a sensible answer, so you don't have to worry about that. One half times negative 2 plus one half times negative 2: that's my expected payoff for the rich player. It's negative 2 plus negative 2, over 2, which is negative 4 over 2, so I get my negative 2 there. Then, on the other side of the column, the expected payoff for the poor player: one half times negative 2 plus one half times 5, so I get negative 2 plus 5 over 2, which is three halves. So that is my expected payoff for the poor government.
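Here is a sketch of the whole rollback with the nature node folded in. The limousine and education payoffs are the board's; the (0, 0) no-aid baseline and all the Python names are my assumptions, not the lecture's.

```python
# Payoffs are written (rich, poor). The nature node is replaced by the
# expected payoff vector; each decision node by the mover's best option.
def expected(branches):
    """Expected (rich, poor) payoffs over (probability, payoffs) branches."""
    return tuple(sum(p * pay[i] for p, pay in branches) for i in range(2))

limousines = expected([(0.5, (-2, -2)),   # riot
                       (0.5, (-2, 5))])   # no riot -> (-2.0, 1.5)
education = (2, 2)

# Poor government compares its own coordinate (index 1): 2 beats 1.5.
poor = education if education[1] >= limousines[1] else limousines
# Rich government compares aid against an assumed (0, 0) no-aid baseline.
rich = poor if poor[0] >= 0 else (0, 0)
print(rich)  # (2, 2): send aid, and the poor government chooses education
```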
Now I've rolled up this part of the tree and replaced it with the expected payoffs. So I'm the poor government, looking forward and reasoning backward, and I'm asking: do I want education, which gives me a payoff of two, or limousines, which give me an expected payoff of three halves? The three halves balances the probability of a bad thing for me (limousines plus riots) against a good thing for me (limousines with no riots). So I compare the three halves to the two; I compare the expected payoffs. What's the poor government going to do? Education. The strategic equivalent now is 2, 2. What's the rich government going to do? Aid. So here's a case where this expected value, this possibility of punishment for player 2, changes the equilibrium. Now the rollback equilibrium is that the rich government sends aid and the poor government chooses education if aid; what's changed is the possibility of punishment. I have five minutes, and I think I have a five-minute thing that will let you get a really good start on your homework. I've said a couple of times today that calculating expected values is done outside game theory; it's a very general way to approach problems, and one place you'll see it in a way that looks very similar to game theory is in what's called decision theory. What's the difference between decision theory and game theory? A decision, in this context, means a game with just one player: one player and nature. You have an example of this as the very first thing on your homework. Sometimes people say that decisions are a game against nature: you're not trying to strategize against another person, so they're simpler in that respect, but you are trying to deal with uncertainty, and to deal with it systematically. One place decision theory gets used a lot, and where I've actually encountered it, is in health care and treatment options. My HMO, for example, has a website where, for some common diagnoses, they give you the probability of having a side effect from some treatment and the probability that it will make you better, and they actually set up little trees to help you decide whether the treatment is a good choice for you. In military planning, sometimes military strategists are actually doing strategy, thinking about the other army, but a lot of military activity is just logistics: getting things from one place to another, unsure of what's going to happen, working in the face of uncertainty. Decision theory is used there too. So a decision is just a game with one player, or, as Elaine says, a game between a player and nature; you can think of it that way, except that nature's not out to get you. You've got to remember that. I'm going to quickly do a decision now, and it might help me illustrate something later as well. So: a decision, one player plus nature, how to make a choice under uncertainty. The scenario I just gave (do you bring your umbrella or not, you don't know whether it's going to rain, you pick the optimal choice) would be one example. One that's more vivid, unfortunately vivid for me right now, is a decision I made as a mom over the break. I was visiting my family in Colorado and watching my son do some very foolish things while he was sledding, and my decision was to allow or forbid him to stand up on his little plastic sled and go over the bumps. Do I allow it or do I forbid it? If I allow it, nature has a move: there's going to be a bad crash or not. There's no strategy here; I'm making a decision by myself, and this is a situation where, if I had yanked my son off the slope, he would have obeyed me; he's a pretty good kid. So forbidding my son from sledding, let's call that the 0. Now, I first wrote 0, 0 there. What did I do wrong? He doesn't get a payoff; he's just a passive observer (boy, was that true by the end of the story). It's just my payoff here: a decision has only a single payoff-receiving player. If I allow him to do his goofy tricks and there's not a bad crash, well, I'm having a good time: I'm standing there talking to my brother and dad, we're watching the kids enjoy the snow, and I get a payoff out of that. If there's a bad crash, well, it's bad. It's way bad. The question is what I do here: allow the foolishness or forbid it? What I'm going to do right now is let the probability be a variable. You were asking whether the probabilities are always point five, and in this case I actually didn't think a bad crash was all that likely. Let's just let it be a variable p, and then the probability of no crash, when there are just two things that can happen, is 1 minus p. Question in the back: you're wondering where the probability comes from, what I'm factoring in. The factors that would go into my estimate of p: how good a sledder my son is, how icy the snow is, how big the hill is, what the other kids are doing, whether he's a klutz. All of those things. I'm going to let this be a cliffhanger, but one thing you could do right now is solve this and tell me how likely a crash could be for me to still allow him to sled. That's the question I'll leave you with: how high can p be for "allow" to be the correct decision? That's where we'll pick up on Tuesday.
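For the cliffhanger, here is a worked version under placeholder numbers: say allowing with no crash is worth 5 to me, a bad crash is worth -20, and forbidding is worth 0. Those payoffs are my own stand-ins, not the lecture's.

```latex
% "Allow" is the correct decision when its expected payoff beats forbidding:
E[\text{allow}] = p(-20) + (1 - p)(5) \ge 0
\quad\Longleftrightarrow\quad 5 - 25p \ge 0
\quad\Longleftrightarrow\quad p \le \tfrac{1}{5}
```

With those stand-in payoffs, the crash probability can be as high as one in five before forbidding becomes optimal; different payoffs move the threshold, which is the whole point of the exercise.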
So: full strategies. We're going to go through that again, and I made a handout that I hope will facilitate it. I'll start sending the handout around while I get the outline up on the board; you can start that, thank you, there should be enough for everyone. This is kind of a long outline. The first thing we'll do today is talk more about strategies, actions, and equilibrium and how those terms relate to each other. We'll do a little more strategy counting, so that hopefully you get more comfortable with that. Then we'll take a broader view and talk about game theory in general: the scope of game theory, how broad the set of situations we can analyze with game theory is, and what some of the limits are. We might start talking about variable payoffs, but the stuff down here that I can see some of you craning your necks to copy down: it's very doubtful we'll get that far; it's more of what we have for next week on the syllabus. Also a reminder that your problem set is due on Tuesday. The due date is right on the homework assignment, right at the top, just as it will be on all of them; you don't need to email me or your TA about that, just check the homework and it's there. Hand them in in lecture, so bring them with you to class. All right, final preliminary remark, which I'll put up here: I got a request to recommend some extra problems beyond the homework, and I'll do this periodically. For where we're at right now, my recommended problems would be Dixit and Skeath numbers one through five in chapter 3. At some point I'll be able to post answers for those on the website, probably not until late next week. Even though it's only five problems, the problems have a lot of parts, so that will give you a set of things to work on. So what I'll do now is turn to the handout I just sent around on strategies in the fundraising game. What became clear to me at the very end of class on Tuesday was that there were two things people were confused about. One is the issue of what a strategy is. Someone want to tell me what a strategy is? Yes: a combination of actions that a player will take. And how many actions in a strategy, someone else? An action for each decision node. Very good; what's your name? Kenny. Okay. So a strategy, remember this from Tuesday, is a bigger thing than an action. Actions are our building blocks, the components of a strategy, the set of moves I can take. But if we're in a game tree and I'm a player who controls more than one decision node, my strategy has to give me an action for every node. If it's a very big tree and I have six decision nodes, each one of my strategies has to have six actions in it; it has to tell me what to do at every single point I could possibly find myself at. So strategies are bigger than actions, and an equilibrium is bigger than a strategy: a strategy is a set of actions, one for each node, and an equilibrium is a set of strategies, one for each player. Right now I'll keep the focus on strategies versus actions, and I'll go over the business about equilibrium at the end. That's the conceptual issue, and from hearing the responses and generally looking at your faces, I think you're comfortable with it.
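One way to keep that taxonomy straight is to look at the shape of each object in code. A sketch, where the node labels are my shorthand for the fundraising game, not the handout's:

```python
# An action is a single label; a strategy maps every one of a player's
# decision nodes to an action; an equilibrium maps every player to a
# strategy. Each layer is built out of the one below it.
action = "RF"                                    # one action
challenger_strategy = {"if In raised": "RF",     # one action per node
                       "if In didn't": "RF"}
equilibrium = {"Incumbent": {"start": "RF"},     # one strategy per player
               "Challenger": challenger_strategy}
```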
There's this other issue, which is not hard to think about but is an annoyance, and that is how to write down a strategy. That, I think, is where things got a little confusing at the end, and that's why I thought the handout would be a good choice. The handout refers to our fundraising game, which you probably have committed to memory at this point, and what it does is write the strategies for the challenger three different ways. Basically we'll be using all three of those ways. As I indicated on Tuesday (it was true on Tuesday and it's true now), the middle way of doing it, what I call abbreviation style one, is the way I would consider the default. To my mind it's a nice combination: enough words that you know what the strategy actually is, but it doesn't take forever to write out. What I think I overstated on Tuesday is that this is the only way you need to write strategies. The fact is, the reason there are different levels of abbreviation is that sometimes you want a very terse abbreviation. When you have a game with a lot of nodes, it becomes cumbersome even to write out simply "RF if RF, RF if N"; when you're managing a lot of nodes, it becomes easier just to list the actions in the order the nodes occur. That's abbreviation style number two, the most abbreviated way to do it, and as I said before, it's a way where, if you're not careful, you can lose touch with what those strategies actually mean. Another way to think about abbreviation style two, and about abbreviations in general, is that they save space on the page and save writing time, but they don't necessarily save thinking time: the shorter the abbreviation, the more effort you have to put into decoding it. While I'm talking, I'm going to put our familiar game tree back up on the board to illustrate the very abbreviated style, where you simply write the actions in the order the nodes occur. It's pretty terse, and you have to take a deep breath when you look at abbreviation style number two. The first strategy here tells me the challenger should raise funds at this node and raise funds at that node; the second tells me the challenger should raise funds at the leftmost node and not raise funds at the next one. It's saying exactly the same thing: strategy two double-prime here is exactly the same thing as two-prime and two. So the idea is to give you this list of four strategies for the challenger at different levels of abbreviation, to emphasize that these are different ways of representing the same thing. There will be times (in fact, there may be a time in about two minutes, when I go to the next game I want to analyze) when the very abbreviated style is what we want, when we really do want to be as short as possible. One context came up at the end of class Tuesday, when I was asking you questions and hoping some of you would call out responses; you're generally pretty good about doing that, and it's a lot easier to call out "LLL" as a strategy than "L if L, L if R, L if M." Even just saying it like that, I messed up the order in a way that made it a little unintuitive. So get comfortable with using all three levels of abbreviation, but when in doubt, my recourse is abbreviation style number one, and I recommend it to you. Another thing I want to underscore (I meant to put it on the handout, and I'm not seeing it there, so let me underscore it even more loudly) is that the book has a good treatment of this exact subject, the business of writing and counting strategies: see Dixit and Skeath, pages 58 to 59. It's the same thing I just went over, in the context of a slightly different game, and I think it always helps to see the same idea in more than one context. Still not done with this: I want to go back to that boring game I put up on the board at the end of class Tuesday, the fascinating choice of left, middle, and right. Let's put that game back up and talk about those strategies for a moment. In that game we had player one with a three-way choice (player one can choose left, middle, or right), and then player two just chooses left or right. This is the first version of the game I had on the board, not the little variation I went to afterward. So, revisiting what we went over on Tuesday: the first mover's strategies, as in the fundraising game, are easy to identify and easy to count. The first mover has only one decision node, so the first mover has three strategies, and each strategy has only one action. The second mover is a little more complicated: her strategies have three components. We know each of her strategies has to have three actions in it: what to do at this node, what to do at that node, what to do at the rightmost node. The total number of strategies is these two choices combined with each of these two, which is two times two, four possible combinations, multiplied by each of these two: the total number of strategies is 2 times 2 times 2, which is eight. Eight total strategies for player two, and on Tuesday we did not list them all, so what I want to do now is list all of them systematically. This is a context, as I alluded to a moment ago, where I'm going to use the most abbreviated style. When you're asked to list all strategies, it's good to be systematic; there's more than one way to be systematic, and I'll show you mine. One thing player two could do is left here, left here, and left here: that's left left left. Another is left here, left here, and right there: left left right. Another is left here, right here, right there: left right right. Yet another would be right right right; another, right right left. Actually, let me start a new column here to keep things tidy (I know I'm going to have eight, and I'm halfway there), and let's put semicolons in between to keep them separate: right right left; right left right; right left left. And what have I left out? You tell me. Left right left. Very good. There they all are; there are eight, and these are all the strategies. If we had wanted to write them even in abbreviation style number one, it would have taken a lot more time and a lot more board space. But let's translate one into abbreviation style number one, and let's pick a more mixed one: this strategy here, right left right, means the same thing as "right if left, left if middle, and right if right." That's abbreviation style one.
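The listing-and-counting step is easy to hand to a computer. A quick sketch, where itertools does the 2 x 2 x 2 enumeration and the labels are mine:

```python
from itertools import product

# Player two has one decision node for each of player one's moves and
# two actions at each node, so a strategy is one action per node:
# 2 * 2 * 2 = 8 strategies in all.
nodes = ["if left", "if middle", "if right"]
strategies = list(product("LR", repeat=len(nodes)))
print(len(strategies))   # 8
print(strategies[0])     # ('L', 'L', 'L') -- the LLL strategy

# Abbreviation style one for the mixed strategy RLR:
print(", ".join(f"{a} {n}" for a, n in zip("RLR", nodes)))
# R if left, L if middle, R if right
```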
abbreviation style one abbreviation style one is a little closer to ordinary language because it tells us what player two should do and specifically tells us what action of player one corresponds to this okay so it's identifying the nodes of player two in terms of the action of player one that led to the node if we really wanted to say it in ordinary language I'll say it but I'm not going to write it this strategy would be okay starting now I'm saying the strategy choose right if player one chooses left choose left if player one chooses middle choose right if player one chooses right that's the end of the strategy that whole mouthful is the strategy I'm not done saying the strategy until I've said all of those parts of it okay all right on tues yes so that one complete strategy is made up with actions right that's exactly right that's exactly right one complete strategy three actions three different ways of saying one complete strategy okay that's that's why there's kind of a lot to get all together here but thinking about what the strategy is and thinking about how to talk about it on Tuesday I didn't put payoffs in the game but I think this would be a good game to also think about finding the equilibrium of and to underscore what an equilibrium is since it's an even bigger thing than a strategy so I'm now going to put pay offs again okay so it hasn't actually been a full game tree until the payoffs are in there so here are some payoffs 22 20 55 06 2 two and 1 three okay so this game has what six terminal nodes six different paths that the players can take and for each terminal node I've written down the payoff for player one first and for player two second now I have the whole game now I have something that I can solve and as is going to be the case with every sequal game that we do in polyi 30 in every sequential game that you do in the rest of your life we start at the bottom and work our way up we solve the game backwards okay we solve it by thinking about what if we get to this node what will happen what if we get to that node what if we get to that node we use what we find by analyzing the second nodes to figure out what will happen at the first one okay so we're solving it from the last thing that will happen up to the first thing we're doing that because we think the players are strategic and the players are thinking ahead if player one is thinking ahead she's not thinking do I like left middle or right better she's thinking do I like what's going what player two is going to do if I choose left if I choose middle if I choose right player one is trying to anticipate player to if we want to understand how she anticipates we have to anticipate also so if we get to this node what's player two going to do left player two is looking at player two's payoff two is greater than zero player two is going to choose left again to emphasize when I solve games my way of doing it is to highlight the branch that is going to be taken Dixit and Ski's way is to cross out the branch that is not going to be taken um you can do it either way as long as you're consistent okay if we get to this node what's player two going to choose right player two says wow six six is good I'd like to have six six is better than five this node what's two G to do right okay so now that I figured out what two is going to do in this lower part of the tree I don't need to pay any attention attention to it anymore I can just replace this whole set of nodes with the Strategic equivalent the Strategic equivalent is the payoffs 
associated with the optimal Choice okay so the Strategic equivalent of this node is 22 the Strategic equivalent of the med the middle node is 06 and the Strategic equivalent here is 13 okay so I'm copying the payoff that corresponds to the action that we think player two is going to take here player one in and now we're ready to figure out what player one's going to choose at her note player one is going to be thinking in terms of these strategic equivalents so what is player One's best choice left right player one can get a payoff of two a payoff of zero a payoff of one okay all right so now we've solved the game and we're at the point where we can write down the equilibrium that's maybe the final the final step here by solving the game we've identified what we expect to happen Okay we think that one is going to play her equilibrium Str strategy two is going to play two's equilibrium strategy and we're going to get to this outcome yes very good question what's your name kenyatta's question is does player one always just look at player one's payoff and the answer is yes okay kinata went on to wonder wouldn't player one maybe sometimes look at player two's payoff like especially in a case if player one had a tie do we think the player one might look at player two's Choice the answer is no now that seems a little bit excessively mean on the part of the players right I mean we hope real human beings are a little bit better than that if we think the players care about each other's preferences then what we would do is we would put something in player one's preference that would reflect player two's High preference okay and we can do that either when player one gets a higher payoff by helping player two or when player one is even meaner than these guys and gets a high payoff from hurting player two right that's a possibility too so if we think that the players care about each other's payoff what we do is we add something into the other regarding players payoff to reflect that okay so what we want what we want to be able to do when we're solving the game is to just have one number to look at okay so these numbers the players payoffs have to incorporate everything that goes into the player's preferences so if the player likes the other player the player who does like the other players payoff should be higher to reflect it if the other player is spiteful and wants to hurt the other player that should be in the player's payoffs as well okay so we could do that in the fundraising game Okay so we've if we were asked to predict what's going to happen in the game what we would predict we would predict about what we would observe is the player one would choose one player two player one would choose left player two would choose left we end up here this actually I think I'm going to switch colors um I've got a system here what's in black is setting up the game what's in red is solving the game and what is now going to be in blue is going to be kind of analyzing the solution interpreting the solution talking about the solution okay so one thing we can say about the game if we're analyzing it is that this would be our predicted payoffs and the predicted outcome is left left those payoffs another way to say that is that the equilibrium path is left left okay um you guys aren't going to like this but it's true here I'm saying left left and it's not a strategy okay it's not a strategy because on the equilibrium path I'm talking about player one choosing left and player two choosing left okay so uh sorry about 
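The rollback we just did, sketched in Python. The payoffs are the board's, written (player one, player two); the tree encoding and names are mine:

```python
# Each of player one's moves leads to a node where player two picks
# between two terminal payoffs.
tree = {
    "left":   {"L": (2, 2), "R": (2, 0)},
    "middle": {"L": (5, 5), "R": (0, 6)},
    "right":  {"L": (2, 2), "R": (1, 3)},
}

# Player two's best reply at each node maximizes her payoff (index 1);
# the strategic equivalents are the payoffs her replies select.
reply = {n: max(opts, key=lambda a: opts[a][1]) for n, opts in tree.items()}
equiv = {n: tree[n][reply[n]] for n in tree}

# Player one maximizes her own payoff (index 0) over the equivalents.
move = max(equiv, key=lambda n: equiv[n][0])
print(reply)              # {'left': 'L', 'middle': 'R', 'right': 'R'}
print(move, equiv[move])  # left (2, 2) -- the equilibrium path and payoffs
```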
Those actions are fundamental; we can put them together in lots of different ways. When we string together the actions of one player, one for each of her decision nodes, we call that a strategy. When we string together the actions of different players, that's how we specify the equilibrium path. The equilibrium path is not the same thing as the equilibrium; it's a narrower thing. So, the equilibrium in this game: what am I looking for when I'm looking for an equilibrium? A set of strategies, one for each player. That's exactly right; let me repeat it. First, an equilibrium is a set of strategies. Remember, equilibria are our biggest thing: strategies are composed of actions, and equilibria are composed of strategies. How big is the set of strategies? One for each player. Can they be any old strategies? No: the strategies must be, I'm going to write it, best responses to each other, and I'll say some more about that. What that means, if you go back to your notes from Tuesday, where I talked about this for some time, is that given one player's strategy, the other player has to be doing the best she can, and the reverse. Given player one's strategy, player two's has to be a best response, and the best response (let me again emphasize Kenyatta's point) means the response that gives player two the highest payoff for herself. Looking for the best response from player two's point of view, you just look at player two's payoff. It also has to be true the other way around: given player two's strategy, player one's equilibrium strategy has to be a best response; player one can't get a better payoff for herself by doing anything different from her equilibrium strategy. That's what it means for the strategies to be in equilibrium: given what you're doing, I can't do better by making a different choice, and given what I'm doing, you can't do better by making a different choice. The strategies are reinforcing each other. Yes, that's exactly right: the equilibrium path is not a strategy, and actually even an equilibrium is not a strategy; an equilibrium is a set of strategies, and an equilibrium path is a part of an equilibrium. Sorry the taxonomy is getting so detailed, but it's good that you're picking up on it. Actually, let me write "equilibrium path" over here, just below "equilibrium." The equilibrium is our biggest thing; we build it up from strategies; then we can take the equilibrium apart, and this, I think, is the point. Let's see if I can remember your name: are you Kiara? Good; I'm not sure I can do that with everybody, but I'm trying. Kiara's point is that you can take the equilibrium apart in different ways: the equilibrium path is the part of each player's strategy that we expect to see. So for this equilibrium over here, let's write out the full equilibrium and then find the equilibrium-path part of it. The full equilibrium is what we find when we solve the game backwards: we find the best strategy for player two, because we're finding her best action at every point that could occur in the game, and we find the best strategy for player one, same story. Player one's equilibrium strategy is to choose left; player two's equilibrium strategy is left if left, right if middle, and right if right. I have one more color left, so let's add one further layer of notes. This is player one's equilibrium strategy: player one has three possible strategies, but only one is part of an equilibrium. This whole thing is player two's equilibrium strategy: of all eight strategies player two has, only one is part of an equilibrium, because only one can be in the self-reinforcing relationship with one of player one's strategies, and the one that is part of the equilibrium is this one. In abbreviation style one, the equilibrium strategy of player two is "choose left if player one chose left, choose right if player one chose middle, choose right if player one chose right"; in the middle style, L if L, R if M, R if R; in the tersest style, LRR. From the most long-winded to the tersest, that is the strategy. The strategy of the second mover is composed of actions, one for each node (also true for player one, but she has only one node), and the equilibrium is composed of strategies, one for each player. So now we've gotten as big and complicated as we possibly can, and the other thing we can do is identify the equilibrium path. Switching back to black: the equilibrium path is what we expect to actually happen. We don't expect to see player two choosing right, because we're not going to get to this node or that node; player one's not going to go there. Player one is not going to give player two the chance to choose (0,6), and she's not going to give two the chance to choose this (1,3) either. You can see the equilibrium path in the tree by finding the connected path of chosen branches that begins at the top node and goes all the way down to the bottom. There's only one such path here: it's this one and this one; it's this part of the equilibrium. So when we build up an equilibrium, we do it one strategy at a time: player one's strategy, player two's strategy. Then, when we think about it, we divide it up a different way: into what we think will happen, the equilibrium path, and what's left, which is off the equilibrium path. The off-path part is still part of the equilibrium; it's the part that lets us know why player one chooses left. You might be looking at these payoffs and noticing that the equilibrium payoff (2,2) doesn't necessarily seem like the best choice for player one: if player one chose middle, there's this payoff of five for her right there. So you're thinking, why doesn't player one choose middle? You tell me: why? Yes, because it gives player two the opportunity to choose right. What's your name back there? Tiffany. Tiffany got it exactly right. The reason player one chooses left, which might seem like a mediocre choice (why settle for two for both of us when we could both have five?), is the one Tiffany gives us: if I'm player one and I choose middle, look what player two is going to do. Player two is going to double-cross me. Yes, we could both get five, but player two, I know what she's like, is going to choose R, we're going to end up here, and I'm going to get zero. I'd rather safely have my two. So the idea is that we never think the off-the-equilibrium-path part of the equilibrium is going to happen (we don't think we'll observe this choice or this one), but it's our understanding of why things go the way they do, why strategic considerations lead us to outcomes that don't seem so good. All right, I want to do one more example like this. Was there a question there? Okay, no; let's do one more example.
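Tiffany's double-cross logic is a one-line check under the same encoding as the sketch above:

```python
# If player one deviates to middle, player two best-replies R at that
# node, so player one gets 0 rather than the 5 she was hoping for.
middle = {"L": (5, 5), "R": (0, 6)}
two_reply = max(middle, key=lambda a: middle[a][1])
print(two_reply, middle[two_reply][0])  # R 0 -- worse than the 2 from left
```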
It's going to be another of these exciting left-middle-right games. This is going to make you guys, who might have been sick of the incumbent and challenger by the end of Tuesday, really happy to get back to that old example. Maybe not. While I'm cleaning up the board, just in your own notes, take a look at that game: in terms of the number of actions and the number of strategies, what was its equilibrium? Yes. The question is how we pick a baseline, since I've said a couple of times that the numbers are kind of arbitrary. What's your name? Kira. Kira is saying: if you say the numbers are arbitrary, how do we start on the homework? I've given you lots of numbers in the homework, so you don't have to make up as many as I do, but as you say, you do have to start with a baseline, and I recommend a baseline of zero. There's no reason not to use zero: when the problem talks about adding or subtracting units of utility, if you wanted to start at 75 you wouldn't get a wrong answer, but why not give yourself something easy? Was there another question? Yes, what's your name? The question is whether the equilibrium rollback strategy is the same as the equilibrium. Let me really split hairs here: the rollback equilibrium is the same as the equilibrium. Let me take a spot of board right here: "rollback equilibrium," which is what the book consistently calls it, is for right now the same thing as what we're calling equilibrium. It's also true that a rollback equilibrium strategy is the same as an equilibrium strategy. The hair I'm splitting is that when we say "equilibrium strategy," we mean a strategy that's part of the whole equilibrium; when we just say "equilibrium" or "rollback equilibrium," we mean a set of strategies. A minute ago I had all those strategies for player two, and only one of them was part of an equilibrium. What you'll find later in the course is that "rollback" names a process for finding an equilibrium. We'll find the same kind of equilibrium when we get to simultaneous games, but because the play there is simultaneous, we'll represent them differently than with a tree, and the process will be different from rollback. For right now: same thing. Other questions? All right. All I'm doing in this example is, not exactly switching the roles of player one and player two, I guess I am switching the actions available to them at their nodes. Now player one just gets to choose left or right, and once player one chooses, player two is the one who gets the three-way choice: left, middle, right; left, middle, right. A small change, but let's see how all of those issues work out. How many strategies does player one have? Two. How many strategies does player two have? Nine. Very good; you got that right away: three actions at this node multiplied by three actions at that node, nine different strategies. Let's put some payoffs in. I'll put the same payoffs in the same order, so I'll literally have the same set of terminal nodes as before, six possible paths the tree can take, but we'll get a different equilibrium, of course, because now player two's strategies are different. How many actions will player two have in each of her strategies? Three? Let's think about that; anybody want to amend how many actions there'll be? Two, right: now player two has just two decision nodes. And it actually doesn't take that long to list player two's full set of strategies here: I can do left left; left middle; left right; middle left; middle middle; middle right; right left; right middle; right right. There are all nine of them, and you see they have two components each. That's the full set of strategies. Now I'm going to put those same basic payoffs into this different tree structure: (2,2), (2,0), (5,5), (0,6), (2,2), and (1,3). This is such a nice pen, I don't want to lose the lid before it dries out. Solving the game backwards: what's player two going to do if we get to this node? And let me emphasize, that strange-looking thing there is a two, right. Five, that's a great payoff; I'm player two, I can't wait to get five. What is player two going to do at this node? Left: six is even better; I love this game when I'm player two. The strategic equivalents, then: this node, (5,5); this node, (0,6). I'm player one; what am I going to do? Left. If you were player one, would you rather play this game or the other one? You'd rather play this one, and it's kind of interesting, isn't it: in this game player one has fewer choices than she did in the other one (there she had a choice of left, middle, and right, and player two had fewer), yet in this strategic situation player one actually does better with fewer options, because it allows her to get to a choice that's better for her and better for player two. That's something that's going to happen in a lot of our games; it's an interesting and, I think, non-obvious aspect of strategic situations. If you just looked at the two games side by side without solving them, many people (myself included, at least before I learned game theory) would be tempted to say that player one always does better when she has more choices. How can you go wrong with more choices? Well, you can: more choices can go wrong if those choices put other players in a situation that could hurt both of you. We're going to look at some substantive examples with that very feature. Just finishing up the details here, to make sure we remember what the equilibrium is and what the equilibrium path is; I'm going to use blue, it's a better pen than my green one. When I'm looking for an equilibrium, I'm looking for a full strategy for each player, so I'll write my full equilibrium out in words this time, a little less abbreviated than before: player one plays L; player two plays R if L, L if R. Again: an equilibrium has a strategy for each player, and each strategy has an action for each node. Two players, two strategies; one node, one action in player one's strategy; two nodes, two actions in player two's strategy. Out of player two's nine possible strategies, one is part of an equilibrium; out of player one's two possible strategies, one is.
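Running the same rollback sketch on this second tree shows the "fewer options can help" point in numbers. Again the payoffs are the board's and the encoding is mine:

```python
# Now player one picks left/right and player two has the three-way choice.
tree2 = {
    "left":  {"L": (2, 2), "M": (2, 0), "R": (5, 5)},
    "right": {"L": (0, 6), "M": (2, 2), "R": (1, 3)},
}
reply = {n: max(o, key=lambda a: o[a][1]) for n, o in tree2.items()}
equiv = {n: tree2[n][reply[n]] for n in tree2}
move = max(equiv, key=lambda n: equiv[n][0])
print(reply)              # {'left': 'R', 'right': 'L'}
print(move, equiv[move])  # left (5, 5) -- player one nets 5 here versus 2 before
```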
The equilibrium path, again, is the part of each player's strategy that we actually expect to see. We see the equilibrium path in the tree by looking at the path of best actions that begins at the first decision node and goes all the way down to a terminal node: the set of connected actions here; this is the equilibrium path. The equilibrium path is also a combination of actions, but now it's an action for each player at each of their nodes. An equilibrium path can actually have more than one action for a player, if that player has more than one round of nodes: if this were a more complicated game where player one made a choice, player two made a choice, and then player one had another set of decision nodes, the equilibrium path would have two actions for player one. That's the equilibrium path in the tree; if we look for it in the strategies, it's this: the part of each player's strategy that we actually expect to observe. What's in the equilibrium but not in the equilibrium path is the off-the-equilibrium-path component. Sometimes we think of these off-path components as threats that never get exercised, bluffs that never get called, punishments that never need to be invoked, and game theory really draws our attention to these counterfactuals. If we want to understand why player one chooses left (it's not so puzzling with these payoffs, but if we want to understand it), we have to think about what would happen if she did something different. I can't say that often enough. Anybody want to say or ask anything else about left-middle-right? Yes, that's a very good question; what's your name? Caitlin. Caitlin's question is: if there's a problem like this on the test, can we just write down what's on the board, or should we do more writing? You should do more writing. On tests and on homework, err on the side of more writing. The way to think about this: I don't know if anybody's watching those webcasts of the lecture, but play the lecture with the sound off and see how much sense it makes. What you want to write is not quite all the words I'm speaking here, but enough of the verbiage going along with the symbols that somebody can follow the whole argument. I'm glad you asked, because I think sometimes people come into game theory thinking, oh, this is a class about numbers and variables, she really just wants me to do symbols. No: what I really want is for you to be able to use symbols and words together, to go back and forth between the numbers and the words, between the abstract representation and what it really means. Your homeworks and your exams should have both of those things. It's also a good idea on homework and exams to use color the way I do; if you add a little key saying "red denotes the solution of the game, green denotes the equilibrium path," that kind of thing can be very helpful. What you want in your homework answers and your exam answers is a coherent explanation of the question. Another way to remember the format is the idea of translating a real-world problem into game theory, doing the game theory work (which we now know how to do: find the equilibrium through rollback), and then translating it back into words. Both words and symbols are important.
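Pulling the path out of the equilibrium, in the same spirit as the sketches above (names mine, using the second game's equilibrium):

```python
# The equilibrium path strings together the on-path action of each
# player: player one's move, then player two's reply at the node that
# move leads to. Off-path parts of two's strategy are never reached.
eq = {"one": "left", "two": {"if left": "R", "if right": "L"}}
path = [eq["one"], eq["two"]["if " + eq["one"]]]
print(path)  # ['left', 'R'] -- the only part of the equilibrium we observe
```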
strategy it's not exactly um the way I would think of it is let's I'm going to see if I can on fly draw a helpful picture always risky but we'll try okay so we have actions and I'm going to do them like A1 A2 A3 A4 A5 A6 these are actions actions come together to form strategies okay so we'll call this S1 S2 and then strategies come together to form an equilibrium okay now there's something different going on in the way actions form strategies and the way strategies form equilibrium okay we add up actions to find a strategy all we need to do is make sure that the actions belong to the same player and that they correspond to the choices at her different noes for equilibrium it's a little more restrictive right it's not just a strategy for each player they have to be strategies that are best responses but again there is this idea of successively building up lilian's question is how do we fit equilibrium path in here it would be something like this okay so in the context of a specific game the actions always come right from the story The actions really are the choices that the players can make in the real world um the strategies come from the game tree okay once we've got the game tree then we know how complicated the strategies are for different players sometimes they're very simple as with the first movers here they get more complicated typically with the second Movers in finding the equilibrium we have to solve the whole game we have to find two strategies that are best responses to each other okay two strategies where given what you're doing I don't regret what I'm doing and given what I'm doing you don't regret what you're doing okay we neither one of us has a reason to change then once we're done with all that we've got the biggest possible thing we've got this combination of strategies one for each player each strategy is an action for each node the equilibrium path is kind of cutting up that equilibrium in a different way one thing about the equilibrium path that I think is relevant for Lillian's question is it is the part of each player's equilibrium strategy that we expect to see play okay so an equilibrium path is composed of strategy Parts okay and it would never be the parts of a strategy that add up to a whole one it's always parts of different players strategies okay so it has to the equilibrium path a way to think about that is every action we expect to see played okay okay all right I'm going to erase this well you guys are thinking if you want to ask any other stuff about that and if not if you're already sure that you don't want to ask anything about that start thinking about this go back to the fundraising game and while I'm erasing let me pose this question to you how could that game be better you guys think about this and when I'm done erasing give me some ideas how could that game be more interesting more realistic just how could you improve it so a lot of times just left and right Elaine's question I think if I'm understanding your question correctly is the first players's strategy is always just the same as her actions that's right that's exactly right in these sequential games the first player well let me let me add one qualification to that as long as the first player doesn't have another move that's true okay if it was a game um let's kind of turn our attention back to the fundraising game if it was a game where the I don't know something important is going on out there uh the if it was a game where the incumbent could raise funds or not the challenger raise funds 
Okay, all right, I'm going to erase this while you guys are thinking, if you want to ask any other stuff about that. And if not, if you're already sure that you don't want to ask anything about that, start thinking about this: go back to the fundraising game, and while I'm erasing let me pose this question to you. How could that game be better? You guys think about this, and when I'm done erasing, give me some ideas: how could that game be more interesting, more realistic, just how could you improve it? Elaine's question, if I'm understanding it correctly, is whether the first player's strategy is always just the same as her actions. That's right, that's exactly right; in these sequential games, as long as the first player doesn't have another move, that's true. Let me add one qualification to that, turning our attention back to the fundraising game. If it was a game where the incumbent could raise funds or not and the challenger could raise funds or not, and if both players raised funds, say, then the race would attract media attention, and only in that case would the incumbent have a choice to go negative or go positive; that's kind of an off-the-top-of-my-head example, but now the incumbent's strategy, even though she's the first mover, has two components, what to do here and what to do there. And depending on the payoffs, it's possible that we would never get to this node. It's possible that the first mover's equilibrium path could have just one action, because this node might not be on the equilibrium path; we might go down here. But it's also possible we could go down here, in which case the incumbent's equilibrium path would also have two components to it. So a lot of different possibilities there, and that by itself is one example of something we could do to change the game. I'm just adding the payoffs back in here, and while I'm doing this, make sure I'm doing this part right; we're getting into, you know, the Bermuda Triangle of class, where I start putting numbers in the wrong place. I think those payoffs are right, so there it is in all its glory. What are some ways to do it differently? Yes, what's your name? Paul. Paul says raising funds isn't always a sure thing. I'm assuming here that the players not only can control fundraising but also know it with certainty; I think that was sort of implicit there. And I'm going to highlight the business about the outcome not necessarily being a sure thing either: I'm the incumbent, I raise a lot of funds, and the challenger does too, but maybe in the course of the campaign something bad comes out about me, or maybe my party is doing badly in the electorate in general, and I might not necessarily get this outcome. So that's one concern about how much this game tells us about real congressional elections. Actually there are two concerns here. One is that maybe there are other players: donors, for example, might control how successful fundraising is, and voters might also control it, in the sense of how much people respond to the kind of activities that funds are used for. So other players, and outcomes not being certain. Other thoughts? What I would like to do is get a list of concerns about the game, a list of ways to improve it, and then think about which ones we could address just within game theory, by changing the assumptions of this game and setting up a different game, versus ones that would really take us outside of game theory. Paul's concerns are both ones that we could address within game theory, but there'll be, I think, some things in both categories. Some other thoughts? Yes, what's your name? Brandon. Brandon's question is who moves first, is it even clear? I would say even in this example, I made some sort of pitch for why it was clear, but you might think that in some cases, in open seats say, it would not be clear. Other thoughts? Yes: fundraising isn't really just a yes-or-no thing, it's raise a little, raise a lot, raise a humongous amount. The way to think about that in general is that fundraising has many choices, and if you wanted to go to an extreme that wouldn't be that unrealistic, fundraising could be a dollar amount.
You can specify how much fundraising to do down to the penny, and right there are a whole lot of actions. That's sounding a little hard to manage, but actually that is something we could manage. Other thoughts? Yes, what's your name? Steph. Steph's point is that other factors in the environment influence the second mover. What she's saying is that there are all sorts of reasons why the challenger might decide to raise funds or not, and the incumbent's strategy is only one component of that. That's a very good point, and I want to add to it that the incumbent might not be able to anticipate all those other factors. So one thing you might think of in this particular context: I'm a challenger, I'm running against an incumbent, and I'm not dumb. I know that my chances are not good; I understand politics, I went to UCLA, I took 141B and 140A and PS 32, and I know that it's not good to be a challenger. I'm not going to win this election, but that's not all I care about. I'm taking a real long view; I've learned to look forward from Professor Bawn, who tells me to think about all the consequences of my actions, and I'm even thinking about the next election. I'm thinking if I work really hard right now, people are going to notice. Maybe a lot of people won't, but maybe some donors will notice, maybe people in my party will notice, that even in a case where I wasn't very likely to win, I was able to raise a lot of money and did better than expected. Maybe I lost, but I got 35% of the vote in a district that had favored the other party; that would be something that I would care about. Now if that was known for sure, we could put it into the game. What would make it even harder, but we could still put into the game, would be if the incumbent's not sure whether I'm that type of person or not: whether I'm somebody who's going to raise funds no matter what because I'm thinking about my full career, or somebody who says no, if I'm not going to win this election, that's enough for me. Both of those things would make the game more realistic, and we could incorporate both of them into a game. What I want to do now is talk a little bit about how we might incorporate some of these different things into different games, how we might address some of these challenges, some of these elaborations, within game theory. Many of them have this issue of the players not being certain about things running through them: the outcome might not be a sure thing, it might not be clear which one of us is going to make the first move, and the very general version of Steph's point is that the players might not necessarily know each other's payoffs. In order to solve the game backwards, that was kind of the key thing: the first player had to know the second player well enough to predict her actions; otherwise the business of finding the strategic equivalent wouldn't be possible. If the incumbent up here is saying, well, I don't know whether I'd like to raise funds, because I have no idea how my challenger is going to respond to it, then it's harder for that incumbent to be strategic. This issue is one that I would say is right on the border of what we can solve within game theory. Probably by the end of next week, certainly by week four, we'll be talking about having uncertainty in a game: what to do when the players aren't sure about what their opponent wants, aren't sure about what their opponents can do, aren't sure about the consequences of their own actions. That's a ubiquitous feature of the world.
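As a preview of where that's going, here is a tiny sketch of one way uncertainty about the opponent could enter the incumbent's calculation. Everything in it, the belief p, the two challenger types, and the payoff numbers, is a hypothetical of mine, not the formal machinery we'll build in week four.

# Sketch: the incumbent is unsure of the challenger's type.
# With probability p the challenger is a "career builder" who raises funds
# no matter what; otherwise she plays an assumed short-run best response.
# All numbers here are made up for illustration.

p = 0.4  # the incumbent's belief (an assumption)

u_inc = {  # incumbent's payoff from (own action, challenger action)
    ("raise", "raise"): 3, ("raise", "pass"): 4,
    ("pass",  "raise"): 1, ("pass",  "pass"): 2,
}
short_run_response = {"raise": "pass", "pass": "raise"}  # hypothetical

for a in ["raise", "pass"]:
    expected = (p * u_inc[(a, "raise")]                         # career-builder type
                + (1 - p) * u_inc[(a, short_run_response[a])])  # short-run type
    print(f"incumbent plays {a}: expected payoff {expected:.2f}")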
We are going to find a way to put that into the game. What I will say is that the more uncertainty there is, the harder it gets to say something definitive about how players are going to make strategic choices; I guess that shouldn't be surprising. Okay, so with that, you guys are in good shape to do your homework problem. Go ahead and have fun with that, and we'll see you on Tuesday.
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_11_UCLA.txt
Okay, so the first thing we're going to do today is talk a little bit about games with more than two strategies, and the context that I want to put this in picks up from our last meeting on Tuesday: I'm showing you a set of related techniques for solving simultaneous games. The easiest technique is to look for dominant strategies. If you can solve a game by finding a dominant strategy for each player, you're done; that's it, that's great, that's the easiest way to do it. We saw at the end of class on Tuesday, though, that not all games have dominant strategies. In the last version of the roommate game that we looked at, we saw a case where one player had a dominant strategy and the other one didn't, and there we had to do a little more reasoning to solve the game. The reasoning was just: well, the player that has the dominant strategy will play it, and the other player will play her best response to that strategy. We're now going to keep pushing at that boundary, to cases where, in games with more than two strategies, there may be no dominant strategies but there will be dominated strategies; I'll go into the details on that in a minute. So that's going to be a slightly broader set of simultaneous games that we can solve by eliminating dominated strategies, but it's still going to leave us with a set of games that we won't be able to solve using the dominance relationship. What we're going to be doing by the end of class today is solving games by looking for Nash equilibria in games where neither player has a dominant or a dominated strategy. That's where we're going. First let's pick up this issue of dominated versus dominant. In the games that I was talking about on Tuesday, each player just had two strategies, so if one was a dominant strategy, if one was a better choice for the player no matter what the other player did, the other strategy had to be dominated; the other strategy had to be not the best choice no matter what the other player did. When you have a game with more than two strategies, it is possible that there'll be no dominant strategy, no strategy that is always your best choice no matter what the other players are doing, but that you might still have a dominated strategy, a strategy that is never your best choice no matter what the other players do. So our reasoning here is going to be: if you have a dominated strategy, a strategy that, no matter what your opponent does, is not going to be the thing you want to do, is never going to be the best response, we don't think players are going to play it. What I'm going to show you in a minute with an example is a way to solve a game through what's called iterated dominance, and a friendlier, clearer way to express the idea of iterated dominance is that we are just going to solve the game by eliminating dominated strategies.
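Here is that distinction as a quick sketch in code, with made-up payoff numbers: three strategies for one player, no dominant strategy among them, but one dominated strategy.

# Toy payoffs for one player: the keys are my strategies, the two list
# entries are my payoffs against the opponent's two possible choices.
payoffs = {
    "top":    [4, 1],
    "middle": [3, 3],
    "bottom": [2, 0],
}

def strictly_dominates(a, b):
    """True if strategy a beats strategy b against every opponent choice."""
    return all(x > y for x, y in zip(payoffs[a], payoffs[b]))

for s in payoffs:
    others = [t for t in payoffs if t != s]
    dominant = all(strictly_dominates(s, t) for t in others)
    dominated = any(strictly_dominates(t, s) for t in others)
    print(f"{s}: dominant={dominant}, dominated={dominated}")
# Neither top nor middle is dominant, but bottom is dominated:
# we don't expect it to be played.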
All right, so I think that's going to be easier to see in context. This game is about competition in a market; the story in this game is we've got two companies selling pizzas. What each company is trying to do is to get the most profit, and that's an important part of the game. The profit that they make depends both on the price they charge and how many pizzas they sell, and how many pizzas they sell is also affected by the price. So price kind of has both a direct and an indirect effect here: if I sell at the higher price, the higher revenue is going to increase my profit, but if my price is higher than my competitor's, some of my customers are going to go to the competitor. So we're going to set that up in a game, and this game is going to take a little bit of effort to set up. You guys are used to that now; a lot of the hard work in sequential games was actually in setting up the tree, and that's going to be true in setting up the matrix here. So what we've got with the pizza companies, I think I gave them interesting names like Pizza Company A and B, is that they both basically face the same situation. I'm going to make a little table here for price and profit per pizza. If they charge a high price, and I'm not even saying what it is, the profit per pizza is going to be 12. That's pretty good, I want to be in this business; it doesn't sound like the real restaurant business to me, but these are the numbers that I have. A medium price is going to give them a profit per pizza of 10, and a low price a profit per pizza of five. So that's the price and profit story. The other thing we need to know is what the demand for pizzas is: how many pizzas are people going to buy? We're going to think about this per week. Each company has 3,000 loyal customers. What does it mean to be a loyal customer? It means they like my pizza. If I'm Pizza A, they're not buying from Pizza B no matter how high my price is; they are loyal, they like how I do it, they're not going to buy from Pizza Company B. But there's also 4,000 floating customers. These are all college students; they're going to buy from the cheapest company. Maybe you guys aren't like that, maybe you have more sensitive palates, but when I was in college, and actually long since, I've been kind of a floating customer: pizza's pizza, I want it to be cheap. They will buy from the cheaper company. So what makes this a strategic situation now is that if I'm Pizza A, my profit depends on my price, but it also, through the floating customers, depends on my competitor's price. What I'd really like to do is have my price as high as possible, but the problem is if I have my price as high as possible, my competitor might come in and take all the floating demanders, and you know, 4,000, that's a lot of people; I'd like to be selling those pizzas to them. So what I'm going to do over here is a preliminary step; I'm not changing my mind about how I'm going to use the board, I'm going to have my main game here, I'm going to start to set it up first, then I'll go over here and do a little bit of extra calculation. This is a simultaneous game, and this, more than the examples I was doing on Tuesday, I think really is a simultaneous thing: the pizza companies print out their menus, do their flyers, simultaneously; when I'm doing my promotion for this week, where I'm letting my customers know what my price is, I don't know what my competitor's going to do, and vice versa. So we're going to have me, Pizza A, be the row player, and you guys, Pizza Company B, be the column player, and now in this game each of us has three strategies: the choice we make is what kind of price we charge, high, medium, or low. So here my game matrix now needs to be 3x3 instead of 2x2, because there's nine possible things that can happen, depending on my price and your price.
We can both choose high, we can both choose medium, we can both choose low, and there can be any other combination: I choose medium and you choose high, I choose medium and you choose low; all the cells are pictured here. This is the game matrix, and the cells will contain payoffs. In this game the payoffs are all about profit; this is a game that's all about profit. But to calculate the total profit, I need to figure out how many pizzas each company is going to sell in each of these combinations of pricing schemes. To do that, what I'm going to do over on this board in blue is not the game, just a step: I'm going to draw that same matrix, but what the cells show is not the payoffs but the number of pizzas sold. I'm doing this self-consciously as an example: when you have a simultaneous game where the relationship between the strategies, the outcomes, and the payoffs is complicated, sometimes you want to make yourself a separate matrix that doesn't have the payoffs in it but has some intermediate step. This is like the simultaneous version of writing the outcomes above the payoffs at the terminal nodes in a game tree. So this is the number of pizzas, and it depends on my price and on your price. One thing that can happen is we could set the same prices: we can both do high, we can both do medium, we can both do low. Now I didn't actually say over here what the floating customers do then, so I'm going to tell you now: they split evenly if the prices are the same. So I'm Pizza Company A, I'm setting a high price, and you guys are setting a high price too. How many pizzas do I sell during the week? I sell 3,000, and actually I sell a few more: I sell 3,000 to my loyal customers, and am I getting any of the floating customers? I'm getting 2,000. So I'm getting five here, and you guys are getting five too, and because we're just doing the number of pizzas here, the price level doesn't matter; the fives are going to be true in all three cases where we set the same price. These floating customers aren't paying attention to absolute price; they're not saying, well, we're not buying if the price is high. They're just buying from whoever sets the lower price, so they're going to get their pizza no matter what. What about here, where I set a medium price and you set a high price? How many total pizzas? I get 7,000: now my price is lower than yours, and all the floaters come to me, that's good. How many do you sell? They have to really love your pizza to buy from you when you're charging the high price and I've got the medium price, so you just get your 3,000 loyal customers. Same story here with the low price: if I'm setting the low price and you're setting the high price, I'm getting all the floating demand plus my loyal customers, and you're only getting your loyal customers. And it's true here too: if I set a low price and you set a medium one, my price is still lower. All they're reacting to is which one is lower, so all the floaters go to me and you only have your loyal customers. In these three cells the roles are reversed, but it's the same basic pattern: now you guys, Pizza B, are setting a medium price, I'm charging a high price, all the floaters go to you, you get seven and I get three, and the same story in all these cells. So this is our little helper matrix that's going to help us figure out what the real payoffs are. Neither one of us cares about selling pizza per se; we're in this business to make some money, and we want to make as much as we can.
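Because the mapping from prices to pizzas to profits has several steps, here is a sketch that rebuilds both the helper matrix and the profits from the numbers on the board: 3,000 loyal customers each, 4,000 floaters who split on a tie, and margins of 12, 10, and 5.

# Reconstructing the pizza game's quantities and profits from the story.
LOYAL, FLOATING = 3000, 4000
margin = {"high": 12, "medium": 10, "low": 5}
rank = {"high": 2, "medium": 1, "low": 0}  # higher rank = higher price

def quantity(my_price, their_price):
    """Pizzas I sell: my loyal customers, plus the floaters if I'm cheaper
    (split evenly if we tie)."""
    if rank[my_price] < rank[their_price]:
        return LOYAL + FLOATING
    if rank[my_price] == rank[their_price]:
        return LOYAL + FLOATING // 2
    return LOYAL

prices = ["high", "medium", "low"]
for mine in prices:
    for theirs in prices:
        q = quantity(mine, theirs)
        profit = q * margin[mine]
        print(f"A:{mine:6} B:{theirs:6}  A sells {q}, profit {profit}")
# Reproduces the board: 60,000 in the high/high cell, 70,000 when medium
# undercuts high, 36,000 for high against low, and so on.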
All right, so now, given that we know how many pizzas are going to be sold in each of the scenarios, we have to calculate the profits. So let's just go ahead and do that. In the upper square, where we're both charging the high prices, we're both selling 5,000, so our profit is 12 per pizza times 5,000 pizzas; that's a profit of 60,000 a week. Like I said, this is not a realistic example number-wise, but it makes it fun to think about these big numbers. In the medium cell we're both still selling 5,000 pizzas, but now our per-pizza profit is 10, so profits of 50, and 25 down here; I'm putting the profits in thousands. So in all of these cells it's 5,000 pizzas each times the per-pizza profit: 60 for the high price, 5,000 times 10 is 50 for the medium price, and 5,000 times 5 is 25 for the low price. Now filling out the other cells: here I'm charging a medium price, so I'm making $10 per pizza, but now I'm selling a lot, 7,000 pizzas, so I've got my 70 there, 7,000 pizzas times $10 per pizza. Down here I'm still selling 7,000 pizzas, but the per-pizza profit is only 5, so 35. And in this cell, and I'm just doing my own payoffs now, we'll fill in Pizza B's payoffs in a minute, this cell from my point of view is no different from this one: the only thing that's different is whether my competitor is really trying to gouge them with a high price or just setting the medium price. Either way my price is low, I'm getting all the floaters, I'm getting five per pizza and selling 7,000, so there's the 35 here. So what are you getting in these cells? You're selling 3,000 pizzas, just to your loyal customers, but you are setting the high price, so now, looking at the column player's number of pizzas when the column player's strategy is a high price, it's 3,000 pizzas times 12 per pizza, a payoff of 36, right, 12 times 3,000. And down here it's the same story; the only difference is in Pizza A's strategy. From the column player's point of view, the column player is still selling to all 3,000 loyal customers at the high price, so a payoff of 36 here too. What else can happen? The only other thing, and it's not going to be a mirror image here, is what happens when Pizza B sets a medium price and Pizza A sets a low price: the medium price gives me a profit of 10 per pizza and I'm selling 3,000, so I'm getting 30,000 profit there. So I've filled in this part of the matrix, and this game is one that's symmetric. You guys will remember from Tuesday that games don't have to be symmetric; sometimes one player's payoffs can be systematically different from the other's. But in this game, like the prisoners dilemma, the two players really want the same thing and face the same incentives, so the outcome when A sets a high price and B sets a medium price is just the reverse of what happens when A sets medium and B sets high. So A doing the high price, 3,000 pizzas at 12 per pizza, gives me 36, and this is 70. I'm just going to go ahead and reflect the payoffs here, but while I'm doing it I'm going to keep the story going in my head, because a very important thing in both setting up and solving simultaneous games is to get the numbers in the right places. I've got to make sure that I am accurately putting the numbers in the right place.
So 36 is the profit that you get from setting high when your competitor sets low: it's the column player's profit in this square and the row player's profit in that square. The profit you get from setting low when your opponent sets high is the column player's profit here, the 35. When I set medium and my opponent sets low, that is the same thing as when he sets medium and I set low; that is the 30, and from his point of view it's the mirror of the same cell. All right, so just as with sequential games, a lot of the work goes into the setting up, and especially this mapping from the combinations of strategies to how that affects the thing that we care about. That can have a lot of steps, and the finished game won't show all the intermediate steps, but it's a good idea to do those intermediate steps on a sheet of paper, so that if something weird happens you can go back and check your work, see if it all makes sense, and reproduce the argument. So here's my game, and let's go through the algorithm that I was developing on Tuesday. Let's look to see if anybody has a dominant strategy. I'm just going to start up here and look at it from the row player's point of view. Well, high is definitely not a dominant strategy, because right away we see that player A does better setting a medium price when the opponent sets high. So maybe medium's a dominant strategy: medium is a better choice when your opponent sets a high price, medium's a better choice when your opponent sets a medium price, but oh, look here, when your opponent sets a low price, high is actually better. So medium is not a dominant strategy either. For it to be a dominant strategy, this row would have to have the highest number in every column; it's got the highest number in this column and the highest number in that column, but not the highest number in this one. So not a dominant strategy, and low is also not a dominant strategy. So no single strategy gives the highest payoff in all columns, and the same goes for the column player. Next we'll look for dominated strategies. Yes: medium is always better than low. That is right, and that's germane to the next point that I'm making, but it doesn't make medium a dominant strategy. This is the thing that was the same on Tuesday: on Tuesday, if one strategy dominated the other, the dominating one was the dominant strategy, because there were only two. Now it's possible for medium to always be better than low, and in fact it's absolutely true that it is, while medium is still sometimes a worse choice than high. So medium dominates low, and so does high; high also always gives a higher payoff than low. What we have here is no single good strategy that we can just point to and say, oh, that's what they're going to go with, they're always going to go with medium, because there is actually a case where you could be sorry you went with medium: if your competitor sets a low price, you're not going to get any of the floaters anyway, you're going to wish you had more profit, so you're going to wish you had done high. What we can say, though, is that low is never the best choice. Low is never a good choice no matter what the other company does; you don't want to set a low price, it's never going to be the best thing. If you set a low price here, you're settling for 35 when you could have had 60 or 70. Here you're settling for 35 when you could have had 50 or 36, and granted, that's just one more, but on the other hand we're counting in thousands, right? You'd rather have the 36.
Here you're settling for 25 when you could have had 30 or 36. So low is a dominated strategy; in this case it is dominated by both of the other strategies, but all it would have to do is be dominated by one for us to say we're not going to play it. So that's what we're going to say, and look how I'm going to do this: I'm just going to cross it off. I'm doing what I said I was going to do, solving the game by crossing out, by eliminating, dominated strategies. I'm not setting a low price; it's never a good idea. And guess what: you look at it the same way, and it's never a good idea for you either. Yes, a question: does it have to be dominated by both? No, it doesn't have to be both, and let me really reemphasize that. If it was just medium that was always better than low, you could still go through the reasoning: you would never play low, medium would always be a better choice, so low just isn't relevant. And from the point of view of Pizza B, trying to strategize, wondering what I'm going to do as Pizza A, Pizza B is going to say, well, she's smart enough to know that low is never a good choice; I don't have to worry about her playing it. So we're going to take this dominated strategy out of the game for A, and we're going to take it out of the game for B, and then we're going to iterate; iterate just means repeat. Now we've got this smaller 2-by-2, half-solved game; we're not paying attention to this row or this column, we've eliminated them. Now, in the smaller 2-by-2 game, is there a dominant strategy? Medium is a dominant strategy now. With the dominated strategy removed, medium is always best; medium is the dominant strategy for both. The other way you could think about this is purely in terms of dominated strategies: in the smaller 2-by-2 game, with the low row and the low column eliminated, medium dominates high, high is dominated, so we cross it out here for the row player and here for the column player, and this is what we are left with. So this is our prediction, and we got here by eliminating dominated strategies, by iterated dominance. So that's pretty good; sometimes we can do that. But I've already set you up for this: we're not going to be able to do it all the time. What I'm going to do is start erasing from the outside in, and while I'm erasing, think about whether you have any questions about this three-strategy game, because if you don't, I'm going to go back to variations on the prisoners dilemmas that we were working on last time. So Elaine's asking whether this is sort of a general algorithm: look for dominant strategies; if there aren't dominant strategies, look for dominated strategies; cross out the dominated strategies; and then look at the smaller game and repeat the same process. Yes, that is a pattern that you can use. It's not a pattern that's going to work every time; it'll work for a few games that you can't solve with just dominant strategies, but whenever it works, it's always going to get you a valid prediction. And the logic is pretty transparent: if we think somebody has a dominated strategy, it's never their best choice, and if we're trying to out-strategize them, we're going to say, okay, they're as smart as me, they're never going to play their bad-choice strategy, and lo and behold, that simplifies my problem. Both of the pizza companies are going through that logic here.
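Here is the crossing-out procedure written as a sketch in code, run on the pizza profits in thousands; the generic elimination loop is my own framing of what we just did on the board.

# Iterated elimination of dominated strategies, sketched on the pizza game.
# Strategy order is high, medium, low.
A = [[60, 36, 36],   # row player's profit: A plays high   vs B high/medium/low
     [70, 50, 30],   #                      A plays medium
     [35, 35, 25]]   #                      A plays low
B = [[60, 70, 35],   # column player's profit in the same cells
     [36, 50, 35],
     [36, 30, 25]]

def dominated(payoff, mine, theirs, s):
    """Is strategy s strictly dominated by some other surviving strategy?"""
    return any(all(payoff[s2][t] > payoff[s][t] for t in theirs)
               for s2 in mine if s2 != s)

rows, cols = {0, 1, 2}, {0, 1, 2}
changed = True
while changed:
    changed = False
    for r in list(rows):
        if len(rows) > 1 and dominated(A, rows, cols, r):
            rows.discard(r); changed = True
    B_t = [list(col) for col in zip(*B)]  # B seen from the column player's side
    for c in list(cols):
        if len(cols) > 1 and dominated(B_t, cols, rows, c):
            cols.discard(c); changed = True

names = ["high", "medium", "low"]
print("A keeps:", [names[r] for r in rows], " B keeps:", [names[c] for c in cols])
# Both end up keeping only "medium": the iterated-dominance prediction.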
So what we're going to go back to now is the room cleaning story. We had two variations of that, which I'm going to flash through real fast here, and then we're going to go to a new variation. We've got roommate A and roommate B, the strategies being leave your mess or clean your room for each. The very first set of payoffs we had, and I'm just copying them right back in from Tuesday, made the game a prisoners dilemma. It made the game a prisoners dilemma, just to recall what we were doing at the end of class on Tuesday, because the payoffs fit that prisoners dilemma pattern: there is a temptation payoff, which is the highest one; it's larger than the reward payoff that the players could get if they both did the socially optimal thing; the reward payoff is higher than the punishment payoff, which is what we think they're actually going to get; and the punishment payoff is higher than the sucker payoff. So the problem with the prisoners dilemma is these extreme values, when one player does the socially optimal thing and the other one does not. The other way to say that: these extreme payoffs come when one player plays their dominant strategy and the other one doesn't. When that happens, the player that played the dominant strategy gets this tempting payoff, they really hit the jackpot, and the one that didn't play the dominant strategy gets the sucker payoff. That is what drives us to end up at this inferior cell, where we both play our dominant strategies even though we'd both rather be here. So, still reviewing, what we did on Tuesday was think about what happens when one player's payoffs don't fit the prisoners dilemma story. We didn't call it iterated dominance then, but actually that was the process that we were using. When the column player has a payoff here that is lower than this one, what it means in words is that from the column player's point of view, if her roommate cleans up, she prefers to have cleaned up too; she gets a higher payoff in this row from cleaning up than she does from leaving a mess. So this is B with a conscience. In solving this game we still used dominance: we basically said B doesn't have a dominant strategy, but A does, and A is not going to play her dominated strategy. That would be, in effect, eliminating this row, being left with a smaller game for B, and in this smaller game, with just one row, B plays her best response. So, iterated dominance, and the same unhappy story: just changing one roommate's preferences wasn't enough. Now I'm going to change them both, and this really does change the game. It changes it from something that we could solve with iterated dominance to one where we just have to look for Nash equilibria, and that's what we're going to do now. So what I've done now is change both roommates' preferences, so they both kind of have this conscience happening; both of them prefer to clean up, column player here, row player here, when their roommate has cleaned up. If their roommate's left a mess, they're still not that sweet: in this column, B has left a mess, and I'm A, so I'd rather leave a mess too, I'm not going to be a sucker. But I don't have that temptation anymore; that's what we've changed here. This is a happier game. Look at this cell, the 3,3 cell, the reward cell. Say we're here.
If we are here, the question to ask is: does either player have regrets? Given that B cleaned up, is A sorry she cleaned up? No, she's glad she did; A's glad to have the three rather than the two. And given that A cleaned up, B's glad she did too; she'd rather have the three than the two here. When the answer is no, that means these strategies, this row and this column, form a Nash equilibrium. I've said a couple of times throughout the class that the idea of Nash equilibrium is really the same as the rollback idea; it was actually developed in the context of simultaneous games, and I guess that's why we use Nash's name more in that context, but it's not wrong to call a rollback equilibrium a Nash equilibrium. Let me just put the definition of Nash equilibrium up here, and you're going to recognize that it looks pretty much like the rollback one. A Nash equilibrium is a set of strategies, one for each player (sounds like rollback, right?), such that no player can improve their payoff by deviating from their strategy, given the other players' strategies. So our strategies are in equilibrium if, given what you did, I can't do better by changing what I did, and given what I did, you can't do better by changing what you did. To repeat from Tuesday, the phrases that should come to your mind, that you should associate with Nash equilibrium: one is this idea that we're playing best responses to each other; given what you did, my choice was the best response, and given what I did, your choice was the best response. Our choices are self-reinforcing; nobody has regrets. So this cell is indeed a Nash equilibrium, and how are we finding it? We're not eliminating dominated strategies, and we're not picking dominant strategies, because there aren't any in this game. What we are doing is just looking at each cell and asking ourselves if it's a Nash equilibrium, or at least we're in the process of doing that. That process, Dixit and Skeath call it cell-by-cell inspection, and that's about what it is: we're just going to look at each cell and ask ourselves, if we find ourselves doing this, are we in equilibrium, or does one of us have regrets? If neither of us has regrets, then it's a Nash equilibrium: given what the row player did, the column player couldn't have improved her payoff, and given what the column player did, the row player couldn't have improved her payoff. That's an equilibrium. So that's a happy story; now the apartment's clean. All I had to do was get rid of that temptation, and now if we both clean, we're both happy. But, and there's always a but, it's not the only Nash equilibrium, and this is something that we didn't have to deal with in the sequential games we covered. There's another Nash equilibrium: look up here. If we're in this cell, and you left a mess, you're the column player, then I, the row player, am glad I left a mess too, because I don't like feeling like a sucker; and given that I left a mess, you're glad that you did too, because you don't like feeling like a sucker either. So look at this: here's another Nash equilibrium up here. This game has two Nash equilibria, two combinations of strategies that are self-reinforcing, that are best responses to each other, that leave both players without regrets. And game theory cannot tell us which one of these is more likely to happen. Game theory could tell us that this cell over here is not an equilibrium: if this happens to me, I'm the row player, I'm going to change, I don't like this outcome; and if this happens to you, you're the column player, you're going to change, because you don't like feeling like a sucker either.
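Cell-by-cell inspection is easy to write down as code. Here is a sketch using the assurance-game payoffs as I've transcribed them (reward 3, temptation lowered to 2, punishment 1, sucker 0); it turns up both equilibria.

# Cell-by-cell inspection on the assurance game.
# payoff[(row strategy, column strategy)] = (row payoff, column payoff)
strategies = ["clean", "mess"]
payoff = {
    ("clean", "clean"): (3, 3),
    ("clean", "mess"):  (0, 2),
    ("mess",  "clean"): (2, 0),
    ("mess",  "mess"):  (1, 1),
}

def nash_equilibria(payoff, strategies):
    found = []
    for r in strategies:
        for c in strategies:
            # no regrets: neither player can do better by switching alone
            row_ok = all(payoff[(r2, c)][0] <= payoff[(r, c)][0] for r2 in strategies)
            col_ok = all(payoff[(r, c2)][1] <= payoff[(r, c)][1] for c2 in strategies)
            if row_ok and col_ok:
                found.append((r, c))
    return found

print(nash_equilibria(payoff, strategies))
# [('clean', 'clean'), ('mess', 'mess')]: two equilibria, and game theory
# alone doesn't say which one we'll be at.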
Either of these patterns can be self-reinforcing, and to arrive at a prediction of which one will happen, we're going to have to go outside of game theory. We're going to have to think about what the players expect of each other, what they've done in the past, what kind of culture they're from, what their history is like. So this point, where we have to confront the fact that there can be more than one Nash equilibrium in a game, is the point where the kind of abstract, mathematically oriented game theory really hooks up with historical and anthropological kinds of knowledge, the kinds that let us know, in a particular context, which one of the two equilibria is more salient. In these situations where there's more than one equilibrium, we think that things like communication and leadership are very important. After class, it was Lillian who was talking to me about the prisoners dilemma scenario, and she was bothered, right, you were bothered, I just couldn't find you for a minute, by this extreme inefficiency here, and made the very good observation that if these guys are roommates, couldn't they just explain: we'd both be happier if we had a clean apartment? That kind of explanation would be much more likely to work, and to be self-reinforcing, in a situation where we weren't being confronted with the temptation here. So games like this, that are sort of like the prisoners dilemma except that there's this other equilibrium, are games where, as I said before, communication and leadership can make a huge difference in taking us from a bad equilibrium to a good equilibrium. This kind of game is so important, its relationship to the prisoners dilemma is so important, that it actually has more than one name. Most often this game is called an assurance game; I guess the idea is that if you're assured that your partner is going to be playing the strategy that goes with the good equilibrium, you want to do it too. Another term that you will hear is stag hunt, which I think is from the writings of Rousseau. Some great Enlightenment thinker, even before game theory was invented, put his finger on this problem: if we're hunting a stag, there are lots of incentives to all go off our own way and try to get the stag for ourselves, but guess what, it's probably not going to work out; it takes a bunch of people to bring down a stag, and we're all better off if we work together. So stag hunt is another name for this game. Some of the most interesting applications of game theory to historical events have involved looking at this assurance game. I just said a minute ago that in this game, having a leader, someone who will stand up and say, hey everybody, we expect to be here, this is the equilibrium we all want to be at, let's do it, we'll all be better off, those words will be meaningful in a context where, if we all get there, we'll be glad we're there. It'll sound more like empty rhetoric if we think, oh, when we all get there, we're all going to be looking for some way to cheat, and it's probably going to fall apart right away. There's a famous book about the civil rights movement by Dennis Chong, called Collective Action and the Civil Rights Movement, and it goes through the history of the civil rights movement as it developed in the South. He especially focuses on the role of black leaders in the churches.
Basically, what he says they did, and they weren't thinking in terms of an assurance game, they were just thinking in terms of solving a problem, was get people to participate in marches and sit-ins and things like that even when there was a pretty strong incentive to stay home. The incentives were pretty strong in those days: you could get beaten up, jailed, even lynched for participating. But what they were able to do was to turn some of these protest events into something where, if everybody you knew, all the people in your community, were part of the protest, you wanted to be there too, you wanted to be part of it. People would notice you were missing; people would think well of you if you were there; they would think you were a cheater if you weren't there, or that you weren't part of this historic movement. And that is, I think, maybe what Lillian was thinking of with the roommates: that's a case of leaders first changing preferences, so we have the assurance game, not the prisoners dilemma, and then second, making sure that everybody knows that this is the equilibrium we want to be at, that everybody understands this. Leaders in all sorts of cases, in Congress, in companies, in academia, are people who are able to let the followers know that they're in this situation: hey followers, if everybody else is participating in this good thing that's happening, you want to be there too, you want the three; if you're the one who stays home and does what seems to be the thing in your own self-interest, you're going to be sorry. That kind of communication is really, really helpful. And at the same time, I don't know if this will ring true to you, a lot of the people I know who are good leaders are embarrassingly optimistic, you know, really, it's going to work, it's going to happen, rah rah rah. How can they be that way? That rah-rah rhetoric is telling everybody: this is the equilibrium we're going to be at. So it can be really very useful. An example from my life where that hasn't worked out so well: last year there was a fundraiser at my kids' school, and I'm sure you guys remember these kinds of things, whether you went to public or private schools; schools are always raising funds, and they're always trying to get as much participation as possible. This is an example of what I would consider bad leadership strategy. The mom who was running some kind of silent auction night tried to use the guilt strategy: look, nobody's helping me with this, the whole thing's not going to work. And it didn't work. What tends to help is: look, everybody's participating, don't you want to be at the silent auction, we're all going to do it. Her rhetoric of hey, I really, really need your help, we've only sold five tickets, the whole thing's not going to happen, what that did was let everybody know we're at this equilibrium: we could be at the good equilibrium, but we're actually at the equilibrium where nobody's going to show up. So, you guys, UCLA students, many of you are current and future leaders; maybe that's something to think about there, or maybe I'm telling you something you already know. Okay, I want to say one more thing about this issue of multiple equilibria in one game, and what I want to do is distinguish multiple equilibria in a single game from the kind of thing we were doing before the midterm, where we would have variable payoffs and there would be more than one case of equilibria in the set of games represented with a variable payoff.
So what I'm going to do now is make this payoff a variable; we'll call it T, for temptation. Now we're in the world of variables in payoffs, and in thinking about what's going to happen, we're probably going to have to have cases, right? So if we start down the path of looking for dominant strategies, we ask ourselves, in this game where T can be any number, is there a dominant strategy? Well, the answer is going to be: it depends on how big T is, and there are going to be two cases. From the row player's perspective, or the column player's perspective, what matters is whether T is bigger than or less than three; we're going to continue to ignore the knife-edge cases here. So one case is T greater than three. If T is greater than three, is there a dominant strategy? Yes. If T is greater than three, the game's a prisoners dilemma, right? I'm A, and I'd rather leave the mess when you clean up, and I'd rather leave the mess when you leave the mess. In this case the only Nash equilibrium is mess-mess. Second case: T less than three. Dominant strategy now? No, right, this is the case we were just looking at. If T is less than three, it's the assurance game: what I want to do depends on what you're going to do. If T is less than three and you clean up, I'd rather clean up too, but if you leave a mess, I'd rather leave a mess. In this case there are two Nash equilibria: for T less than three, both clean is an equilibrium, and both leave a mess is an equilibrium. I picked three because that's the comparison we're making, and actually, let me highlight that, since that's really the important thing you need to be able to do. In asking myself whether there was a dominant strategy, I know that one is greater than zero, no worries there, no variables, so I know that if we're in this column, I want to be in the mess row. The question is, if we're in the clean column, whether T is greater than three or less than three. When T is greater than three, then mess is the better choice no matter what the roommate does. So just as in sequential games, the cut points, the values of the variable that divide it into cases, are something we get from solving the game. We got it here by comparing the row player's payoff from leaving a mess when her roommate cleans to what she gets from cleaning when her roommate cleans, and figuring out which one is higher; three was the critical value. The point I wanted to emphasize here, though, is that this is what we mean when we say multiple equilibria: case two is a case where there's more than one Nash equilibrium. Case one is a game where we would say there's a single Nash equilibrium; sometimes people use the word unique here. So multiple equilibria is not the same thing as what we get when there's a family of games with different equilibria depending on different parameter values. In all of those sequential games where we had multiple cases, three cases, I can't remember if we actually did one with four cases, but however many cases you have in those kinds of sequential games, in any given case, once we know the value of the variable, there was only one rollback equilibrium.
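That case analysis is easy to check with the same cell-by-cell inspection, treating the temptation payoff as a variable. A sketch, reusing the Nash-finding routine from above on one value of T in each case:

# The temptation payoff as a variable: one pure Nash equilibrium when T > 3,
# two when T < 3 (knife-edge T = 3 ignored, as in lecture).
def nash_equilibria(payoff, strategies):
    found = []
    for r in strategies:
        for c in strategies:
            row_ok = all(payoff[(r2, c)][0] <= payoff[(r, c)][0] for r2 in strategies)
            col_ok = all(payoff[(r, c2)][1] <= payoff[(r, c)][1] for c2 in strategies)
            if row_ok and col_ok:
                found.append((r, c))
    return found

strategies = ["clean", "mess"]
for T in [2, 4]:  # one sample value in each case
    payoff = {
        ("clean", "clean"): (3, 3),
        ("clean", "mess"):  (0, T),
        ("mess",  "clean"): (T, 0),
        ("mess",  "mess"):  (1, 1),
    }
    print(f"T = {T}:", nash_equilibria(payoff, strategies))
# T = 2: [('clean', 'clean'), ('mess', 'mess')]   (assurance game)
# T = 4: [('mess', 'mess')]                       (prisoners dilemma)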
Here we have something weirder happening. Yes, what happens depends on the value of the variable, but here, even for a very specific value of the variable, the value two I had, for example, there's more than one equilibrium. So that's stranger, and it can be a little bit less satisfying. I mean, so far, yes, we've had to make a lot of assumptions, we've had to simplify our scenarios a lot, but we've been getting very clear predictions about what we think people are going to do. Here, no, not in the case where there's more than one Nash equilibrium. We can rule out some things that we don't think are equilibria, but game theory alone can't tell us which one is more focal; I'll even put that word up. When there are multiple equilibria and we ask which equilibrium will occur, well, I said the answer doesn't come from game theory, the answer comes from outside game theory, but game theory at least has a name for it: we just say sometimes one or another is focal. Whichever one people think will happen will happen. Things that go into making one equilibrium or another focal are what's happened before, what the players think is normal, what happens in similar situations; that sort of drives you to think about the culture that the players come from, but these are all things that are outside the world of strategy, things that establish focality. Sometimes you might look at this game and say, doesn't the fact that this equilibrium is better for both players by itself make it focal? You kind of think that it should, right, and sometimes that's true; I've sort of said a couple of times that the argument that both cleaning is better for both players might actually be persuasive here. But you can think of cases where one equilibrium is Pareto dominated by the other one, and yet it persists. The most famous case is the one called the hockey helmets case. You know, these days hockey players all wear helmets, but at the time people were sorting this out, professional hockey players, and most amateurs too, didn't like to wear helmets: it hurts your vision, you really see much better if you don't have that big thing around your head. So they were all at this equilibrium where nobody was wearing helmets, and a lot of very serious head injuries were happening in hockey, you know, a very fast sport, hard pucks. Eventually the leagues, both professional and amateur, created rules to force people to wear helmets, so that now this is the focal equilibrium, and I don't think players really want to go back. Hockey players are tough human beings, but I think they're willing, mostly, to wear those helmets. The fact is, though, for years and years hockey was played without helmets; they really were at the equilibrium that was Pareto dominated. I've heard people make a similar argument about drug testing these days in sports, like the Tour de France and others, where maybe all the players would really like to be at the equilibrium where nobody's doping, but maybe not. The only sport in that category that I follow is cycling, and my sense there is that in cycling the problem is that it really is a prisoners dilemma: those guys really do want to win so much that they'll happily take anything their trainer tells them to. One last game to look at, and it's going to sort of tie up the discussion of focal points; it seems like a good game to work in on Valentine's Day: the Battle of the Sexes game.
So this is another classic game; these games that have names usually achieve that status by being metaphors for situations that occur over and over again. So what's Battle of the Sexes? In Battle of the Sexes the players are a guy and a girl, him and her. In this game we're going to let her control the rows, and he is going to get to choose the columns, and both of them are making a choice simultaneously, they're not allowed to talk to each other and no one can move first, about where they're going to go. There are two choices: they can go to the ballet or they can go to the ball game. And, skirting the edges of being politically incorrect here, well, honestly, I'm not a huge fan of ballet, but ball games? Please; I'd rather go to the ballet. And my husband's not a particularly macho guy, I don't think he really likes ball games all that much, but he sure doesn't want to go to the ballet. I think there's a pattern there: the girl would like to go to the ballet, the guy would like to go to the ball game, but they like each other, and they'd really like to be together. So here are the payoffs. If we both end up at the ballet, from the girl's point of view, I like that; that's my high payoff of two. The guy is like, okay, I'm not really into this, but I like sitting with her, she's happy, this is good; he's getting one. From the other point of view, if they both end up at the ball game, the guy's like, oh yeah, I like this, and the girl is, well, I still think he's cute, I've got my iPod, I'm going to be okay; I'm getting one here. So those are the on-diagonal cells; that's the sense in which they want to be together. When people talk about coordination games, this is the story that they have in mind, the Battle of the Sexes. But what else can happen? Well, one thing that can happen is I go to the ballet, I'm the girl, and the guy goes to the ball game, and you know, I'm enjoying the ballet, that's pretty good, and he likes the ball game; it's kind of lonely, but we both had interesting afternoons. The worst thing is, I'm the girl, I go to the ball game, and I'm here by myself; it's so boring, I'm getting a sunburn, I hate it. And let's just not even talk about how the guy feels alone at the ballet; this is so not him. Those are really bad payoffs. So that's the Battle of the Sexes story, and this is a game where I think it's probably pretty easy to see there are two Nash equilibria. Here's one Nash equilibrium, and how do we see it? We see it because, I'm the girl, I'm the row player, and given that he's at the ballet, you're darn tootin' I'm glad I'm there; and if I'm the guy, the column player, given that she's at the ballet, it turns out I care more about her than I care about sports, wow, how's that. So this is one Nash equilibrium, but so is this one; again, I set it up to have this kind of symmetry. From the girl's point of view, given that he went to the ball game, I like him, so I'm glad I'm at the ball game too, and from his point of view, no question: I'm at the ball game, she's at the ball game, no regrets. What is interesting about the Battle of the Sexes, and why this is, well, I was going to say this is the game that's used the most in political science, and that's not quite true, prisoners dilemma and assurance are used a lot too, but this is a very political game, is that there are two Nash equilibria and, in contrast to the assurance game, where we both liked one better, here each player likes one equilibrium better than the other.
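Run through the same inspection, the Battle of the Sexes shows its two equilibria. The diagonal payoffs (2 and 1) are the ones from the board; the off-diagonal numbers are placeholders of mine, consistent with the story, since only their ranking matters here.

# Battle of the Sexes: her strategies are rows, his are columns.
# payoff[(her move, his move)] = (her payoff, his payoff)
moves = ["ballet", "ballgame"]
payoff = {
    ("ballet",   "ballet"):   (2, 1),
    ("ballet",   "ballgame"): (0, 0),    # apart: placeholder values
    ("ballgame", "ballet"):   (-1, -1),  # apart, each in the wrong place
    ("ballgame", "ballgame"): (1, 2),
}

equilibria = []
for h in moves:
    for m in moves:
        her_ok = all(payoff[(h2, m)][0] <= payoff[(h, m)][0] for h2 in moves)
        his_ok = all(payoff[(h, m2)][1] <= payoff[(h, m)][1] for m2 in moves)
        if her_ok and his_ok:
            equilibria.append((h, m))
print(equilibria)
# [('ballet', 'ballet'), ('ballgame', 'ballgame')]: two equilibria,
# and each player prefers a different one.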
So there are two Nash equilibria, and each player prefers a different one. One of the most common applications of this Battle of the Sexes logic is primary season, where we're at right now, choosing a presidential nominee. If this is the Democrats, Clinton versus Obama: some people prefer Clinton, some people prefer Obama, but Democrats in general would prefer that we all coordinate sooner rather than later. That's not happening so well; it's happening better on the Republican side right now, even though, interestingly, I think the divisions on the Republican side are stronger, and the people who liked Romney in particular really aren't very happy about McCain. But if you've been following the op-eds since Super Tuesday, one high-profile conservative after another has kind of said, you know, if McCain's going to be the nominee... So say I'm William Kristol, and I'm in the situation of the girl who sees that the guy's going to go to the ball game: the nominee is going to be somebody I don't want, but I'd still rather be there than have a divided party go into the general election at a disadvantage. So yeah, I really like this game because of that feature. Game theory by itself is not going to tell us which equilibrium we're going to be at; maybe that's a little too strong, this game by itself is not going to tell us which equilibrium we're going to be at. If you think about strategy during the primary season, there's a lot of strategizing about trying to control expectations, a lot of rhetoric about bandwagoning and momentum and things like that. What people are trying to do by asserting that their candidate has momentum is to make their preferred outcome become the more focal one, and they're both trying to do it; that's the fun part of it. Okay, so we're good; we covered a lot of ground today. Actually, I think this is the first time I got to the end of my outline. On Tuesday we're going to start mixed strategies, and I'm not remembering right now what chapter that is in Dixit and Skeath, but if you have time over the weekend, since you don't have a homework set, I would look at the chapter on mixed strategies.
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_15_UCLA.txt
Okay, so today we are going to start our last main topic in Poli Sci 30. It may be the last topic of any size, depending on how things go, and it will be the last topic that you will be responsible for on the exam. If we're running ahead in tenth week, I will probably spend some of our last lecture talking about strategic voting; some years we get that far, some years we don't. But I am going to spend at least the next three, maybe the next four, class periods talking about repeated games. What I'm going to do is pick up a thread that we started the day we talked about the prisoners dilemma. We went through the prisoners dilemma and observed what a sad story it is indeed, when people acting in their own self-interest can arrive at a collective outcome that's worse for all of them. Then, in talking about whether people behave that way in real life, one of the strongest objections, or counterarguments, that was made to the idea of prisoners dilemma outcomes actually occurring was that people don't just think about the current game that they're in. If you're one of those prisoners, and you're thinking about whether you want to defect or cooperate, whether you want to give evidence against your buddy or stay silent, you might not just be thinking about the incentives that the DA is giving you, but also about incentives farther down the line: the possibility of your partner's brother beating you up, killing you; various possibilities along that line were floated. And I said then, hold on to that, that's what we're going to talk about at the end of the course. Here we are; that's what we're going to talk about today: repeated games. So let me say right off the bat what I mean by a repeated game, because we've already talked about sequential games, where time is passing: first one thing happens, then another thing happens, and even though we didn't get into really long game trees with lots and lots of sequential moves, there are some in the book, and you can imagine those things happening. What makes a game a repeated game is that there are payoffs and then there are further strategic moves. If we have a game that just has a lot of moves to get to the payoff, that's still what we would call a single-shot game. A repeated game is a game where we choose our strategies, the outcome ensues, we get our payoffs, and then we interact again. That's what makes it repeated, and that's worth putting on the board: a distinction between what is properly called a stage game, or a single-shot game, and a repeated game. Everything we've done so far has been a single-shot game; once we get our payoffs, we're done thinking about it. In a repeated game, we get our payoffs, we really do get payoffs, but then there is another set of choices to make, another set of strategies. And what this allows is the possibility that I get my payoffs, I see what you did, and I can condition how I'm interacting with you in our second round of interaction; I can use how you acted in the past to figure out how I'm going to act in the present. The same logic goes forward as well: when I'm deciding how I'm going to interact with you in the present, I'm going to think about whether you are going to retaliate against me in the future. I'll get some payoffs right now from betraying you, but are you going to have an opportunity to hurt me later? That's what we're going to think about.
Okay, the overall narrative here. The game we're going to focus on when we're doing repeated games, the stage game we're going to pay attention to because there's a lot to look at, is the prisoner's dilemma. That's not true of game theory in general: you can repeat any game, and there are interesting repeated games with different stage games. But the game we're going to think about repeating is the prisoner's dilemma, and the question we're going to ask is whether the shadow of the future (a cliche; I didn't make it up, but it's an evocative one), the idea that you will reward me in the future or retaliate against me in the future, can change my behavior today. Can the idea that my partner could reward me for cooperation or punish me for defection sometime in the future overcome the dominant strategy I have to defect in the short run? That's the problem with the prisoner's dilemma: it's a dominant strategy for me to defect and a dominant strategy for you to defect, and there's nothing you can do in a single-shot stage game to overcome that dominant strategy. But once we start to repeat it, you might wonder whether we can get cooperation happening in a Nash equilibrium. The answer is going to be: yes, sometimes. So what we're going to do today is, first, I'm going to show you a set of cases where the shadow of the future isn't enough, where you might think the shadow of the future would get players to cooperate in a prisoner's dilemma, but it won't work. We're going to have bad news first today; then we're going to start looking for good news in a different situation. That's the overall narrative. The prisoner's dilemma example I'm going to start with is an example of teamwork on some kind of project. You could think about this as projects you would do together as students at UCLA, and I think that's the example I'm going to use, but it would also work if you thought of it as projects in any kind of workplace. The quarter system at UCLA is going to make working together as students a good example for us here. So we're going to think about Bruin A and Bruin B, and they are working together over three quarters. They're working on a group project that will get them a grade fall quarter, winter quarter, and spring quarter, and then they're both graduating. So what we're thinking about is repeated interaction with a very clear time horizon. Both of these guys know that at the end of spring quarter they're going elsewhere and they will never see each other again, and that definiteness, the part where they're never going to see each other again, is going to be key to this example. Let's say one of them knows very much that they want to go to the East Coast, and the other is certain to stay in California. With that background, what I'm going to do now is put up the stage game, and it's going to be repeated three times. Since this is ugly jargon, let me just emphasize: the stage game is whatever game is repeated. So the stage game here is repeated three times. Each person has a strategy that involves slacking off; it's going to have the same basic structure as cleaning the apartment. All these prisoner's dilemmas have a common logic; that's why they're all prisoner's dilemmas. Bruin A can slack off or work hard; Bruin B, same story: slack off or work hard.
If they both work hard, it's a good project: they get payoffs of three. If they both slack off, it's very bad: they slap something together at the last minute, they get bad grades, and neither of them likes it. Say I'm Bruin A. If I work hard and you slack off, my payoff is -2: the project is not as good as it would have been if you had helped, and I worked really hard on it, I spent a lot of time. So slacking off is better for me than working hard. From your point of view, if you slacked off, maybe we didn't get an A, but we got a B+, your law school applications have already gone in, you think that's fine, you did well in your other classes: this is your best outcome. So it's a standard prisoner's dilemma example, and when the roles are reversed, the payoffs are reversed. As in all prisoner's dilemmas, I set it up so that both players have a dominant strategy to slack off: zero is better than -2, five is better than three, same story for the other player. Touching base with a question I've had in my office: do you have to look for dominant strategies in the prisoner's dilemma? No. If you just do cell-by-cell inspection and look for the Nash equilibria, you'll see that slack off, slack off is the only Nash equilibrium. It is indeed a Nash equilibrium because both players would rather have zero than -2. The cells where one works and one slacks can't be Nash equilibria for the same reason: you don't want the -2, you'd rather have the zero. And the work hard, work hard cell is not a Nash equilibrium because, if we end up there, each player would prefer to have slacked off. So: a single Nash equilibrium in the single-shot game, and the stage-game Nash equilibrium is slack off, slack off. But now let's try out this shadow-of-the-future idea. You've got three quarters to work together. If you could both work hard this quarter, you'd both be better off, and in the next quarter you could say: yes, and since you worked hard last time, I'm going to work hard this time and we'll get three again; and in spring quarter, if you've worked hard both quarters, I'm going to work hard in the spring and we'll get three in spring quarter too. You could be saying that, and your partner could be seeing the same logic and nodding their head. That's the idea behind the shadow of the future implementing cooperation. How does that sound as a strategy? Sounds good, right? Elaine has a question. Elaine is asking whether there's one Nash equilibrium that tells you the dominant strategy of each player. That is true in the prisoner's dilemma; it's not necessarily true in general. The generalization of the point I was making is this: if you're looking for Nash equilibria, one thing you can do is look to see whether players have dominant strategies, and if they both have dominant strategies, that will be the Nash equilibrium. But you can also forget about dominant strategies and just look cell by cell to find the Nash equilibria. So the take-home message is: you will never go wrong with cell-by-cell inspection. And for a lot of the two-by-two games we're focusing on here, looking at each cell in turn for the Nash equilibrium is often just as easy as looking for a dominant strategy. Where dominant strategies or iterated dominance become a better way to go is when you have a really big payoff matrix and going through every cell gets really tedious. So cell-by-cell inspection will give you the Nash equilibria; it won't tell you whether they're based on dominant strategies, but in a lot of cases you don't care about that.
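Here is a short sketch of the cell-by-cell inspection just described, applied to the slack-off/work-hard game; the payoff numbers are the ones from the lecture, written as (Bruin A, Bruin B).

```python
# Cell-by-cell Nash inspection of the 2x2 slack/work stage game.
payoffs = {
    ("work",  "work"):  (3, 3),
    ("work",  "slack"): (-2, 5),
    ("slack", "work"):  (5, -2),
    ("slack", "slack"): (0, 0),
}
moves = ["work", "slack"]

def is_nash(a, b):
    # A cell is a Nash equilibrium if neither player gains by deviating alone.
    no_a_gain = all(payoffs[(a2, b)][0] <= payoffs[(a, b)][0] for a2 in moves)
    no_b_gain = all(payoffs[(a, b2)][1] <= payoffs[(a, b)][1] for b2 in moves)
    return no_a_gain and no_b_gain

for a in moves:
    for b in moves:
        if is_nash(a, b):
            print("Nash equilibrium:", a, b)   # prints: slack slack
```

Note that the check never asks whether a strategy is dominant; it only asks, cell by cell, whether some unilateral deviation pays, which is exactly why the inspection works even in games without dominant strategies.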
Okay, so how many people think this scheme would work, the "let's decide to cooperate in fall quarter because we can make future cooperation contingent on it" scheme? I know you've got an incentive to slack off fall quarter, but I can dangle this reward in front of you: if you work hard this quarter, I'll work hard next quarter and the quarter after that, and you can make the same argument to me. Did you guys go for that? Anybody see any problems with it? You aren't going for it because I said it wasn't going to work, and it's not. What is the problem with it? The incentive to cheat. What is the incentive to cheat? David says there's an incentive to cheat, and I'm going to amplify one thing he said: there's an incentive to cheat in the last quarter. And that's the problem. The problem with having a really clear time horizon is that part of this game doesn't have a shadow of the future. If it's fall quarter, there is an incentive to cheat, but I can balance that incentive by saying: you cheat now, I'll cheat next time, and you'll be sorry; you do the right thing now, you work hard, and I'll do the right thing in the future, and you'll be glad. So in fall quarter and winter quarter it's looking pretty good. Spring quarter is the problem. To really elaborate this part of the problem, we're going to combine the simultaneous part of this repeated game (it does have a simultaneous stage game) with what we learned from sequential games about how to solve strategic situations where time is passing. What we learned is to solve backwards: think about the last thing that can happen and reason your way forward. So I'm going to make a different kind of table, not a game matrix; this table is going to have fall, winter, spring. Let's see, do I really need both players' strategies in here? Why not: Bruin A and Bruin B. Bruin A and Bruin B, to their regret in this case, took Poli Sci 30 last year, and they know that in these situations you look forward and reason backwards. So if Bruin B is making this pitch, "yes, let's work hard now, because in the future we can reward each other for working hard," Bruin A is going to be looking ahead to spring quarter and thinking: okay, what am I going to do spring quarter? So let's start in the final round and reason backwards, just as with game trees, where we start at the bottom and reason our way up. It's spring quarter. You are off to Boston, I am off to San Diego, and we're never seeing each other again. You're going to work in the IT industry, I'm going to be a family lawyer; our paths are never going to cross again. It's spring quarter, and you've cooperated with me, perhaps, in the past. What am I going to do spring quarter? I'm going to slack off, right? I'm going to have senioritis, whatever you want to call it; you might want to call it a dominant strategy. In spring quarter there is no shadow of the future. There is nothing in the last round counterbalancing our dominant strategy. So guess what: we're both going to play our dominant strategy. Well, maybe we'll just have senioritis in the spring. Let's think about winter quarter, though. Now it's winter quarter, and Bruin B is saying to Bruin A: work hard this time, and I promise I'll work hard next time, and we can get to this good outcome.
Are you going to buy it? No, and that's the problem. This final-period senioritis, the lame-duck effect that happens in the last round, is going to bubble its way back up. I'm not going to believe that if I work hard this time, you're going to work hard in the spring, because I know that in the spring you have a dominant strategy to slack off, I know there's nothing counterbalancing it, and I know we have no future at that point. So why should I believe you would make good on this promise? I'm not going to, and you're not either. So this problem bubbles up into winter, and it bubbles up into fall too. We would both slack off in winter, because we would realize that come spring there is nothing to keep us working hard; in the spring we're going to follow our dominant strategy, so in the winter the same thing happens. So right off the bat, here's our depressing piece of news: when there is a clear, definite end to our interaction, the shadow of the future does not support cooperation. Translating this into the jargon of game theory, what you'll read in your book and may well hear in other classes (this point comes up a lot in IR classes): in the finitely repeated prisoner's dilemma, when we have a clear endpoint, cooperation cannot be sustained as part of a Nash equilibrium. Like I say, that's the jargon version. The ordinary-language version is: when we know exactly when we're going to stop interacting with each other, we're not going to be able to use a shadow-of-the-future argument to support cooperation.
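Here is a stylized sketch of that look-forward, reason-backward unraveling over the three quarters; it is illustrative and not from the course materials. The helper simply encodes the observation that with no credible future reward, the stage-game dominant strategy prevails, and the loop checks whether the future actually sustains cooperation before counting it as a shadow.

```python
# Backward induction through the three-quarter project, assuming both Bruins
# work hard only if a credible future reward exists.
quarters = ["fall", "winter", "spring"]

def best_response(future_has_shadow):
    # With no future rounds that reward effort, nothing offsets the
    # dominant strategy to slack; with a credible shadow, work can pay.
    return "work" if future_has_shadow else "slack"

plan = {}
shadow = False                       # after spring, there is no interaction
for q in reversed(quarters):         # spring first, then winter, then fall
    plan[q] = best_response(shadow)
    # Earlier quarters have a shadow only if the future itself cooperates;
    # since spring is "slack", the shadow never materializes.
    shadow = shadow and plan[q] == "work"

print([plan[q] for q in quarters])   # ['slack', 'slack', 'slack']
```

Running it shows the whole scheme unraveling back to fall, which is the finitely-repeated-PD result in miniature.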
So, a while ago, actually more than a decade ago, when California and most of the other states had a flurry of adopting term limits for their state legislatures in the early '90s (really, from about 1990 to '94 term limits went from being very unusual to being very, very common), one of the arguments made against term limits was precisely this one. You want legislators to cooperate with each other: if legislators won't cooperate, budgets don't get passed, government is in gridlock, and it's a bad situation if legislators won't occasionally make a choice that works against their individual incentives but is good for the group. When you have this clear "two terms and you're out of the legislature" setup, people argued that it undermined any incentive to cooperate. Now, I don't think it's worked out that way, because even with term limits, the people in the state legislature interact with each other after they leave: they may run for the other house, they may run for other offices, they may be involved in politics in other ways. So while that particular doom-and-gloom story about term limits didn't come true, one thing that certainly did happen around that time was that the importance of parties in recruiting candidates really, really increased. There may have been other factors affecting that as well; if you get five political scientists in a room, you'll get five different theories about why partisan politics in the US is so much more polarized today than it was, say, twenty years ago. But term-limiting state legislative offices, where those offices are a very important step in a person's decision to become a career politician rather than somebody who participates in local government for a short time and then leaves politics, term-limiting at that critical stage does seem to be something that increased partisanship. So maybe a broader lesson is that the potential for eliminating any incentive legislators would have to cooperate was so scary to everybody that some other institution filled the vacuum, and the parties as organizations would seem to be the candidate there. I'm digressing, or making it more concrete, but the bottom line is: the shadow of the future doesn't work if we know when we're going to stop seeing each other. All right. I've got the word "finite" up here too, and you might be thinking: well, the number of rounds of repetition always has to be finite; we don't live forever; we're never going to have an infinitely repeated game, right? That's obviously true. Nonetheless, the way we think most productively about repeated games is to think about a potentially infinite repeated interaction. Potentially, in the sense that we don't know when our last interaction is going to be; we don't know that we'll never see each other again. As long as there's any uncertainty about when we're going to stop interacting, we can think of the interaction as something that potentially goes on forever, but where at each stage there's some probability between zero and one that this stage might be our last; we'll never know for sure. As long as there is some potential that we'll see each other again, we're going to think of the game as what we would call an infinitely repeated game. The idea is never that the repetitions actually go on for infinity. When we say it's an infinite game, what we mean is that we cannot identify the last period: when we're picking our actions in a particular period, we don't know whether we'll interact with that player again. That's going to change things. The whole logic of why, in the last period, nothing would offset the dominant strategy depended on our being absolutely certain that this was the last time we would see each other. As long as there's any uncertainty about that (maybe you'll move back from Boston, maybe I'll change careers), then as long as there's some possibility that I'll remember if you did a bad thing, and also remember if you did a good thing, there is a shadow of the future; there is some possibility of future rewards. Remember, literally, this is the reward payoff in the prisoner's dilemma. These future rewards can offset the dominant-strategy incentive to slack off. Can offset; I didn't say will offset. So the main thing we're going to do when we study repeated interaction is look at when the shadow of the future works: this idea that if I know you've been a good guy in the past, that gives me an incentive to treat you well in the present, and the possibility that you'll remember if I'm a bad guy in the present will keep me from misbehaving, because you'll have an opportunity to punish me. Sometimes that will be true and sometimes it won't. Sometimes the temptation to follow the dominant strategy and slack off will overcome the possibility of future rewards: if I think it's really unlikely that we'll ever see each other again, or the temptation to defect right now is just really, really strong, I get a really big payoff out of it, then
it might be true that even when we're repeating the prisoner's dilemma, we're still going to end up at this cruddy, Pareto-inferior equilibrium. But sometimes the shadow of the future will be enough; sometimes concern with having a reputation for being a cooperator can be enough to support cooperation. That idea of reputation is important enough that I'm going to put it up here. When we have that indefinite time horizon, where we don't know that we'll never see each other again, our reputations matter, and what I mean precisely by a reputation in game theory is how I've acted in previous rounds of the game. (This is the black pen I'm trying to avoid; we'll hide it over here for some other professor, hopefully not, and use green.) Reputation means how a player has behaved: what choices they made in previous rounds, in previous iterations of the stage game. In repeated games, players have a history, and that concern with having a history, with having a good reputation, will affect their strategic choices. All right. So now we need to do a little digression. The big question we're interested in is whether a reward in the future for going against my short-term incentives is going to be enough. How big does that potential reward, from some future interaction that might not even happen, have to be to get me to go against my dominant strategy? In order to answer that question, we're going to have to compare payoffs across time, and that's where things get complicated. What we're going to have to deal with is, I'll just state it as a widespread phenomenon: people don't like to wait for things. People like to get their good payoffs sooner; people are impatient, and I think that's very widespread. One thing I could do is just assert that it's part of human nature, and some of you might agree with me on that basis alone. Another point I might make is: are people impatient, do they always want their stuff sooner? Well, anyone can get run over by a bus. Things can always happen; you're never sure you're going to be around for the future. That promise of "yes, it'll be so great, I'll reward you in the future, we'll both cooperate, we'll get that reward payoff all the time": you just can't be sure. Nobody lives forever; we never know when our number is up. So the general problem, as I say, is that benefits, payoffs, are worth less if you have to wait. Actually, in my mind, the real source of impatience in people is this idea of uncertainty. I do think impatience is part of human nature, but I think it's part of human nature because we live finite lives; we may not be around for future iterations of this potentially infinite game. More concrete aspects of social life can reinforce this. If your payoffs are money and you get your money sooner, you can invest it and end up with more at the end than if you have to wait. If your payoffs are not money but something worthwhile to you in your current situation, your life could change in a way that makes the benefit not worth as much. For example, if you're a politician, and your incentive to cooperate with another politician depends on campaign contributions, endorsements, good press, good relations with party leadership, all of those kinds of things are worth it to you
if you stay in politics; they're not worth so much if you go out of politics. And the farther in the future you have to wait for your benefits, the more likely it is that your life will change in a way that turns something you care about now into something you don't care about in the future. So, the way we deal with this problem of impatience, this need to compare payoffs across time (let me clear up my elaboration space here), is that we account for what's called time preference, or, just giving another name to the same thing, impatience. I'm going to switch, for this part of the lecture, to thinking about the benefits as money. A dollar today is worth more than a promise that you're going to get a dollar in a year. That's what we need to account for. With a money example it could be worth more today precisely because of the interest thing, but all those other sources of time preference also matter: you might not be around in a year; in a year you might have become a Buddhist monk who doesn't care about money. There are all sorts of reasons why you'd rather have your dollar right now. So the way we account for this, the way we put it into game theory: as with calculating expected value, we handle time preference in a very standard way, the way it's handled in economics, in business, in public policy, in any domain where we have to compare payoffs across time, and that is with either a discount factor or a discount rate. I'm going to talk mostly about discount factors; I'll say a little in a few minutes about discount rates. They're really just different ways of looking at the same thing. So, a discount factor. What can I say about a discount factor? First of all, in political science we think of discount factors mostly as part of your preferences. Maybe they're induced preferences, something that reflects your environment and your overall incentives, or maybe they reflect something purely about you, but a discount factor is a characteristic of a player; it's part of the player's preferences. Discount factors are usually represented with the lowercase Greek letter delta, the squiggly little upside-down balloon guy. Discount factors make sense if they are between zero and one. Let me write down the equation that defines a discount factor: the present value of one dollar received one period from now (I'm being vague about the time period; is it a day, is it a year? we can adjust the discount factor to whatever is appropriate) is delta. If you want to make it more general and take the present value of x, it's delta times x. So let's look at the fact that delta is a number between zero and one. As it goes toward one, the person is more patient: the value of future benefits is higher. If delta is 0.8, that's saying: what is the promise of getting a dollar in a year worth to me? It's worth 80 cents. As my discount factor increases up toward 0.99, I'm becoming a more patient person. It's still a dollar in a year we're talking about, but now it's worth 99 cents to me; it keeps most of its value even though I have to wait for it. As the discount factor gets close to zero, the person is becoming less patient: the future matters less than the present.
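A quick numerical check of the definition just given (present value of x received one period from now equals delta times x), using the same delta values from the lecture:

```python
# Present value under a one-period discount factor delta.
def present_value(x, delta):
    return delta * x

for delta in (0.99, 0.8, 0.2):
    print(f"delta = {delta}: a dollar in one period is worth "
          f"${present_value(1.0, delta):.2f} today")
# delta = 0.99 -> $0.99 (very patient)
# delta = 0.80 -> $0.80
# delta = 0.20 -> $0.20 (very impatient)
```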
If my discount factor is 0.2, that's saying a dollar a year in the future is only worth 20 cents to me today. I want it now. In this class, and in virtually all normal applications, we're going to think about discount factors that are strictly between zero and one; we won't think of them as equal to zero or equal to one except as limiting cases. So let's think about what those limits would imply, intuitively. If I say my discount factor is zero, I'm saying there is no future: if I don't get it now, I don't care about it. A discount factor of zero means I care only about the present; I never think about the future, no matter how big the payoff. If you're offering me a million dollars tomorrow, that's worth nothing to me now, because my discount factor is zero; I live in the present. You know, people say things like that sometimes; I usually think it's empty rhetoric. I believe people can be really impatient sometimes. I certainly believe discount factors can get really low, and that people can put not very much weight on the future. But absolutely no weight? Sorry, tell me another one. Same story at the other extreme: if I say my discount factor is equal to one, I'm saying I have no time preference at all. You can make me wait a million years for my dollar and it's worth a dollar to me today. That also sounds truly implausible. Just as I can believe discount factors getting very close to zero, I can imagine them getting very close to one; I don't know about 0.99, but I can imagine people caring a lot about the future, just not to the point where the future is as important as the present. So what we're going to do is use these discount factors in our repeated games to let us compare things we get right now (often our extra benefit from defecting in a prisoner's dilemma) with some future benefit: the possibility that we'll get a higher payoff because we'll both do the cooperative thing in the future. When we do that, we have to take account of the fact that those future benefits, that shadow of the future, are discounted because of time preference, and we account for that with a discount factor. All right, so I want to say a couple of things about discount factors now. One point I want to emphasize is that the discount factors we're using in game theory work just like the discount factors you might encounter in a finance class, in economics classes, in public policy classes. Discount factors are used a lot in the real world. They are sometimes written into law by Congress: when Congress is writing a law for agencies like the EPA or the FDA, all these agencies that set policy in areas requiring the balancing of costs and benefits, sometimes Congress will actually say that a discount factor of 0.9, or something like that, shall be applied to cost-benefit calculations. Management consultants and other people have to use discount factors to recommend one kind of policy over another to the firms they work for; managers have to think about this. So in real life these are numbers that lots and lots of people are working with on their computers, in their spreadsheets. It's one of the places where game-theoretic reasoning really hooks up with some very substantive real-world analysis. That's one point I want to make.
Another point I want to make: I want to get back to this business about discount rates. I'm going to use discount factors; when we use them in political science, I find them easiest, because to me they're the simple way to express this question: how much less is something worth because I have to wait for it? The discount factor puts a number on that. How much less is that dollar worth, given that I have to wait for it? If my discount factor is 0.6, that's what it's worth: the dollar is worth 60 cents today. But in many concrete applications of discounting, the time preference comes from money and from foregone interest. All those managers in firms, all those financial analysts using discounting to recommend one type of investment or one project over another: because their time preference comes so much from interest rates, they are likely to use discount rates. A discount rate is the fraction by which my payoff (money, in this context) increases if I get it sooner. So the discount factor draws our attention to things being worth less in the present because we have to wait; the discount rate draws our attention to the fact that things are worth more in the future if we get control of them in the present. The discount rate is exactly an interest rate. So I want to use this board to belabor this present-value calculation, both in terms of the discount factor and the discount rate. If I'm sitting here at time zero, the amount I have is x0, and I'm going to get x1 in a year, what we would say is: x0 is the present value of x1. That's really just an equation making the same statement I've been making over and over. If I have to wait one period for x1, what's it worth to me today? I'm putting a variable on that, calling it x0, and what it's worth to me is how much I'll get in the future times the discount factor: x0 = delta * x1. Another way we could frame it, though, is: how much do we have to invest now, in period zero, to get x1 in the future? Discount rates are usually symbolized by r, the r that's often used for interest rates as well. We can do the same thing: if I have x0 now, and I'm getting interest on it, if there's some amount it increases by because I have it now rather than later, then x1 = x0 * (1 + r). If r is the interest rate and I want to have five dollars in the future, the way to find out how much I need now is to put the real interest rate in there and solve for x0. So, the other way we could say this: we said x0 is the present value of x1; we can also say x1 is the future value of x0. What I hope you're seeing is that these two equations say the same thing in different ways. One thinks in terms of what we want in the future and what we need today; the other thinks in terms of what we have today and what that translates into in the future. And because they say the same thing, we can derive the relationship between the discount factor and the discount rate. Do I want to do that? Yeah. What I'm going to do is divide through the second equation by (1 + r): I get x0 = x1 / (1 + r). Comparing that with x0 = delta * x1, what it's telling me is: look at these two; delta has to be 1 / (1 + r).
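Putting numbers on that relationship, delta = 1/(1 + r), with a couple of assumed interest rates:

```python
# Converting between a discount rate r and a discount factor delta.
def factor_from_rate(r):
    return 1.0 / (1.0 + r)

def rate_from_factor(delta):
    return 1.0 / delta - 1.0

for r in (0.05, 0.25, 1.0):
    print(f"r = {r}: delta = {factor_from_rate(r):.3f}")
# r = 0.05 -> delta = 0.952;  r = 0.25 -> delta = 0.800;  r = 1.0 -> delta = 0.500

print(rate_from_factor(0.8))   # 0.25: a discount factor of 0.8 is a 25% rate
```

Notice the directions: as r rises, delta falls, which is the "higher interest rate means more impatient" point made just below.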
So if you're given a discount factor, you know the discount rate; if you're given a discount rate, you know the discount factor. They're just different ways of approaching the same idea. We have to belabor this because both are used. Discount factors are used most in political science; they're used ubiquitously in international relations, which, as I've said, is the main place this stuff gets applied. Discount rates are used more often in economic applications, more often when the source of the time preference is foregone interest. Dixit and Skeath use the discount rate; I'm using the discount factor. I say the discount factor seems easier to me, and you're more likely to encounter it in your other political science classes, but you need to be able to work with both. The thing you need to pay attention to is that when a discount factor gets high, it means the person is more patient; when r gets high, it means the person is more impatient. Right? The higher the interest rate, the more I want my money now, because the more it will increase if I invest it; and if the interest rate is low, the person is more patient. It's the opposite with discount factors, just the way they're set up. As the discount factor approaches one, that's saying: yes, the benefits I'm getting in the future are worth almost as much as if I were getting them right now; I'm a very patient person. As it gets low, going toward zero, it means those benefits in the future are worth very little to me, I really don't care about them; I'm more impatient. And you see that here: r is down in the denominator of delta, and as r goes up, delta goes down. (Oh, thank you, good; right with the bell. Did you notice that I'm right on my schedule here?) So: as r gets high, the person is more impatient. Double negatives get me every time. All right, one final thing. Actually, let me just ask you guys: how many of you have done present values in other classes, even in high school? Fewer than I would have thought, unless you're just holding back on me. Okay, good that you're learning it here. It's a really valuable thing to be able to think systematically about this problem of time preference; it's a part of life we're always dealing with, time passing, and discount factors are a good way to do that. The one other thing I have to add is how discount factors apply over longer time horizons. That's something I've been sneaking in on you, saying that if I have to wait a million years and my discount factor is one, then I think it's worth exactly what it would be if I got it today; I used that as an example of why a discount factor strictly equal to one would be unrealistic. But so far I've only talked about having to wait one year, one period. So let's do the final generalization. The present value of x (x benefits, x utility, x dollars, x payoff, x anything) received t periods in the future (trying to be general here, so let's not even say years) is delta to the t power, times x. To see that, let's put in some numbers and think about x in two periods. If I have to wait two years to get my x, I'm going to get x in year two.
What's that promise of getting x in year two worth to me in year one? Delta times x: that's the value in year one, because in year one I'll only have to wait one more year until I get it. And what's that worth to me right now, in year zero? Delta times that value. So here I am in year zero, wanting to know how I should compare this x that I'm going to get in two years with things I'm getting right now: how much would I be willing to give up to get it? The way I think about it, there's a kind of look-forward, reason-backward aspect here as well. I say: well, one year from now, when I've waited half the time I have to wait for my x, I'll have only one more year to wait, and I know what that value is: my discount factor times whatever x is. That's a number. And what is that value, which I'll get a year from now, worth to me today? Delta times that. So for two years it's delta squared times x; for three years, same story; for any number of periods, this is how you account for it: you raise the discount factor to the power given by the number of rounds you have to wait for the benefits. Again, this kind of calculation is the life of a public policy analyst. If you're doing policy analysis for a think tank or a government, what you constantly need to compare is often costs you have to pay right now, upfront costs of building new schools, changing a hiring plan, allowing charter schools, all those kinds of things, against benefits, some of which will happen in a couple of years, some of which you'll have to wait ten years for, others with even longer time horizons. You have to balance these different time horizons, and the way people do it is with discount factors. What real people in the real world do is solve problems that sound like Poli Sci 30 homework problems: how big would your discount factor have to be to make one project a good idea or a bad idea? In many cases we don't actually know what our discount factor should be, but we can figure out how high it has to be to change a decision: looking for a cut point, where on one side of some critical value of the discount factor the present value is too low, and on the other side it's high enough. Okay, questions on discounting? Yes, Elaine: if you have to wait two periods, yes, that's right. And if it's x in seven periods, it's delta to the seventh power times x. If you just think in your head about numbers less than one, when you start multiplying them together they can get very small very quickly. So once you get seven periods out, those benefits, for many discount factors, are going to be really small compared with what you've got today.
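Here are the delta-to-the-t numbers for the example just discussed, using delta = 0.8:

```python
# Present value of a dollar received t periods from now: delta ** t.
delta = 0.8
for t in (1, 2, 3, 7):
    print(f"wait {t} periods: a dollar is worth ${delta ** t:.3f} today")
# t = 1 -> 0.800, t = 2 -> 0.640, t = 3 -> 0.512, t = 7 -> about 0.210
```

So at seven periods out, roughly four-fifths of the value is gone even for a fairly patient delta of 0.8.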
Does that sound like a reasonable way to represent time preference to you guys? You guys are like, "whatever," at this point. I think it is reasonable. It's done a lot, as I've said, in the real world, by professionals trying to make decisions that are right for their company, that they think are right for society, that are good decisions in one way or another. Psychologists say, though, that it's not actually the way people make decisions in their ordinary lives. If you go into the lab and look at how people handle time preference, they are more impatient than what this implies. I just said, in response to Elaine's question (which is what made me think of this), that if you're taking 0.8 to the seventh power, you're going to get to a pretty small number, so something you have to wait seven years for is really going to be discounted a lot. The reality, psychologists say, is that people are even worse than that: people are much closer than we'd like to think to "only the present matters." What people actually do, in reality, is called hyperbolic discounting, and it involves, or reflects, more impatience than the discounting formula. So this is an area of really current research in psychology, and it's a place where this discounting logic that came out of economics and finance and bubbled through game theory and political economy has been very helpful in psychology, even though the psychologists tell us it's wrong, because it gave psychologists a very clear baseline to compare real human behavior against. So that's kind of an exciting thing. All right, I'm going to end today, but I want to say one thing about this week's notes, which I will post by tomorrow afternoon. This week's notes have two sections that I haven't gone over in class, because it's easy stuff; there are two places, and I'll shade them (I think it's colorful) in the notes. One thing in the notes that I'd really encourage you to work through is an example of changing play from sequential to simultaneous; remember I said we didn't need to do that on Tuesday, and we didn't. There is also, at the end of the notes, an example of applying discounting just to decisions: deciding whether a project is worth the cost or not, using discounting. So especially if you're new to using these discount factors, work through that.
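To put numbers on the hyperbolic-versus-exponential contrast mentioned a moment ago, here is a small sketch. The hyperbolic functional form and both parameter values are illustrative assumptions, not from the lecture or the course notes.

```python
# Exponential discounting (the delta ** t used in game theory) versus one
# common hyperbolic form, 1 / (1 + k * t). Parameters are assumed.
delta, k = 0.7, 1.0

def exponential(t):
    return delta ** t

def hyperbolic(t):
    return 1.0 / (1.0 + k * t)

for t in (0, 1, 2, 5, 10):
    print(f"t = {t:>2}: exponential {exponential(t):.3f}, "
          f"hyperbolic {hyperbolic(t):.3f}")
# t=1: 0.700 vs 0.500   -> hyperbolic drops much faster for short waits,
# t=10: 0.028 vs 0.091  -> but flattens out more at long waits.
```

That steep early drop is the lab finding the psychologists point to: people are especially impatient about near-term delays, more so than any single delta can capture.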
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_8_UCLA.txt
thanks, that sounds like it's not quite on the agenda, but it would be interesting to hear Michael Daka speculate about Super Tuesdays of the past. To answer the first question: yes, certainly by next Tuesday, and earlier if I'm organized before that. Okay. On the study sheet that came around, you'll see there are some practice problems: problems in Dixit and Skeath, and some variations on problems you've already done. There's actually quite a bit of practice available on the study sheet, and I will be posting answers for all of those problems on the course website by next Tuesday at the latest, probably earlier. Other midterm-related announcements: for the midterm you will need to bring a blue book. No notes or books or anything like that on the midterm. You can bring a calculator if you want; you're not going to need one, unless, you know, it brings you good luck or something like that. It's fine to express your answers as fractions; no elaborate calculations are needed. One large blue book should be fine. Some of you will be taking the exam in another room. I'm working on that right now, and it's proving harder to arrange than it has been in the past; it's not looking likely that you're going to have a lot of space to spread out during the midterm. What I asked for was either a really big room or another room this size, so you guys could have a little more elbow room than you have right now, and what I'm hearing from the registrar is that that's not available. But I may have one or two sections go to another room, because I actually think that if everybody enrolled in the class were in the room, there wouldn't be enough seats here. When I get that settled with the scheduling people, I will email everybody in the class, put it on the course website, and make an announcement about it on Tuesday. One other thing about the midterm, looking ahead. I've said this to some of you, and I don't remember if I've said it to the whole class, but it probably bears repeating: you want to be prepared on the midterm for some time pressure. My ideal would be a test where time wasn't an issue; it's hard to do that in the amount of time we have. It used to be really hard when I taught this class in the Monday-Wednesday-Friday fifty-minute format; even with an hour and fifteen minutes, in order for me to ask you enough questions for you to show me that you've learned what you need to, time is going to be a little tight on the midterm. I'm saying that now definitely not to make you nervous; what I hope to do is prepare you for it. One thing you can do, especially if you're generally not comfortable working under time pressure, is start practicing: do some of those practice problems on the clock, make up practice variations from your homework and do them with the clock running, so you practice that kind of focus. Another thing I just can't emphasize enough: the midterm and the final exam are exams where you are better off doing something on all of the questions than getting one question perfect and leaving some blank. So my advice to everybody is to try to work through the whole midterm, then go back and check your work. If you work and check as you go and run out of time, you'll be in worse shape than if you work through everything, run out of time, and aren't able to check. It's that kind of test: you're better off doing your first cut
on everything and then using whatever time you have left to dot the i's and cross the t's. The other thing I'll say about time pressure is that it's not so much of an issue on the final exam. It's always a problem on the midterm; just be prepared for it, don't freak out about it, do your best, everybody's going to be under time pressure. And know that on the final exam most people finish; the people still writing at the end of three hours on my final exams, I'm convinced, would be there if they had six hours to do it. Some people just cannot turn in a test early. So those are some thoughts on how best to prepare for the midterm. What I want to do now is go right back to the decision tree I started at the end of class on Tuesday, where I was going down memory lane and telling you about a decision I faced, not in my life as a political scientist or a UCLA professor, but in my life as a mom: was I going to allow or forbid my kid to stand up on his sled when he was sledding with his cousins in Colorado? I started to set up this decision as a tree. Now, I want to reiterate that the difference between a decision and a game, a decision tree and a game tree, is just that a decision tree has only one decision-maker in it. You could imagine a variation of the story with multiple players strategizing, but that wasn't the way it actually was, and I think I can capture the situation pretty well just as a decision. So the nodes are going to be my decisions, plus nature nodes that represent my uncertainty. If I tell my son, "no, don't do that; either sit on your sled or you're out of there," that's my baseline payoff: zero. If I allow it, that's the risky move, the move where I don't know what's going to happen, and just as we do in games, we represent places where we don't know what's going to happen with a nature node. Yes, I did that same thing last time, didn't I; that's a mistake, thank you. Sasha is asking why there are two zeros. There should not be two zeros here: there are two payoffs when there's a game with two players; here there is one zero, and maybe if I write it in red, it'll help me remember it. Only one payoff in a decision tree, because there is only one player. That's true for me, and it's true for you in the first problem on your homework this week, where you have a decision to model: only one payoff there. So let's hopefully stay off autopilot for a little bit and keep the single zero there. The uncertainty, then, is about my payoff, if I allow the reckless behavior, based on whether there's a crash or not. The numbers I picked, somewhat arbitrarily but also somewhat reflectively, to capture my preferences: a serious crash would be a payoff of -10, but allowing him to do something really fun (hard to do in Los Angeles) would be worth five. So those are my payoffs. As I always do with nature nodes, I have to assign probabilities. In this case I'm going to use a variable to represent the probability: I'm going to say I don't know what the probability of a crash is. One way to think about this is that the actual probability of a crash will vary from kid to kid; it's going to depend on whether my son is a good athlete, whether this is a really dangerous hill, what the snow is like. Those would be the main factors determining what specific number p takes.
I'm going to represent it as a variable here, because when I do, there's a question I can ask, and it's the question I left you guys with: how high can p, the probability of a crash, be before the optimal decision, from my point of view, is still to allow? So I'm going to solve this right now. You solve nature nodes in a decision tree the same way you solve them in a game tree: you solve decision trees just like game trees, from the bottom up, and you replace nature nodes with the expected value associated with that choice. So the expected value to me (I'll abbreviate expected value as EV) of allowing the reckless behavior is p, the probability that the kid crashes, times how low my utility is if he crashes, plus the probability that it's just fine (no crash happens, everybody has a truly hilarious time) times the payoff associated with that: EV(allow) = p(-10) + (1 - p)(5). Let's simplify that: I get -10p, and distributing the 5 gives +5 - 5p, so the whole thing is 5 - 15p. (It's 15, right? 15, thank you. You guys are good; you're keeping me on the straight and narrow.) That expected value of allowing is what goes up at the nature node, so let me diligently copy exactly what I'm supposed to. This is my expected value of allowing the standing up, and the answer to the question is: how high can p be while 5 - 15p, the expected value of allowing, is still greater than zero, the known value of forbidding? When 5 - 15p > 0, the optimal choice is allow. So we can just solve this: take the 15p over to the other side, 5 > 15p, and it looks like as long as p is less than 1/3, the right choice is to allow the standing up. So whenever conditions (kid, hill, the whole thing) are such that the probability of a bad crash is one-third or less, I'm doing the right thing by allowing my son to do this. Yes, Lilian: when p is exactly 1/3, does that mean you don't know what to do? When p is exactly 1/3, it means I'm truly indifferent: the good possibility of just having fun and not crashing exactly balances the bad possibility of crashing, taking into account how likely each is. And Lilian went on to ask, do you just not know what to do? I think that's the right interpretation of being indifferent here. Another way to say the same thing: neither choice would be wrong in that case, neither choice would be a mistake, neither choice would lead me to have systematic regrets. Yet another thing we could think about, which we'll start considering after the midterm, is that in that case it wouldn't be wrong for me to flip a coin, just because I don't care. In any other case I wouldn't do it; I'd really be thinking, okay, what should I do, what shouldn't I do?
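Here is the sledding calculation in code form, using the payoffs from the lecture (crash = -10, fun = +5, forbid = 0):

```python
# EV(allow) = p * (-10) + (1 - p) * 5 = 5 - 15p, compared against EV(forbid) = 0.
def ev_allow(p, crash=-10, fun=5):
    return p * crash + (1 - p) * fun

for p in (0.05, 1 / 3, 0.5):
    ev = ev_allow(p)
    if abs(ev) < 1e-9:
        choice = "indifferent"
    elif ev > 0:
        choice = "allow"
    else:
        choice = "forbid"
    print(f"p = {p:.3f}: EV(allow) = {ev:+.2f} -> {choice}")
# The cut point is p = 1/3: below it allow, above it forbid, at it indifferent.
```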
All right. I used this example; I bet there aren't many people in this class who have kids themselves, except for June in the back, and his kids aren't quite at the stage yet where they're challenging him in these ways. But some of you can probably relate to this kind of story from the other perspective: your parents have had to make these kinds of decisions for you. I picked it because it's a good decision, or at least a vivid one in my mind, for illustrating the idea of an ex post mistake. That's something you're asked about in the first question on the homework, and I didn't get a chance to talk about it on Tuesday. The question came up on the bulletin board, and Flory, one of our TAs, gave what I thought was a very good, succinct answer to what an ex post mistake is: it's a mistake that you don't see, that isn't obvious, until the game (or in this case the decision) is over. So, this really did happen, on January 2nd. I'm in Colorado, my son's sledding away, and I think the probability, as I'm standing there that sunny morning in Longmont, of him having a crash was substantially less than one-third; probably more like five out of a hundred. It wasn't that big a hill, and my son's pretty coordinated. Nonetheless, he did have a crash. And that's the point I want to make here: I made the right choice. Next year, if we're at that same hill and he's doing the same thing, I'll probably make the same choice again, even though he did crash, broke his shoulder, all day in the emergency room; it was bad. But it was not an ex ante mistake. It was bad that it worked out that way, but, like, I'm a good mom: I didn't make a stupid choice there, I made an unlucky choice. You're laughing, but this happens, right? Sometimes you make the right choice: you use all the information available to you, you're really thinking, you weigh the pluses and minuses, you think about how likely they are, you're conscientious, and you make a choice that is (to jump ahead) ex ante correct. Ex ante and ex post are just Latin phrases for "before the fact" and "after the fact." Before I knew how that day was going to unfold, my choice to allow the standing up on the sled was correct. My expected utility, which in lots of ways incorporates my son's expected utility, his cousins', my dad's, everybody's there, was higher from that option. I got unlucky; we all got unlucky that day. So it was, ex ante, the right thing to do. A choice is ex ante correct if it is the best choice given the information available at the time of the decision. It could still be an ex post mistake: you could still have that feeling of "ah, if I'd only known, I would have made the other choice." I certainly, that morning, wished I had forbidden the activity. Whenever you make a choice under uncertainty, whenever you're not quite sure what the consequences will be, if the thing you're uncertain about actually matters (so that if it works out one way you'd choose one thing, and if it works out the other way you'd choose the other), there's always going to be the possibility of an ex post mistake. When you make a choice and you're not sure what the final consequences will be, there's always the possibility that you do the best you can, you choose the highest expected utility, and things still work out badly. You'll never get rid of ex post mistakes; what you won't make, if you're using decision theory or game theory, are ex ante mistakes. An ex ante mistake in this game, given that p was less than one-third, would have been to forbid it.
I'm sure I would have been accused by many generations of my family of being way too uptight, of not making the right decision, and they would have been right. Really, it was not an unreasonable thing to allow. All right, so two other things I want to say: one about ex post mistakes, and one about probabilities in general. Stepping back to the big picture, to what we're representing with nature nodes: these probabilities are parameters of the game, just like when we use variables in the payoffs. So what I'm really representing here, in this case a decision, is a whole family of decisions with the same structure, in this case the same payoffs, that just vary in the value of p: a whole set of scenarios that involve a possibility of a crash or not. What I'm doing is analyzing a whole bunch of sledding decisions that could occur, for different probabilities. Let me say that differently. If I say that p equals (I'm going to pick a different value now) 0.5, I'm saying something about a very specific situation: a specific kid, a specific slope, a specific kind of foolish behavior. I can use the same tree with the variable to think about a much worse case: p equals 0.95. Now the thing I'm allowing is standing up on the sled while juggling an axe on the way down, or something like that: a much, much higher likelihood of a crash. And in this case, with this probability, the optimal decision is to forbid: no, put the axe back in the car, we're not doing that. That is the optimal, ex ante correct decision; this is the decision I would make, and I wouldn't even have to draw a tree for that one. What's the probability of an ex post mistake in this case? Any thoughts? How many think the probability of an ex post mistake is zero? Why? You're not going for it. Okay, maybe it's reasonable to think it's zero. It's not, though. What would an ex post mistake be here? It's much harder to see in this scenario. My optimal choice is to forbid, but I could be wrong, right? The probability that we have a bad accident doing the silly thing with the axe while we slide down the hill is 0.95, but there's that other 0.05 probability that we get to the bottom of the hill, the axe is still spinning around, nobody's hurt, and what a rush that would have been. That would be the ex post mistake. So in this scenario there's still the possibility of an ex post mistake, but Kyra was thinking it wouldn't be there, and I think the intuition is: we'd never see it, we wouldn't know. If I say no, you're not doing this risky thing, we'll never know whether the ex post mistake would have occurred; we'll never know whether my son would have been so lucky that he could have done this outrageously dangerous thing and gotten away with it. So we still have the probability of an ex post mistake. It's there; it is 0.05. But we will never know whether it occurred.
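A small sketch of the ex ante / ex post distinction in numbers, using the sledding payoffs; the framing of "ex post mistake probability" as the chance that the scenario you bet against occurs is my gloss on the lecture's examples:

```python
# The ex ante correct choice depends on p; the chance of an ex post mistake is
# the probability that the outcome you bet against is the one that happens.
def analyze(p):
    allow = (5 - 15 * p) > 0            # EV(allow) versus EV(forbid) = 0
    choice = "allow" if allow else "forbid"
    # Allowing is regretted when a crash occurs (probability p); forbidding's
    # mistake is the never-observed no-crash world (probability 1 - p).
    p_ex_post_mistake = p if allow else 1 - p
    return choice, p_ex_post_mistake

for p in (0.05, 0.95):
    choice, q = analyze(p)
    print(f"p = {p}: ex ante correct choice = {choice}, "
          f"P(ex post mistake) = {q:.2f}")
# p = 0.05 -> allow, mistake prob 0.05;  p = 0.95 -> forbid, mistake prob 0.05
```

Note the symmetry in the output: both decisions are ex ante correct, and both carry a 5% chance of an ex post mistake; the difference is that only the "allow" mistake would ever be observed.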
Both of those things go into the expected utility: how good or bad the thing is, and how likely it is. The scenario that's not governing our choice is the one that happens. Whenever you're making a choice under uncertainty, it's always possible that the choice will be wrong. Sometimes it will be wrong in ways you will be very aware of, and you will have clear ex post regrets; but there are other cases where you won't be called on your ex post mistake, even though it's still possible that it's there. Yes, could the probability be higher? Good question: it's not always the higher probability that goes with the choice we make. For example, let's say the probability of a bad crash is 4/10. That's not less than 1/3, it's greater than 1/3, so I would forbid him to do it. If he were so empowered with decision theory, he would say, "But Mom, I'm more likely not to have a crash than to have one." The reason I would still go with the decision that corresponds to the lower-probability event is that the consequences are so bad. The question is, wouldn't it always be safer to choose the higher baseline probability? No, only when the two events are roughly equally good or equally bad. That's actually the reason we go through the expected utility calculation: we don't want to just choose the decision that goes with the more likely event, we also want to think about whether the consequence is really bad or really minor. Where your intuition would be right is if the distances between forbidding and the good outcome and the bad outcome were the same; if I had a -5 here instead, that would give me 1/2 as my cut point. Okay, another thing I want to emphasize. This is stepping back for a minute, but it's really, really important, so I'm going to write it over here in my space for really important things. Whenever you have problems with probabilities in them (right now we're using probabilities to deal with uncertainty about outcomes; we'll use them for other things in the course), remember that probabilities are not just any number. First, they must be non-negative; I say non-negative rather than positive because a probability can be zero. Second, they must add up to one. This fact, that if we're considering all the things that can happen, the probabilities attached to those events have to add up to one, helps us a lot. We relied on it here: once I said the probability of crashing was p, I knew the probability of not crashing; once I said the probability of a crash is 40%, I know the probability of no crash is 60%. If your probabilities add up to a number less than one, it means there's something that can happen that you don't yet have in the tree, and you need to fix that, either by fixing the probabilities so they do add up to one or by adding in the outcome that's missing. If they add up to more than one, you've got too much there: either too much probability or too many events. So they have to add up to exactly one. In every case we're going to look at in this class, either one thing can happen or another.
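A small helper capturing this rule might look like the following sketch; the function name and error messages are just illustrative:

```python
import math

# Branch probabilities at a nature node must be non-negative and sum to one.
def check_nature_node(probs):
    if any(p < 0 for p in probs):
        raise ValueError("probabilities must be non-negative")
    total = sum(probs)
    if not math.isclose(total, 1.0):
        hint = ("an outcome is missing from the tree" if total < 1
                else "too much probability or too many events")
        raise ValueError(f"probabilities sum to {total}: {hint}")

check_nature_node([0.4, 0.6])    # fine: P(crash) = 40% implies P(no crash) = 60%
# check_nature_node([0.4, 0.5])  # would raise: a 10% outcome is unaccounted for
```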
It's not that hard to think about multiple possibilities, but we don't need to in this class; we're just going to think about things being one way or another, and that means that with just two possibilities, if we know the probability of one, we can figure out the probability of the other. Yes, can you say an ex post mistake is a mistake that you don't see until the game is over? Do you mean "see" in the sense that you don't foresee it, or that you don't see it happen? I would say you don't see it happen. Going back to my case, with the fairly low probability of a crash, my expected utility from allowing was higher than from forbidding. It wasn't that I didn't foresee the possibility of a mistake; I definitely was watching, and the possibility was flashing before my eyes. I foresaw that it could happen, but I didn't know that it would happen. The ex post mistake is the thing that actually does happen that makes you wish you'd done the other thing, wish you'd forbidden the behavior before it started. But what makes it just an ex post mistake, a mistake you see only after the fact, is that it wouldn't change the way you'd make the decision if you had to do it over again with what you knew at the time. An ex post mistake is unavoidable; an ex ante mistake means you really did something wrong: you didn't think about what the probabilities were, didn't balance the utilities correctly, weren't making the optimal choice. So an ex ante mistake is one where, if you had to do it over again, you'd do it differently; an ex post mistake is one where, if you had to do it over again with the same information, you'd make the same choice. All right, we're going to go back now to the rich country, poor country, riot, no riot scenario and do some of the same analysis plus some new things. As I'm erasing, let me emphasize: what I'm erasing was a decision with just one person's payoff, because only one person's choice mattered there. Now we have the rich government, which can send aid or not, and we're back in the world of game theory, where two different governments are strategically interacting, so we need a payoff for each. When the rich government doesn't send aid, the poor government doesn't have a choice, and the payoffs are just (0, 0). If the rich government does send aid, the poor government gets to decide what to do with it: education or limousines. If the poor government spends it on education, that gives us payoffs of (2, 2), a nice outcome, better for both players than no aid. If the poor government spends the aid money on limos, then nature has a move, and nature decides whether there's a riot or not. Saying that without the jargon of game theory: if the poor government chooses the limousines, we don't know what the outcome will be. One possibility is that the poor government just gets the limousines and there's no reaction among the population; the payoffs associated with that are -2 for the rich government, which is humiliated and annoyed with the poor government, and a payoff of 5 for the poor government, which is enjoying its limousines. But the other possibility is that there really is a riot.
And if that's the case, the rich government's payoff is still the same: the rich government gets -2 if its aid money is squandered, regardless of what's happening in the street. But now the poor government's payoff is also low, -2, from a riot. Having your limousines and having a riot is the worst possible thing here for the poor government, worse than not getting the aid in the first place; spending the aid money on limousines and not having a riot is the best thing in this game from the poor government's point of view. We did this on Tuesday, so I'm not going to rewrite it; I'll just summarize. On Tuesday we took the probability of a riot to be 50%: just as likely that we have one as that we don't. Now, as I'm writing that up there, I'm realizing something's left out of my game tree. What did I leave out? The probabilities. Remember, when we add a nature node we don't need to add payoffs, because nature is not getting a payoff here; but nature nodes are solved by expected values, which involve probabilities. Those probabilities let us make the best use of what we know about the likelihood of a riot. Before, we had probabilities of 0.5 in both places; what I want to do now is change that. Let the probability of a riot be 0.25. So what's the probability of no riot? 0.75, yes. Let's solve it with that. Before I do, let's remind ourselves what happened with Tuesday's probability of a riot: a different probability, so a different game, the same in every aspect of the structure except the probabilities. In that case the rollback equilibrium was (Aid; education if aid). That was a nice equilibrium, and a Pareto efficient one. Now, switching to red for the new probabilities, I'm going to solve it the same way: replace the nature node with its expected value for each player. Last time I asserted that if you plug these numbers into the expected value formula you get the answer that makes sense, even though in this case there isn't really any uncertainty about the rich government's payoff. You don't need to calculate an expected value for the rich government, but say you're under time pressure and you just grind through it anyway: will you get the wrong answer? No, you won't. I'm going to switch to fractions, since it's easier for me to do the multiplying, and you guys are going to watch me like hawks to make sure I'm doing my algebra on the board correctly. For the rich government: with probability 1/4 my payoff is -2, plus with probability 3/4 my payoff is -2. I'll leave some space here to do the poor government and then finish solving this: -2/4 plus -6/4, that sounds like -8/4, which is -2. So (-2 - 6)/4 is indeed -8/4 = -2. I'm going to put "expected" in quotes, because we don't really need to calculate the expected value of the rich player's payoff; if these numbers were different, we wouldn't have the same value on both branches. For the poor government, using the same formula: with probability 1/4 there'll be a riot and they also get a payoff of -2; with probability 3/4 there'll be no riot and they get a payoff of 5. Again, bringing it down line by line and simplifying:
here I get (-2 + 15)/4 = 13/4, and that sounds like three and a quarter. Does 13/4 sound like three and a quarter to you guys? I think it does. So here is the expected payoff of the poor government. Doing my calculations in bitter detail, what I've done now is solve the nature node by replacing it with its expected payoffs: -2 and 3.25. I've rolled up this part of the tree, and now I'm continuing with my backward induction process, continuing to solve from the bottom up. So if I get to this node and I'm the poor government, what's my choice going to be? Limo. Yes, a riot can happen; I'm balancing the fact that a riot can happen and gives me a bad payoff against the fact that it's more likely not to happen, and my payoff is really good if it doesn't. So my ex ante correct choice as the poor government, strategically speaking, not ethically speaking, is to take that aid money and buy the limos. Now this expected payoff comes up here as the strategic equivalent. That's a little different from what we were doing in games without uncertainty, but it doesn't matter whether what we replace a node with is the expected payoffs from a nature node or the optimal choice from a decision node; whatever we replace it with just bubbles right back up the game. So the strategic equivalent to the rich government of sending aid is a payoff of -2. What's the rich government going to do? Not send aid, right. So when the probability of a riot is a little lower, p = 0.25, the rollback equilibrium is (No aid; limo if aid). In one sense the nature node changes the game, but lots of things don't change. The rollback equilibrium still just has a strategy for each player; you wouldn't know there's a nature node just from looking at the rollback equilibrium, because it doesn't show up in the equilibrium at all. The strategies look the same, and the idea is even the same: even though neither the rich government nor the poor government knows what would really happen if the poor government buys the limousines, the rich government knows what the poor government thinks about it. The rich government can anticipate how the poor government is going to make this choice, even though the poor government is uncertain about the consequences. That's why the rich government can say: those guys may be corrupt, but they can do their expected values; they're going to buy the limousines; I'm saving my aid money. Now, in a game we can actually think about an ex post mistake for both players. Kyra, for this whole game p = 0.25. What I'm doing here is contrasting it with the game we did on Tuesday, which had probabilities of 0.5 here and here. I didn't redo that, but it was the main thing we did on Tuesday; we found the rollback equilibrium there, and it was one way I was trying to answer your question about punishment: with that 50% chance of punishment, in the form of uncertain consequences, we got the good rollback equilibrium, and today's answer is different. The only difference (let me use a different color here), the only difference is the probability of a riot; everything else stays the same.
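The whole rollback can be written as a short script. This is just a sketch, using the payoffs as stated in lecture: no aid gives (0, 0); aid then education gives (2, 2); aid then limos gives (-2, -2) with a riot and (-2, 5) without one:

```python
# Rollback with a nature node: solve from the bottom of the tree up.
def rollback(p):
    # Step 1: replace the nature node with expected payoffs.
    rich_limo = p * -2 + (1 - p) * -2          # -2 either way
    poor_limo = p * -2 + (1 - p) * 5
    # Step 2: the poor government's choice after receiving aid.
    poor_plan = "education" if 2 > poor_limo else "limo"
    rich_if_aid = 2 if poor_plan == "education" else rich_limo
    # Step 3: the rich government anticipates that choice.
    rich_move = "aid" if rich_if_aid > 0 else "no aid"
    return rich_move, f"{poor_plan} if aid"

print(rollback(0.25))  # ('no aid', 'limo if aid')       -- today's case
print(rollback(0.5))   # ('aid', 'education if aid')     -- Tuesday's case
```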
So with these red probabilities, today's probabilities, what would be the probability of an ex post mistake? Actually, you're asking: do we have to pick each probability one at a time? No. This is where we can use a variable. Let me erase the red probabilities; we'll let p be a variable, and then for all values of p we'll be able to say: for these values the equilibrium will be this one, and for the other values it'll be a different one. So, in green: how does the equilibrium, what we predict will happen in this game, depend on p? We've seen that if we pick one value we get a good equilibrium, where the rich country sends aid and the poor country does what the rich country was hoping it would do; but if we pick a different value, the equilibrium changes. Let's treat p as a variable to see how high the probability of a riot has to be in order to keep the poor government from squandering the money on limousines. Just as before, once I write down p over here, I know what I have to write over there: if this one is p, the other one is 1 - p. Now when I do my expected payoffs, I remember that no matter what p is, the expected value for the rich government is -2; it's the same under either scenario. But I am going to calculate the expected payoff of the poor government, because it really does depend on p; it's different for different values of p. I get -2·p + 5·(1 - p); again, you guys are watching me like hawks on this, right? Simplifying, I get -2p + 5 - 5p, and that sounds like 5 - 7p. Everybody okay with that? Looks pretty good. So now, what's the poor government going to do if it gets aid? Sometimes you'll hear people say the first law of social science is "sometimes it's this way, sometimes it's that way," and that's the answer here: it depends. We're going to have to do cases. We have to think about whether 5 - 7p is greater than 2 or less than 2. So, going along here and solving the game, bringing down this board with p now as a variable: case one is the case where 2 is greater than 5 - 7p. What do we do in this case? We're the poor government; we choose education. In this case, what I get from spending the aid money on education, 2, is bigger than what I get on expectation, on average, from spending it on limousines, so the optimal choice is education. We phrased our question in terms of p, and it seems natural to think of this in terms of p, so let's rearrange that inequality to make it an expression about p. Swapping the -7p and the 2 across the sides, I get 7p > 3. So as long as p is greater than 3/7, the poor should choose education at, we'll call it node 2; I haven't been numbering my nodes, but let's do that right now, one and two.
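A quick numerical check of that algebra, with the threshold written out:

```python
# Education's sure payoff of 2 beats the limos' expected payoff of 5 - 7p
# exactly when p > 3/7.
for p in (0.25, 0.75):
    eu_limo = 5 - 7 * p
    choice = "education" if 2 > eu_limo else "limo"
    print(f"p = {p}: EU(limo) = {eu_limo:.2f} -> {choice}")
# p = 0.25 -> limo (3.25 > 2);  p = 0.75 -> education (-0.25 < 2)
```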
Now, before case two, let's finish case one, the way I've normally been doing it. If the poor chooses education at node 2, the rich government can anticipate that the poor is going to choose education at node 2. What's the rich government going to do? Send aid. And now I'm done. So for one case, the blue case, where this is true, where 5 - 7p is less than 2, the equilibrium path is for the poor to choose education here and the rich to choose aid: a happy outcome. The other case I'll write in red; it's just the reverse. If 2 is less than 5 - 7p, the choice is going to be limo. It's just the flip side: if this number is less than that one, the poor's optimal choice is limousines, the strategic equivalent is (-2, 5), whereas in the blue case it was (2, 2), and the rich government's optimal choice at node 1 is going to be no aid. So what we've done now is solve this game for every possible value of p. I'll do the answer here in black: whenever the probability of a riot is less than 3/7, the rollback equilibrium is (No aid; limo if aid); for a probability of a riot greater than 3/7, the rollback equilibrium is (Aid; education if aid). So we didn't have to plug p = 0.75 in there to answer your question; all we have to do is say that 0.75 is greater than 3/7, so we know that high a probability of a riot is enough to get us into what we would usually think of as the good equilibrium: the one where aid is sent and spent in the manner intended. Yes, how do we know whether the strategic equivalent should be (-2, 2) or (-2, 5)? Oh, thank you, thank you; I wasn't seeing what your problem was, and you saw a mistake here. What's your name? Joyce. (I'm sometimes asking names and sometimes not, and remembering some of what I'm told and not the rest; doing the best I can.) Joyce is looking at my strategic equivalent here and wondering how I got it, and she's right: that's not the strategic equivalent. Very good that it didn't look right to you. The -2 is right, and I think that's why I didn't catch it, because the -2 was all I needed to know what the rich government should do; but the poor government's entry is really not the raw payoff. The strategic equivalent is what we think will happen at this node, so the correct strategic equivalent here is (-2, 5 - 7p). Thank you. The general point: when you are calculating the strategic equivalent of a node where you've already done some expected values lower down in the game, those expected values bubble up in the strategic equivalent. That's different from a decision node, where the actual payoffs bubble up. So again, let me emphasize what's going on: we have the expected payoffs from the nature node in the strategic equivalent. Yes, why don't we color in the branches that shoot off the nature node? What's your name? Tiffany. Tiffany's question is why we don't color in the branches shooting off of the nature node, and that's a very good question. Let's think about what we mean when we color branches: what I'm doing is highlighting the path that the decision maker is going to choose.
When I color a branch, I'm looking at the payoffs of the person who controls that branch and comparing them. I colored blue over here because in the blue case the poor government's payoff is higher. But nature is never going to make a decision. So that's one of the things we do differently with nature nodes than with decision nodes: because nature isn't making a decision, we just leave it ambiguous. When we replace the nature node with the expected value, the expected value has parts of both branches in it; that's why we can never say it will go one way or the other until the game actually runs, and then sometimes we see what happens and sometimes we don't. Actually, in this game as I've set it up, we would never see what happens at the nature node. Either we're in the case where the rich government sends aid and the poor government is so worried about riots that it spends the money on education, or we're in the case where the poor government would take the risk but the rich government would not send aid. So here's a game where we'll never find out whether the riots would actually have happened: the riots are always off the equilibrium path. Now, there could still be ex post mistakes, those sneaky ex post mistakes we'll never know about, like maybe I should have let my son juggle the axes while he was sledding. Maybe when the rich government sends aid and the poor government spends it on education, the poor government is being paranoid: maybe they could have had their limousines and there wouldn't have been a riot after all. They'll never know. And the probability of the ex post mistake in that case is the probability that there wouldn't be a riot: 1 - p. Can I repeat that? Let me rewind my script to where we started. When the poor government chooses education, say because p, the probability of the riot, is one half (the case we were thinking of before), we'll never know whether there would have been a riot; but the probability of one was only 50%, which means a 50% probability there wouldn't have been one. So there's still, in that case, a 50% probability of an ex post mistake: a 50% probability that if the poor government had known what would happen, it would have chosen the limos, but it was just so worried that it didn't. Okay. I've noticed a pattern: whenever I say I have just five minutes left and just enough to tell you something in five minutes, I'm never able to finish it, so today I'm not even going to go down that path, although that could be an ex post mistake right there; I'll never know, and neither will you. Today might have been my day. But it's actually a pretty good stopping place. I do have one more topic to cover about uncertainty: uncertainty about the preferences of a player in the game. It's something you'll need to know for the final exam but not for the midterm; the midterm covers up to this point. We'll see you on Tuesday.
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_13_UCLA.txt
I am returning to exactly the same form of the cops and robbers game that we worked on on Tuesday; I've got too many pages here, but this looks like what I need. We have cops, who can be on the beat or in the donut shop, and robbers, who can be at work or at home. Payoffs: 2 and -5 in this cell, 1 and 0 in this cell, 5 and 0 here, and -5 and -5 there. The main thing we saw on Tuesday was that this is a game that has no Nash equilibrium in pure strategies: no matter what cell we end up in, one of the players will have regrets. You guys okay? Am I confusing myself? Some people are shaking their heads. Oh, thank you, good; I like seeing all these furrowed brows, and that is a problem, isn't it? This is what the payoffs should be: it doesn't make sense for the robbers' payoff to be low in the case where they're out robbing people and the cops are in the donut shop. That's the situation the robbers like; they get to get away with their crime. And indeed, if we think for a minute about my mistake, the game I had on the board does have a pure strategy equilibrium. With those payoffs, the robbers have a dominant strategy: if the robbers get a conscience and decide they really don't like crime, whether they get away with it or not, then they're always going to stay home; and once the robbers play their dominant strategy, the cops have a dominant strategy to be in the donut shop. So the game I had on the board wasn't one with no Nash equilibrium; it had a dominant strategy equilibrium. Okay, deep breath: this is the game, the game we had on Tuesday, the game that makes sense for the cops and robbers story. If the cops are on the beat and the robbers are at work, the robbers are the ones with regrets; they wish they'd stayed home. If the robbers stay home and the cops are on the beat, the cops are the ones with regrets; they wish they'd gone to the donut shop, since no crime is happening. If the cops are in the donut shop and the robbers are home, the robbers have regrets: huh, we could have gotten some action today. And finally, the point you raised: if the robbers are out there robbing and the cops are in the donut shop, the cops have regrets, because whoops, there's a lot of crime happening while we're in the donut shop, and we don't like that. Good, now we're back to where we were on Tuesday: we've got our game with no pure strategy Nash equilibrium. On Tuesday we did find the mixed strategy Nash equilibrium, and what did we find? A probability for each player, a probability distribution over each player's strategies, such that, given one player's probability distribution, the other player doesn't regret theirs, and vice versa. Given the probability that the robbers are at work, the cops can't do better than their equilibrium probability, and given the cops' equilibrium probability, the robbers can't do better than theirs. The way we wrote the mixed strategy Nash equilibrium was to assign a variable to each player's probability: we had P be the probability the cops are on the beat, and we found that to be one half; and we let Q be the probability the robbers are at work, and that was 4/11.
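Here is a sketch of that cell-by-cell regret check in Python, using the corrected payoffs above; the loop structure is just one way to organize the search:

```python
# rows = cops (beat, donut), columns = robbers (work, home),
# entries = (cops' payoff, robbers' payoff).
payoffs = {
    ("beat", "work"):  (2, -5),
    ("beat", "home"):  (1, 0),
    ("donut", "work"): (-5, 5),
    ("donut", "home"): (5, 0),
}
cop_moves, robber_moves = ("beat", "donut"), ("work", "home")

for c in cop_moves:
    for r in robber_moves:
        u_c, u_r = payoffs[(c, r)]
        other_c = [m for m in cop_moves if m != c][0]
        other_r = [m for m in robber_moves if m != r][0]
        cop_regret = payoffs[(other_c, r)][0] > u_c      # cops wish they'd switched
        robber_regret = payoffs[(c, other_r)][1] > u_r   # robbers wish they'd switched
        if not cop_regret and not robber_regret:
            print("pure strategy equilibrium at", (c, r))
# Prints nothing: every cell leaves one player with regrets.
```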
So one part of our interpretation of the mixed strategy Nash equilibrium here is that this is a game where, if you are predictable, you'll be sorry: you will be out of equilibrium, you will have regrets; it's not a self-reinforcing pattern for the players to be predictable. But in looking for the mixed strategy Nash equilibrium, we also found that not just any probability will do. There is only one probability of the cops being on the beat that actually makes the robbers willing to randomize, to choose randomly between their two strategies. If the cops are on the beat with probability greater than one half, the robbers are not going to be willing to choose at random whether they go to work; they would do better by staying home. So if the cops are not on the beat with probability one half, random strategies by both players are not in equilibrium. What I'm emphasizing beyond Tuesday is that the cops' equilibrium probability being exactly the right number does not affect their own payoff: if the cops pick too high a probability but we leave the robbers' probability the same, the cops do no better and no worse. Remember how we found these probabilities: we pinned down the cops' probability because it was the one that made the robbers indifferent between their pure strategies. And given that the robbers are making the cops indifferent between their pure strategies, the cops, given the robbers' probability, do just as well on the beat as in the donut shop. They're indifferent between being on the beat, being in the donut shop, flipping a fair coin, or flipping a coin weighted one way or the other. So the main subtlety in interpreting mixed strategies is that neither player does worse by picking the wrong probability; rather, the system won't be in equilibrium. It's less clear that a player who cared about this game would do the mental work to get the probability exactly right. Okay, so that's recap. What I want to do now is think a little about what else we've learned about this game, in particular about outcomes. I'll put my equilibrium probabilities here: with probability 1/2 the cops are on the beat, and with probability 1/2 they're in the donut shop; using this definition of P, if I'm on the beat with probability 1/2, the only other thing I can do is be in the donut shop, so that has to have probability 1 - P. With probability Q = 4/11 the robbers go to work, and so what's the probability they stay home? 7/11, very good. So in equilibrium we can end up in any one of these cells. The strategies are chosen at random; the players don't know in advance what their strategy will be, but something is indeed going to happen. With probability 4/22 we end up in this cell, where the cops randomly decide, yes, today we're going to work, and the robbers randomly decide, yep, we've got a lot of energy today too. We'll see that some fraction of the time. What am I doing to get that number? I'm multiplying the probability associated with the cops' strategy by the probability associated with the robbers' strategy. And if you're thinking back to other classes where you've used probability and asking, can she really multiply the probabilities: I can, because these random choices are being made independently.
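Backing up a step, the indifference conditions that pin down these two probabilities can be verified directly; this sketch uses exact fractions so the check isn't clouded by rounding:

```python
from fractions import Fraction as F

# Robbers' indifference pins down p, the cops' probability of being on the beat:
#   EU(work) = p*(-5) + (1-p)*5 = 5 - 10p   must equal   EU(home) = 0
p = F(1, 2)                       # from 5 - 10p = 0

# Cops' indifference pins down q, the robbers' probability of being at work:
#   EU(beat) = q*2 + (1-q)*1 = 1 + q   must equal   EU(donut) = q*(-5) + (1-q)*5 = 5 - 10q
q = F(4, 11)                      # from 1 + q = 5 - 10q, i.e. 11q = 4

# Sanity checks: at these probabilities each player really is indifferent.
assert q * 2 + (1 - q) * 1 == q * -5 + (1 - q) * 5   # cops indifferent (both sides 15/11)
assert p * -5 + (1 - p) * 5 == 0                     # robbers indifferent
```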
If there were some other factor affecting both the cops' choice and the robbers' choice, if these two random variables were related, then I wouldn't be able to multiply the probabilities. But in the mixed strategy story, the idea is that the players are choosing randomly, independently of each other. The mixed strategy choices are independent, random, and governed by these particular mixing probabilities. So this probability here is the probability that the cops are on the beat (I'm writing it out in words to be very clear) times the probability that the robbers are at work. To figure out the probability of any particular outcome in a mixed strategy Nash equilibrium, we just multiply the probabilities associated with the two strategies that create that outcome, and we can do that for all the cells. The probability that the cops are on the beat and the robbers stay home is 1/2 × 7/11 = 7/22; the same 7/22 shows up here for the donut shop and home; and down here, the probability that the cops are in the donut shop and the robbers are at work is 4/22. Take a deep breath and ask: do these probabilities add up to one? You always have to check that if you're getting the probabilities right. Well, four plus seven is eleven, plus eleven is twenty-two; we're fine. So in the mixed strategy equilibrium we'll see all of these outcomes with some probability, and the probabilities are not the same: it's more likely that the robbers stay home than that they go to work, but it is possible that they go to work, and when the robbers are out there robbing people, sometimes they get away with it and sometimes they don't. So if you're asked, as you will be in next week's homework, to interpret a mixed strategy Nash equilibrium, asked what's going to happen, one thing you can say is that you'll see all these possibilities: robbers active sometimes, not active other times; cops working hard sometimes, not other times. And you can say something about what's relatively more likely: if this is happening over and over, we'll see more days with no crime, because the robbers stay home, than days with crime because the robbers are at work. The kind of answer I'm looking for when I ask what you expect to happen is the set of probabilities associated with all these possible cells. Yes, is there a pure strategy Nash equilibrium here? Let me go over this; we did it at the end of class on Tuesday. There are four possibilities. I'll draw a little diagram (this is not a game diagram; I'm just asking: pure strategy Nash equilibrium or not, mixed strategy Nash equilibrium or not). The cops and robbers game we have up here is a game that has no pure strategy Nash equilibrium and does have a mixed strategy Nash equilibrium. The way we saw that there is no pure strategy Nash equilibrium is that there are only four cells, so we looked at each cell and asked: is there any cell we can be in where neither player wishes they'd done the opposite? In this cell, given that the cops are on the beat, the robbers wish they'd stayed home; the robbers have regrets.
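In code, the outcome distribution is just the product of the mixing probabilities, as a quick sketch shows:

```python
from fractions import Fraction as F

# Because the two randomizations are independent, the probability of each
# cell is the product of the two players' mixing probabilities.
p, q = F(1, 2), F(4, 11)
outcomes = {
    ("beat", "work"):  p * q,              # 4/22
    ("beat", "home"):  p * (1 - q),        # 7/22
    ("donut", "work"): (1 - p) * q,        # 4/22
    ("donut", "home"): (1 - p) * (1 - q),  # 7/22
}
assert sum(outcomes.values()) == 1         # the four cells exhaust what can happen
```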
Given that the robbers stay home in this cell, the cops have regrets; the other payoff is higher. In this cell, given that the cops are in the donut shop, the robbers wish they'd gone out and done some robbing; and given that the robbers are out there robbing, the cops have regrets if they're in the donut shop. So finding that there's no pure strategy Nash equilibrium is usually just a process of elimination. The other games we looked at have no mixed strategy Nash equilibrium but do have a pure strategy Nash equilibrium; the prisoner's dilemma was our example of that. In a little while I'll do an example of a game that has both. There are games that have neither, but we're not going to cover them in this class; any game that we can write down in this kind of form, where we can list all the strategies for each player, will have one of these types of equilibrium. Okay, I want to say another thing. It's not in the outline, but it's an important point about what I mean by saying that the mixed strategy choices have to be random: they cannot be predictable in any way. Take the cops, the easy ones to think about. The cops are supposed to be on the beat with probability 1/2, and in order for this pattern of random behavior to be in equilibrium, they can't do something like be on the beat every other day, or on the beat in the morning and at the donut shop in the afternoon. If there's any predictable pattern, the robbers are going to figure it out. The cops have to be doing something that truly cannot be anticipated. If you think about other contexts that fit this kind of game, income tax auditing is a pretty obvious example. The IRS would prefer not to have to audit people's returns (it's a big headache, everybody hates it, it consumes a lot of resources), and there are people out there, not you guys, I'm sure, who would prefer to cheat on their taxes rather than do the right thing. That scenario leads to this kind of mixed strategy Nash equilibrium, and the IRS goes to great pains to be truly random in the way it chooses which returns to audit. That randomness can be frustrating: if you're a small business owner of a type that is very unlikely to cheat, for all sorts of reasons (your record, the kind of business you're in, your record-keeping is pretty transparent), they still might audit you anyway, because it is very, very important that they be unpredictable. If they are predictable, the people who want to cheat on their taxes can figure out a way around their strategy. The same story can be told from the robbers' point of view: the robbers can't have a pattern for choosing which four out of every eleven days they'll be active; it has to be something like a coin flip, or a spinner being spun, the way I talked about on Tuesday. It's not in the outline, but I'm going to put it over here (there isn't really a word people would normally use for this): mixed strategy implementation must be truly random, with no predictable patterns.
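In practice that means something like a fresh random draw every time; a minimal sketch (the function names are mine):

```python
import random

# Implementing a mixed strategy has to be genuinely unpredictable: a new
# random draw each time, never an alternating or time-of-day pattern.
def cops_today(p_beat=0.5):
    return "beat" if random.random() < p_beat else "donut shop"

def robbers_today(q_work=4/11):
    return "work" if random.random() < q_work else "home"
```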
Another thing we can say, under the subject of interpreting the mixed strategy equilibrium, concerns the outcomes and the payoffs. For the outcomes we really have to talk about a probability distribution: if someone asks what's going to happen in a game where a mixed strategy Nash equilibrium is being played, you can't give them a short answer; you have to qualify it, this could happen with that probability, et cetera. It's a little easier, though not entirely, to talk about payoffs in the mixed strategy Nash equilibrium: what you talk about are expected payoffs. We don't know what's going to happen, but we can, in advance, put an expected value on the equilibrium payoff for both the cops and the robbers in this example. Let's do that on this board. As I often do, I use a capital U for utility to symbolize a payoff. Now we're looking at the expected utility from the mixed strategy Nash equilibrium. Remember, to find Q, the mixing probability, we looked at the expected utility the cops got from choosing one strategy versus the other; if we were looking at the cops' expected utility of being on the beat (that was the argument of the function on Tuesday), we only had two possibilities to consider. Now we're looking for the expected utility associated with the mixed strategy equilibrium, so we have to consider all four possibilities. That's the only difference: it's still the payoff from each case multiplied by the probability we end up in that case. From the cops' point of view: with probability 4/22 I'm on the beat and the robbers are at work, and that gives me a payoff of 2; I kind of like catching those guys. With probability 7/22 I'm on the beat and there are no robbers to catch; kind of a bummer, pretty boring, wish I had a donut, payoff 1. With probability 4/22, oh no, I'm in the donut shop and all hell is breaking loose in my precinct; that is a payoff of -5 for me. And with probability 7/22 I'm enjoying my donut while the robbers sit at home; that's my favorite payoff, 5. So this is the expected payoff of the cops, and I won't work through all the mechanics here; I worked it through in my office, and you guys can verify it: I got an expected utility of 15/11 just by crunching through. Same for the robbers: the expected utility for the robbers of the mixed strategy Nash equilibrium uses the same probabilities, the same four cells we can end up in, but now with the robbers' payoffs, because I'm looking at it from their point of view. So: 4/22 times -5, I'm out doing my thing and the cops get me; plus 7/22 times 0, the cops are out there but I'm at home; plus 7/22 times 0, I'm at home and the cops are in the donut shop (I don't really care where the cops are when I'm at home); plus 4/22 times my high payoff of 5, when I'm at work and the cops are in the donut shop. That ends up being zero, and you can kind of see why it should be zero: these two payoffs are the same distance from zero and occur with the same probability, so they just balance. So the expected payoffs here are 15/11, a number a little over one, and 0. One place you might use the expected payoffs of a mixed strategy Nash equilibrium is in thinking about whether the equilibrium payoffs are Pareto optimal or not.
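The same numbers fall out of a short calculation; this sketch reuses the cell payoffs and mixing probabilities from above:

```python
from fractions import Fraction as F

# Expected equilibrium payoffs: each player's payoff in every cell, weighted
# by that cell's probability.
p, q = F(1, 2), F(4, 11)
cells = {   # (cops' move, robbers' move): (cops' payoff, robbers' payoff)
    ("beat", "work"):  (2, -5),
    ("beat", "home"):  (1, 0),
    ("donut", "work"): (-5, 5),
    ("donut", "home"): (5, 0),
}
prob = {("beat", "work"): p * q,           ("beat", "home"): p * (1 - q),
        ("donut", "work"): (1 - p) * q,    ("donut", "home"): (1 - p) * (1 - q)}
eu_cops = sum(prob[c] * cells[c][0] for c in cells)
eu_robbers = sum(prob[c] * cells[c][1] for c in cells)
print(eu_cops, eu_robbers)  # 15/11 0
```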
So what do you do if you want to ask that question: is this Pareto efficient? Let's look: is there some certain outcome that can happen in this game that makes one player better off without making the other worse off? Let's go through systematically. Since I can't see the 15/11 from over there and I'll forget it in two seconds, I'm going to copy the expected payoffs over here so I can see them. Does this outcome Pareto dominate the expected payoffs from the mixed strategy? Is one player better off and neither player worse off? No, right, you're shaking your head; who's worse off? The robbers are worse off, exactly. The hard thing about this is seeing that it's not that hard. So this cell does not Pareto dominate it. What about this cell? Does it Pareto dominate? No, the cops are worse off. Now, Elaine's very good question: am I using the probabilities when I make this comparison? No. When I'm comparing the outcomes here with the expected payoffs, what I'm asking is whether the expected payoffs, what we get from these random strategies in the mixed strategy equilibrium, are Pareto dominated by any of these certain outcomes: is there something that could happen that would be better for one player and not worse for the other? So I compare the cops' expected payoff to their certain payoff in a cell (okay, that's higher, that looks good) and the robbers' expected payoff to their certain payoff (oh, that's lower), so this cell doesn't Pareto dominate. Same story here: the cops are worse off. But what have we got here? This is a Pareto improvement, right. So what can we conclude? It's an important enough conclusion that I'm switching colors: the mixed strategy Nash equilibrium is Pareto dominated by (I'll use words here; arrows would be ambiguous) cops in the donut shop, robbers at home. How am I seeing that? In this happy world the robbers are staying home and the cops can relax; the robbers are no worse off (their payoff is no worse than in the risky world), and the cops are much better off, getting a very high payoff. I don't want to say that's always a feature of mixed strategy Nash equilibria, but you might imagine that this cops and robbers game, like the tax example, and like the example you'll get in your homework next week, is a metaphor for a lot of monitoring situations, where one player is trying to get away with something that the other player is trying to monitor them for. Those games often have mixed strategy Nash equilibria, and those equilibria are often Pareto dominated by one of the outcomes. So this game, like the prisoner's dilemma, has that kind of bothersome quality: the only Nash equilibrium is one in mixed strategies, and it is Pareto dominated by something that is not in equilibrium. That cell is not an equilibrium because, if the cops are in the donut shop, the robbers are going to have regrets that they weren't out there committing crime. That's a worthwhile substantive point, but the other point I'm making is a process point: when you're asked whether the outcome in a game with a mixed strategy equilibrium is Pareto optimal, what you do is compare the expected payoffs from the mixed strategy equilibrium, which usually puts positive probability on all four outcomes, with the certain outcome in each cell.
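That comparison is mechanical enough to sketch in a few lines; here the expected payoffs (15/11, 0) are checked against each certain cell:

```python
from fractions import Fraction as F

# Does any certain cell make one player better off and neither worse off
# than the expected equilibrium payoffs?
eu = (F(15, 11), F(0))
cells = [(2, -5), (1, 0), (-5, 5), (5, 0)]   # (cops, robbers) in each cell
for cell in cells:
    better = any(cell[i] > eu[i] for i in range(2))
    not_worse = all(cell[i] >= eu[i] for i in range(2))
    if better and not_worse:
        print(cell, "Pareto dominates the mixed equilibrium payoffs")
# Only (5, 0), donut shop and home, dominates: cops better off, robbers no worse.
```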
And that's what we find here. Yes? Rose asks: can you determine which of the boxes it's going to be in? You can't, and that is the key, and sometimes frustrating, feature of mixed strategy equilibria. If somebody asks you, smart UCLA grad, "this is the situation going on in my city, I know these are the payoffs, what's going to happen?", you're going to say: anything can happen. That's a feature of these situations where the equilibrium strategy for both players is to be unpredictable: they can't predict each other's choices, and we can't predict their choices with game theory. We can predict what pattern of choices, what probabilities governing the random choices, will be a self-reinforcing pattern, but that's all we can do. That actually reminds me of another point I wanted to make, about ex post mistakes; we're going to see the same thing we saw in our analysis of sequential games with nature nodes. Extra board here. In a mixed strategy Nash equilibrium, no player will have ex ante regrets given the other player's choice. I'm putting that in parentheses because it's just what a Nash equilibrium means: the idea of Nash equilibrium, every time, in any context, is to look at one player's choice holding the other player's choice constant, and then flip the roles. Given that the robbers are going to work with probability 4/11, I, the cops, cannot do any better than being on the beat with probability 1/2. It turns out I'm doing no better and no worse by being on the beat with probability 1/2 than I would with probability 1/4, or by always being on the beat; if the robbers are going to work with probability 4/11, I'm indifferent among my pure strategies, but I can't do better. So no player will have ex ante regrets. The flip side, though, is that no matter how the random choices work out (the cops' random choice to be on the beat, the robbers' random choice to go to work), one player will end up with ex post regrets. Given that you robbers out there are going to work with probability 4/11, I, the cops, wasn't wrong to flip that fair coin between the donut shop and the beat. But we can certainly end up in cells like this one, where I have some regrets, or this one here, where I'm regretting being in the donut shop while you're out there robbing people, or here, where I regret being on the beat when you're not doing anything and there's nothing for me to catch. So in this game there are always going to be ex post regrets, however things work out; but because the players are playing the Nash equilibrium, by definition they're not making an ex ante mistake. That's the way it was in the sequential games, where the mistakes had to do with a "choice" (choice in scare quotes) by nature. Here the choice is still truly random; the robbers are picking their strategy randomly, so it's a choice by nature being implemented by the robbers. But the same fact holds: if we end up in this cell, the cops have, to put it in slightly different language,
made an ex post mistake here: crime happened, and we were in the donut shop; that's awful. But if the choice to be in the donut shop was made with 50/50 probability, the cops weren't wrong; they weren't doing something irrational given their incentives; they were just unlucky. And over here, look at it from the robbers' point of view: here's a cell where the robbers have ex post regrets, just slightly different language for the same point. If it works out that my spinner (I'm the robbers now) told me to stay home that day, and the cops' spinner randomly told them to go to the donut shop, I'm going to think it was a mistake. But it's only an ex post mistake, only a mistake after I know what the cops have done. Before they knew how things would work out, they were making their best choice, or at least weren't making a worse one. Let me add one qualification over here; I usually switch colors for qualifications. The first part, no ex ante regrets, is true for any game with a mixed strategy Nash equilibrium. The second part, that there will always be ex post regrets, is only true when there is no pure strategy Nash equilibrium. If we're in a game with both types of equilibria, even when we're playing the mixed strategy Nash equilibrium, it's possible to end up with ex post regrets or without them. And that seems like a good place to start looking at an example of a game that has both types of equilibrium. Yes? Stephanie asks: in game theory in general, are ex ante regrets ever possible? Not by strategic players; I'd say that's the fair answer. In life, people do make ex ante mistakes, and even if you think people making ex ante mistakes is a big part of life, game theory can be helpful in identifying what those ex ante mistakes are. But in our games, with people playing their Nash equilibrium strategies, there will never be an ex ante mistake. The point I've been emphasizing is that even with these super-rational people never making that kind of ex ante mistake, here's a situation where somebody is always going to have made an ex post mistake; somebody is always going to regret what they did, even though it was the best choice given the information they had when they made the decision. Other questions on cops and robbers before I go to the next game? I'll do my usual thing of erasing from the outside in, in case questions develop. The game I'm going to look at next is sort of a coordination game, I guess really a discoordination game: the chicken game. It's a pretty close relative of battle of the sexes, and when we get it up you'll see how it's the same in a small way and how it's different. The most innocuous version of chicken: there are two kids riding their bikes toward each other, and whoever swerves first is the chicken. If you swerve first, ha ha, I feel great, and you're going, ugh, I chickened out, what a bummer. So the choices in the chicken game are swerve and don't swerve. Just as cops and robbers can be a metaphor for all sorts of monitoring situations, and battle of the sexes for lots of coordination decisions (for example, using primaries to nominate a presidential candidate), chicken can be a metaphor for crisis bargaining situations:
countries rattling their sabers, trying to look as threatening as possible to get some concessions from their neighbors, both of them trying to do the same thing. So: swerve and don't swerve; this is chicken. All right, let's get some payoffs in there. If I, player A, don't swerve and you do: ha ha, you're a chicken. If you don't swerve and I do: yeah, I'm a chicken, it's true. A lot of the time what will happen is we'll both swerve; everybody smirks, uh-huh, both swerved, let's do it again. Sometimes, though, what happens is we're both really tough, neither of us swerves, and that's really bad: crash, we cry, we call our dads. Elaine, yes, thank you; Elaine points out I've dropped a negative sign. I'm getting so into my role-playing here, acting out payoffs for you guys, that I'm not being careful about them. So in this cell the one who swerves, the row player, gets the negative payoff; not swerving when the other player swerves is what makes you feel good in this game, and both not swerving is bad. In the way this would be applied in an IR scenario, A and B could be India and Pakistan, who over the last decade have been playing this game with each other repeatedly, and every once in a while everybody worries that they're getting perilously close to that outcome: are they really going to start a nuclear war in South Asia? We're doing the somewhat more benign example, where the worst outcome is probably a concussion. All right, this game does have pure strategy Nash equilibria; it's got one right down here. You're player B: given that I didn't swerve, you're glad that you did. It's embarrassing that I'm calling you a chicken right now, you don't like that, but it would be way, way worse to fall and wreck your bike. And I'm player A: given that you swerved, I'm really glad that I didn't; otherwise it would have just been one of those boring do-overs. So neither player has regrets here. Same story up here; you can completely switch the roles. Given that I swerved, you're glad that you didn't, because you'd rather be able to call me a chicken than not; given that you didn't swerve, I'm glad I did, because I'd rather back down and be embarrassed than have us crash. So this game has two pure strategy Nash equilibria. When I said a minute ago that it's a close relative of battle of the sexes, what makes it similar is that there are two Nash equilibria, and one is better for one player while the other is better for the other player. What's different is that in battle of the sexes the Nash equilibria happen when the players do the same thing; you're in equilibrium when you truly coordinate. Chicken, if you want to make a fine distinction, is a discoordination game: we're in equilibrium when we do different things, and out of equilibrium when we do the same thing. These two cells are not equilibria. In this cell, given that you swerved, I wish that I hadn't (it would have been so fun), and given that I swerved, you wish that you hadn't. Here, given that you didn't swerve, I wish I had, and given that I didn't swerve, you wish you had. All right, I'm kind of giving away the punchline here, but let's verify whether there is a mixed strategy equilibrium, going through the same process that I did on Tuesday.
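Before turning to the mixed strategy, the same regret check used for cops and robbers confirms the two pure strategy equilibria just described. The payoff numbers, (0,0), (-1,1), (1,-1), (-10,-10), follow from the indifference algebra worked out below:

```python
# rows = player A, columns = player B; entries = (A's payoff, B's payoff).
moves = ("swerve", "dont")
u = {("swerve", "swerve"): (0, 0),    ("swerve", "dont"): (-1, 1),
     ("dont", "swerve"):   (1, -1),   ("dont", "dont"):   (-10, -10)}
for a in moves:
    for b in moves:
        alt_a = [m for m in moves if m != a][0]
        alt_b = [m for m in moves if m != b][0]
        # Equilibrium cell: neither player gains by switching unilaterally.
        if u[(alt_a, b)][0] <= u[(a, b)][0] and u[(a, alt_b)][1] <= u[(a, b)][1]:
            print("pure strategy equilibrium:", (a, b))
# Prints the two off-diagonal cells: ('swerve', 'dont') and ('dont', 'swerve').
```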
What am I looking for when I'm looking for a mixed strategy equilibrium? The probability that A swerves and the probability that B swerves. So let Q equal the probability that A swerves. Now we go back to the little recipe from Tuesday's outline for finding the equilibrium value of this probability. It's an equilibrium value if it's the probability that makes B indifferent between swerving and not swerving: one player's equilibrium probability is the probability that makes the other player indifferent between her pure strategies. So let's figure that out. The expected utility to B of swerving is Q, the probability that A swerves (that is, that we're in this row), times 0, plus (1 - Q), the probability that A doesn't swerve, times -1. That reduces to Q - 1. The expected utility to B of not swerving (now I'm comparing my payoffs in this column, using the probabilities that A plays each of her two strategies) is Q times 1 plus (1 - Q) times -10. Making that a little neater: I get Q, I get -10, and positive 10Q, so that sounds like 11Q - 10. Right? Watch me on this stuff. Now I take these two expressions, what I expect to get from one strategy and what I expect to get from the other as a function of A's mixing probability, and I find the one probability A can mix with that will actually make me indifferent between my pure strategies. Again, the reason I have to be indifferent is that if the expected utility from swerving is the higher number, I'm going to swerve, and if the other is higher, I'm not. The only way I'm going to be willing to make the choice truly randomly is if what I expect to get from each strategy is exactly the same. So let's find that Q: it's the value for which Q - 1 = 11Q - 10. Bringing that around: 9 = 10Q, so Q = 9/10. So there's half of my mixed strategy Nash equilibrium. Now I want to draw your attention back to what happened when we went through this process with the prisoner's dilemma on Tuesday: we got a probability that was, I think, negative. If you get a probability here that is less than or equal to 0 or greater than or equal to 1, it's telling you there is no mixed strategy Nash equilibrium: there is no reasonable probability that will make your opponent indifferent between their pure strategies. And of course that makes sense in the prisoner's dilemma: when you've got a dominant strategy, there's nothing anyone can do to make that strategy not dominant; it's always going to be better than the other one. So when you get an unreasonable answer here (yes, it's always good to check your algebra, and I bet you guys are better with algebra than I frequently am, so check it anyway), a negative number or a number greater than one is not necessarily a mistake. It's a message saying: this is a game that doesn't have a mixed strategy equilibrium, and that's fine. So Q in equilibrium is 9/10.
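As a sanity check on the algebra, both sides of B's indifference condition evaluate to -1/10 at Q = 9/10:

```python
from fractions import Fraction as F

# B's indifference condition as derived above:
#   EU_B(swerve)     = Q*0 + (1-Q)*(-1) = Q - 1
#   EU_B(not swerve) = Q*1 + (1-Q)*(-10) = 11Q - 10
# Setting them equal: Q - 1 = 11Q - 10  ->  Q* = 9/10.
q_star = F(9, 10)
assert q_star - 1 == 11 * q_star - 10   # both sides equal -1/10
```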
Sometimes what people do — I do this, and I think it's a good practice — is to use Q by itself for the probability that A swerves, and put a little star up here to denote the equilibrium value. We started off saying Q is any old probability, but in fact there's only one Q for which this is true. So we set it up: this expression is true for any old Q, and that one is true for any old Q, but once we set them equal to each other, now we're looking for Q-star — Q-star is the one that makes my expected utility from swerving exactly equal to my expected utility from not swerving. When I'm going through this kind of analysis, I find it helpful to keep a variable that is truly a variable separate from the solution that fits a particular question, and this is the solution that fits the question of what the mixed strategy Nash equilibrium is. In cops-and-robbers, what we did was find q, and then we turned around and found p — p in that context was the other player's probability. I switched q and p here relative to cops and robbers; remember, I said that's okay, you can use whatever variable you want for either player, just make sure to keep straight which one is which. In this game, though, because the strategies and the payoffs are perfectly symmetric, if 9/10 is the probability of A swerving that makes B indifferent, guess what: 9/10 is going to be the probability of B swerving that makes A indifferent. You can work it through — and it might not be a bad idea to work it through when you're recopying your notes or something like that — but you will indeed find that the probability that B swerves in the mixed strategy equilibrium is indeed 9/10. The reason I don't need to go through the whole process again is that if I started to do it — say I started to do it here, and I'm going to leave that up there and do it now for A — the expected utility to A of swerving is going to be a function of B's probability P. So now I'm A, and I'm asking: what is my expected payoff if I swerve? Well, it's P, the probability B swerves, times 0, plus (1 − P), the probability that B doesn't swerve, times minus 1. See, it's going to be exactly the same. And for not swerving, down here, it's going to be P times 1 plus (1 − P) times negative 10. I wish I hadn't erased B's expected utilities, and I wish I'd labeled these as the expected utilities of A, because that's what they should be. What I've got here in red are A's expected utilities as a function of P, and they look exactly like B's expected utilities as a function of Q. I'll go ahead and put B's expected utility of not swerving back up here: it is Q times 1 plus (1 − Q) times negative 10. So if you look at these two pairs, the value of P that satisfies the red ones is going to be the same as the value of Q that satisfies the blue ones. It's never wrong to work it out both ways — and it honestly doesn't take that long either — but it's also fine to look at the game and say that when the strategies and payoffs are completely symmetric, the reasoning is going to be completely symmetric too. Since we never get any choices in game theory that come from anything other than the payoffs, the equilibrium mixing probabilities have to be the same when the payoffs are the same for both players. Okay, so what do we think is going to happen in chicken? I'm going to swerve with probability 9/10, you're going to swerve with probability 9/10; nine tenths times nine tenths is 81 hundredths — I'll write it as a percent — so 81 percent of the time it's going to be a do-over.
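Multiplying out all four joint outcome probabilities takes a couple of lines; a sketch of the numbers the narration walks through next:

p = q = 9 / 10   # equilibrium swerve probabilities for A and B

outcomes = {
    ("A swerves", "B swerves"): p * q,                # the "do-over"
    ("A swerves", "B straight"): p * (1 - q),
    ("A straight", "B swerves"): (1 - p) * q,
    ("A straight", "B straight"): (1 - p) * (1 - q),  # the crash
}
for names, prob in outcomes.items():
    print(names, f"{prob:.0%}")   # 81%, 9%, 9%, 1%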
You have to really like chicken to be getting that many dud results. But the thrill of it is in that one tenth of the time when I don't swerve — and you do — and that is nine percent: one tenth times nine tenths, I'm just multiplying the probabilities again. Nine percent of the time, ha ha, you're a chicken; and nine percent of the time it's me who's going to feel humiliated. And then there's that horrible one percent of the time when we crash. Here's something I'm not going to work out here, but it's not a bad problem — some of you have been coming to me asking for more problems, and as I did before the midterm, I'll give you more problems to do out of the book — here's something for you to do either over the weekend while you're processing this, or later when we review for the final: evaluate whether the mixed strategy Nash equilibrium in chicken is Pareto efficient. I won't tell you now, but I'll try to remember to include that in the answer keys that I put up for the final study sheets — and remind me if I don't. Other thoughts on chicken? Yes — Lilya is asking, I think, what we think is focal in this game. We've got three equilibria here: two pure strategy Nash equilibria and one mixed strategy Nash equilibrium. And I'm going to give an answer somewhat similar to the one I gave when this question arose with assurance — remember assurance, the game that's sort of like the prisoner's dilemma except that when one player cooperates, the other one wants to too. I said that in some contexts you might think the fact that one equilibrium is Pareto efficient would by itself make that equilibrium focal. Sometimes it does, but not always — I used the hockey helmets example as one where the Pareto-inferior equilibrium was focal, and I think the reason people didn't wear those hockey helmets for so long was just that that's how they'd always done it. One way things become focal is through the force of history, because focality is based on our shared expectations. That same kind of reasoning applies to these games with both types of equilibria. Sometimes it seems like the mixed strategy equilibria are so weird and so random that they shouldn't really be focal — that we should end up at one of the pure strategy Nash equilibria. Sometimes that is true, but not always. Chicken is not my idea of a particularly good sport, but lots of sports — I mean a sport that somebody plays for fun: soccer, football, where it's one team against another — those games, if you use game theory on them, will only have mixed strategy Nash equilibria. And if you think about it, that sort of makes sense: what's the fun in watching something where you know it's always going to turn out the same way? So those are examples of games where the mixed strategy Nash equilibrium is focal. And actually, I think that in chicken — literally chicken, the little boys riding their bikes toward each other — the mixed strategy Nash equilibrium is what they're playing; otherwise it's no fun. Yes — if one of the pure equilibria is focal because I'm well known to be a maniac and you're a reasonable person, and we both know we're going to end up here, what's the fun of playing? Or similarly if the roles are reversed. In the crisis bargaining situation, if you think about that kind of situation being played between all sorts of neighboring pairs — most neighboring countries don't get into these crisis escalations
— the "I'm stationing troops on the border, you're performing a nuclear test" kind of contest. Most countries don't do this; for most pairs of countries, one strategy is dominant or the other one is, and we're not having these contests. But occasionally you'll have one where it's not clear, and the mixed strategy equilibrium is the focal one. Good question. Other questions on this? Yes — Stephanie is asking: when I ask whether the mixed strategy Nash equilibrium is Pareto efficient, what am I asking for? Let me remind you. What I'm asking you to do is to calculate both players' expected payoffs from the mixed strategy Nash equilibrium — you're going to use the probabilities here to calculate the expected payoffs — and then to take those expected payoffs and compare them to the pairs of payoffs in the cells. So, to revisit a question Elaine asked earlier: you do use those probabilities in calculating the expected payoffs, and then you're done with them. Once they're folded into the expected payoffs, you're comparing the expected payoffs to the certain payoffs that could happen in any of the four cells. There is one other set of things I want to say about mixed strategy Nash equilibria, and another set of things I really want to say about cops and robbers, but I think I'm going to save those for Tuesday. So do your homework — if you didn't get the homework on Tuesday, it's up on the website — and I'll see you next week.
Political_Science_30_Politics_and_Strategy_UCLA
Political_Science_30_Politics_and_Strategy_Lec_19_UCLA.txt
Okay, guys. Today is going to be a review session. As I said, the second thing I'm going to do is go through a practice problem — Monica emailed me and suggested that that would be helpful, and I think it would be helpful to a lot of people. What will make it particularly helpful is if, while I send it around and while I'm getting the outline up and getting myself organized, you start setting it up and start working on it yourself. Pretend that you're in the exam; see how quickly you can start translating this into a game. That'll really help us make the best use of this. Thanks, June. Mm-hmm — I think I know what you're asking; I'll say a little bit more in a second. Are there extras of the problem? I didn't grab one for myself — there must be. Oh, terrific, right here. Thank you. Okay, I'm seeing enough people looking up that I'm going to go ahead and start talking; if you think your time is best spent cranking away on that problem, keep working, that's fine. I will repeat some things right now, and say some new things as well, about the test format and about logistics. So, one question I had a second ago — Elaine asks: is the format of the test going to be like this question? Yes, and specifically it'll be like this question in the sense that it'll be a scenario with a set of questions, and then changes to the scenario with other sets of questions. There are going to be four questions like that on the exam — four stories that you will have to translate into a game: say some things about the baseline version of the game, make some changes, see what happens, analyze the new game, and interpret your answers. So that's generally what you're going to be doing. What will be different on the final from the midterm is that you don't need to bring blue books. The final exam sheets are not going to look like this, with all the words crammed together; there's going to be space for you to do your work right on the exam sheet. Yes — if you find yourself during the exam having made a mistake, and you're writing in pen, we'll have some extra copies. It's also fine if you want to bring your own sheets of paper; if you need to attach supplemental sheets, that's not a problem. The idea of giving you space to do the exam right on the page is mainly just to simplify things for us. The other logistical point that I want to re-emphasize is the rooms. As I said on Tuesday, if you are in Adam's or Emily's sections, you'll take the exam here — I said you guys would be here, and June's and Flory's sections would be in the other room. Here it says Haines 118, but if you are in June's or Flory's sections, I'd like you to go to Perloff 1102, so we'll have more space. It's not going to be as crowded as it was during the midterm, and that's good for everybody. The exam starts at 3 p.m. — be there on time; it starts at 3 p.m. whether you're in the room or not — and ends at 6 p.m.
No books or notes. I think I said this before the midterm, but I'm not sure, and even if I did, it probably bears repeating: something I recommend you do late in your studying process is make yourself a summary sheet. The best way to do this, I think, is to limit yourself to one page — one eight-and-a-half-by-eleven page, or whatever the normal-sized page is — on which you put key terms, key processes, key concepts. One point of having that kind of review sheet is that it's something you can be looking at at 2:55 while you're waiting for me to get here with the exam — so, for your own review. But the real value of that kind of study sheet does not come from looking at it after it's done; it comes from you making it: you deciding what is important enough to put on the sheet and what can be left off, you synthesizing things so that all of the techniques, all the steps we've learned this quarter, get organized in your own mind. So don't think that just because I gave you a study sheet with key terms on it, you don't need to do something like that on your own. The process of doing it yourself is really, really valuable. The other logistical thing: I will have the answers to the final exam study sheet up sometime tomorrow — actually, if I can get to it today, I'll put them up today, but tomorrow is the likely time. The answers to homework six: same story. By the end of the day tomorrow I'll have posted the lecture notes for this week, the answers for homework six, and the answers for the study sheet. Any questions on logistics? No? We're organized there. So, the two main things I want to do today: I want to re-emphasize, and I hope pull together a little bit, some questions that all fall under the topic of multiple equilibria — I don't think that's going to take too long — and then I'm going to go over that sample problem. So first of all, let me pick up and re-emphasize the point I was making at the end of lecture on Tuesday, and let me summarize that point in terms of a basic fact: there are many equilibria in the repeated prisoner's dilemma — many, many, more than we can count. We specifically focused on mutual grim trigger, which is an equilibrium sometimes and not others. Let me emphasize that: sometimes and not others. And the answer to when, among those sometimes, mutual grim trigger is an equilibrium is that it depends on the players' discount factors. Depending on the particular payoffs in the particular repeated prisoner's dilemma, there's going to be some minimal discount factor — some minimal level of patience — that both players will need to have in order for mutual grim trigger to be an equilibrium. Another equilibrium in the repeated prisoner's dilemma is unconditional defection. This is the only equilibrium in the single-shot prisoner's dilemma, and playing the dominant strategy in the stage game is always — no "sometimes" here — always a Nash equilibrium.
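That "minimal discount factor" condition can be made concrete with a small sketch. The stage-game labels T, R, P (temptation, reward, punishment) and the specific numbers below are illustrative assumptions, not values from the lecture; the inequality itself is the standard grim-trigger condition.

# Grim trigger sketch with generic prisoner's dilemma payoffs T > R > P.
def grim_trigger_threshold(T, R, P):
    # Cooperating forever pays R/(1-d); defecting once pays T now and
    # P forever after: T + d*P/(1-d). Mutual grim trigger is an
    # equilibrium when R/(1-d) >= T + d*P/(1-d), i.e. d >= (T-R)/(T-P).
    return (T - R) / (T - P)

# With the textbook-style payoffs T=5, R=3, P=1 (my example numbers),
# both players need a discount factor of at least 1/2:
print(grim_trigger_threshold(T=5, R=3, P=1))  # 0.5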
There are other equilibria that we didn't talk about, and that you're not going to be responsible for finding on the exam, but I'll give you one example just to help you wrap your brain around the general point about what is and is not an equilibrium here. Another famous one, which you may have encountered in other classes, is tit for tat. Tit for tat is a different strategy than grim trigger; I'll put it up here. It says: start by cooperating, then do whatever your partner did last time. So there's a different strategy that, if both players play it, will be a Nash equilibrium — sometimes — in the repeated prisoner's dilemma. Just as with grim trigger, this is only sometimes an equilibrium, and again it depends on the players' discount factors. Something we didn't cover this year, but that you may have read about in the book, is that the conditions for tit for tat to be an equilibrium are more strict than for grim trigger: if grim trigger is an equilibrium, tit for tat might be too, but it's possible for grim trigger to be an equilibrium and tit for tat not to be. We didn't cover that, but it would be the same Nash equilibrium process. Another thing to observe is that tit for tat, if it's being played in equilibrium, is going to look just like grim trigger. If either of these strategies is being played, all you're going to see is people cooperating; you're never going to see what would happen if somebody defected, because it won't happen in equilibrium. So you would never be able to tell the difference — and the same goes for the many, many other strategies that can be equilibria in a repeated prisoner's dilemma; there's lots of middle ground between these two. What's interesting about tit for tat is that it's a strategy where, off the equilibrium path, it's possible to make amends. With tit for tat, say I defect: you'll defect against me next time, but then if I cooperate, you'll start cooperating with me again. So tit for tat is a strategy that has lots of forgiveness — all you have to do is cooperate and you'll get back on the equilibrium path. What makes tit for tat seem more appealing to many people, and what makes it a quote-unquote better outcome, is if we step back from the prisoner's dilemma we're thinking about right now and think about a more realistic prisoner's dilemma where people can make mistakes. I meant to cooperate, but whoops — I didn't understand what cooperating meant to you, and I defected. Or something random happened, and you think I defected when really I cooperated. If those kinds of misunderstandings occur in a world where people are playing mutual grim trigger, it's going to be bad, because sooner or later a mistake will happen and we'll never be able to get back to the cooperating phase of the game. But if what we're playing is tit for tat, and we're playing a prisoner's dilemma that allows for mistakes, allows for random error, then sometimes we will actually see people defecting by accident — or defections that are misinterpretations; you can think of either of those things getting us out of cooperation — but we'll be able to get back to the cooperative phase. I'm not going to drift too far into the interesting facts about tit for tat — there are many of them, but I think I'm going to draw the line there.
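To see the forgiveness point in miniature, here is a toy Python sketch — my own setup, not course code. Player 1 cooperates except for one accidental defection; player 2 runs either grim trigger or tit for tat. Against grim trigger the mistake is fatal; against tit for tat there is one round of punishment and then cooperation resumes.

# C = cooperate, D = defect. Player 2's move depends only on
# player 1's history of moves.
def partner_move(strategy, p1_history):
    if not p1_history:
        return "C"
    if strategy == "grim":
        return "C" if "D" not in p1_history else "D"   # never forgives
    if strategy == "tft":
        return p1_history[-1]                          # copies last move

def run(strategy, rounds=8, mistake=2):
    p1, p2 = [], []
    for t in range(rounds):
        p2.append(partner_move(strategy, p1))
        p1.append("D" if t == mistake else "C")        # one accident
    return "".join(p1), "".join(p2)

print(run("grim"))  # ('CCDCCCCC', 'CCCDDDDD') -- cooperation never returns
print(run("tft"))   # ('CCDCCCCC', 'CCCDCCCC') -- one punishment, then amends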
So my general point — the reason I'm emphasizing this so much — is that we need to bear in mind that there are multiple equilibria in the repeated prisoner's dilemma, because we can never say for sure that people are going to be able to cooperate. Even when the discount factors are high — high enough to support grim trigger, above the threshold value — it's still possible for another equilibrium to be focal. We can be playing the repeated prisoner's dilemma, we can both be super-duper patient, patient enough that we could be cooperating; but if I expect you to unconditionally defect, and you expect me to unconditionally defect, that's going to be the equilibrium. It's mutually reinforcing: if you are playing that strategy, I don't do any better by deviating from mine. So when we're solving those repeated prisoner's dilemma problems and asking when cooperation can occur, we're asking when it is possible. We're never able to say "I guarantee it to you," because there are multiple equilibria in this game, and the stage-game equilibrium — unconditional defection, the Pareto-inefficient outcome — is always a possibility. So that's a very important caveat to interpreting the happy side of grim trigger: it's never guaranteed. All right, two smaller points here. Multiple equilibria in the repeated prisoner's dilemma is the same idea as multiple equilibria in the single-shot games: it means there's more than one pair of strategies that are mutually reinforcing — more than one pair of strategies where, if you're playing your strategy, I don't do better than the strategy I have, and vice versa. In single-shot games, just to remind you, we looked at a number of games that had multiple equilibria. One was assurance. Assurance was the game that was like the prisoner's dilemma in that it had a defect-defect-like equilibrium, but different from the prisoner's dilemma in that, even played just once, both players cooperating is also an equilibrium. So that was a single-shot game in which there were two equilibria, one Pareto efficient and the other Pareto inefficient. Another game we talked about at different times in the course that had multiple equilibria was battle of the sexes, where there were two equilibria and one player preferred one of them while the other player preferred the other. So in assurance, both players prefer the same Nash equilibrium — given a choice, both of us would choose to be at the same equilibrium — while in battle of the sexes, each player prefers a different equilibrium. In the straightforward battle of the sexes story, the girl prefers the equilibrium where they both go to the ballet, and the guy prefers the equilibrium where they both go to the ball game; that's what makes it a battle of the sexes. Now, don't over-interpret what I'm saying about different players preferring different Nash equilibria. If I'm the girl and I think the ball game equilibrium is focal — that's what I think is going to happen — I'm going to go to the ball game. Yes, I would have preferred that we were in the world where both of us expected to go to the ballet; I would like that better. But I know that I can't change it; I know that the ball game equilibrium is focal. The other game that's very closely related to battle of the sexes is chicken. Chicken has the same feature, that each player prefers a different one of the two Nash equilibria: I prefer the equilibrium where you swerve, because I get to be the winner and tease you about it, and you prefer the other one, because you want to tease me about being a chicken. The only difference between chicken and battle of the sexes is that in battle of the sexes the partners want to do the same thing — they want to end up at the same place — and in chicken they want to do different things; in chicken, we're out of equilibrium if we both swerve or we both go straight. The last thing I want to repeat is the concept of focality and focal points. This is a weird concept, because when we start talking
about focal points, we are going outside the world of game theory. When we say one of these equilibria is focal, what we're saying is that maybe the players do know that they both expect to go to the ball game — it's such a beautiful day, it's just got to be focal to go to the ball game — but we wouldn't know that from game theory. The game that you write down, what you have on your page, what you learn in PS 30, is not going to tell you what is focal. It gets you up to the point where you can figure out what the potential equilibria are and whether you need to think about focality. But to know what is focal, the two things we turn to outside of game theory — outside the world of mathematical social science — are what we know about the history of the players and what we know about their culture. So if something is a focal point, it is the equilibrium — I'm just going to erase here to give myself a little more space — among several that all players consider likely, obvious: it's what they expect. We don't always have focal points; you could be in a situation of multiple equilibria where nothing is focal. But in many cases, going outside of game theory in a particular context can help us identify which of several equilibria would be focal. All right, yes — Elaine asks whether on the exam you're going to have to talk about focality. You guys know my exams well enough now to know I'm not going to say "define focality, two points," that kind of thing. You are going to need to know the concept: in games that have multiple equilibria, you're going to need to know what you can and cannot predict. You'll lose points on the exam if, in the case of the repeated prisoner's dilemma, you're given a setup where the discount factors are high and grim trigger is an equilibrium, and you say something like "all right, it's guaranteed these guys are going to cooperate." That's wrong; you need to understand that it's not guaranteed. So it's something you need to know how to apply — not so much to compute with as to understand the limits of what you can say with game theory. The other thing: it would be unreasonable for me to ask you on the exam, in a particular question, what you would expect to be focal. If this were a class about elections or crisis bargaining or something like that — where we'd learned a lot about the culture and history of a particular country, a particular part of the world, a particular phenomenon — then it would be fair game to ask you to use that kind of knowledge to talk about what's focal. But we haven't done that in this class. All right, other questions on multiple equilibria? So what I want to do now is turn to that sample problem and start setting it up. I put together this problem because each of the scenarios in it asks you to change the game in the way that I think many of you are having trouble with. And I want to preface this by reminding you of some things I said really early on, when we first started doing game trees — it might have even been the first week of the course — when I talked about the steps in setting up a game. There are three things that I was really emphasizing — three things that, when you're reading a scenario and it's kind of all over the place, with some relevant information mixed in with some irrelevant stuff, you're reading it for and looking for. What you're reading for are the strategies, the preferences, and
the outcomes. The strategies are what each player can do — what their choices are. Preferences are what the players care about: what increases their utility, what decreases their utility, what they regard as a good thing, and what they regard as a bad thing. We obviously need these two things when we're setting up a game, because they're parts of the game: the strategies are either the labels on the branches, if it's a sequential game, or the labels on the rows and columns, if it's a simultaneous game, and the preferences are captured by the payoffs. What's less obvious, but equally important, are the outcomes. I really want to remind you how important it is to take time to think about the outcomes. The outcomes are what link the choices of the players to the things they care about; if you want to be precise, I would say the outcomes are the things the players care about. What part of a scenario is an outcome? If either player cares about it, then it's part of the outcomes. The preferences are the values assigned to those outcomes. What makes outcomes tricky, in many of our questions and certainly in reality, is that they often have multiple components. And that's just what it sounds like: players care about more than one thing — that's what makes strategic situations interesting. You care about winning the election, but you don't like fundraising. In this scenario, you care about blocking the — what was it, the hazardous waste center? yes, the waste treatment facility — but you don't want to be the one that does the work of fighting it. So outcomes often have multiple components. How many components are there to an outcome? As many as the players care about. If either player cares about something, it's relevant for the game, because it's going to affect their payoffs; it's going to affect what strategies they're willing to play in different situations. The whole idea behind putting numbers on payoffs — using numbers to represent how much people like something, even though it's very unrealistic — is that this is what allows us to compare the multiple components of a situation. Those utility numbers we've been using, synonymous with payoffs, are what allow us to compare apples to oranges — touching base with the problems I started with the very first week of school — and to balance the good that comes from a choice against the bad. We use numbers to represent how people balance the competing parts of an outcome. All right, so that's my little speech about how important outcomes are. I'm making the speech because many of you have been saying things to me like: the hardest part of doing the homeworks is translating the story into a game; it just seems so confusing. Every week I get people who are just so sure that there's some critical part of the game that's not in the scenario —
no, no, no: it's there; you've just got to look for it. And you're right, this is the hardest part. Translating the story in English into a game is one of the hardest parts; the other hard part is translating the game's outcome back into something about the story. So, translating the world into game theory, and translating game theory back into an observation about the world — that's the hard part. All the little calculations you're pretty comfortable with right now — how to find a mixed strategy equilibrium, what it means to be a pure strategy equilibrium, how to solve a game with rollback, how to calculate expected values, how to find the equilibrium conditions in a repeated game — all of those mechanical things are very valuable, and it's important that you learned them, but this is also essential: those calculation techniques don't get you anywhere unless you're able to translate the story into a game. The way to do this, though, is the same as the way we've approached the calculation steps: do it one step at a time, break it down, be systematic. So what I'm going to do now is work through at least some of that sample problem, and I'm going to try to be as systematic and as explicit as I can about what the outcomes are in the game and how those outcomes translate into preferences. With that, I'm going to start on scenario 2a. What I'm going to do here is actually read it — you follow along; I won't read it out loud, but as I'm scanning it, I'll try to say out loud what I would be doing if I were taking the exam. So I'm reading the scenario — environmental coalition, blah blah blah. The first thing I'm looking for is: who are the players? I'm reading along, and by the second sentence I see we've got two players: the PTA and the Chamber of Commerce. Good. The next things I'm looking for are: what are the strategies — what can they do — and what do they care about? So I'm reading along and I see this: if the facility is blocked, the utility of both groups will increase. So they care about blocking the facility, and right there — I'm noticing that's what they care about, and it's right there in the same sentence — both groups will get their utility increased by 8 units if the facility is blocked. So I'm going to put that plus 8 here. Good, that's one thing they care about. But fighting the facility takes time and effort that could be spent pursuing other goals, blah blah blah — so the other thing they care about is the cost of fighting. They don't like that: it's a hassle, it's unpleasant, they'd rather be doing other things. If they have to fight, it's minus 5. So right here, here's my multiple components: there are two different parts of the scenario that they care about, and both of them raise or lower their utility by an amount that's given. While I'm doing that, I'm also thinking: what are the choices these guys have? My sense is that you're pretty good at figuring out the choices — few people are having trouble with that. In this case it's: fight or not fight. So right here I've got my first two things — I've got my strategies, I've got my preferences — but in order to tie them together, I need to figure out the outcomes. I need to figure out what combinations of fighting and not fighting will activate these outcomes that raise or lower utility. So here's something I'm going to do that didn't actually occur to me until this week, when I saw people were still having trouble
with it. It's not that different from what I was having you do with sequential games: something you can do with simultaneous games is a matrix of outcomes before you do the game matrix. So right here, I'm going to do a matrix of outcomes. We're going to have the PTA here, choosing fight or not fight, and the Chamber of Commerce here, also choosing fight or not fight. What I'm doing is precisely the analog of what I did at the beginning of the quarter with our old incumbent-versus-challenger, fundraise-or-not example. Remember, the very first time I set that up, I wrote the outcomes at the bottom of the tree before I put the payoffs in, and I said that would help you; what I wrote in was who won the election — the incumbent wins here, incumbent, challenger, incumbent. This is the step that sometimes you can do in your head, but if you're in the exam and you find yourself staring at the question, just not sure how to set up the game, do this step; write it out. In the simultaneous world, a way to do it is to make this matrix and, instead of going right to putting the payoffs in, use these cells to write what we think is going to happen. Okay, so look at the scenario here. If they both fight, the center is going to be blocked — and they like that — and they both pay the cost of fighting. In this first scenario, I'm looking at the third and second sentences from the end. If neither group fights, the facility will certainly be built — so in that cell we know it's not blocked and there are no costs. And, continuing to read: if either or both groups fight, it will certainly be blocked. That's the part of the scenario that changes in the other versions. So on these diagonals: yes, the center is blocked — that's one of the things they care about. In this cell, the Chamber of Commerce pays the cost of fighting and the PTA doesn't; in this one, it's blocked and the PTA pays the cost. Now, this matrix isn't a game matrix — a game matrix has to have payoffs — but it can help us set up the game matrix. If you wanted, we could just go ahead and try to put the payoffs in here. I would not advise it. It does not take long to draw another matrix, and one of the issues with simultaneous games is that when you're figuring out what's an equilibrium and what isn't, you have to remember which number to look at and which ones you're comparing it to. That's always an issue, and it's easier to get right if you're looking at an uncluttered picture than at one with a lot of stuff in it. So if it were me, I'd just write a new matrix here — that one's going to be the game matrix — and I'm going to use what I've got here, and what I've got on this part of the board. These are the notes I would actually be writing down if I were solving this problem on an exam, and I'm going to put them together to figure out the payoffs. So here, when they both fight: yes, the facility is blocked, and that gives both of them an increase of eight; but they both fought, so they both lose five, and they're both getting a payoff of three. When you write down the things that players care about, it will often be the case that there is an implied baseline. The baseline was just implied in the way the problem was set up, and I've continued to imply it here. What is the baseline case here? What are these payoffs both
relative to? Not fighting and not having the center blocked, right. If you don't have the center blocked, you don't get the plus 8, and if you don't fight, you don't get the minus 5 — that's our zero. You don't subtract off eight if the center is not blocked, because that would be saying that the value of blocking the center was actually 16. So it's just plus 8 — let me switch colors — relative to the center not being blocked: the center not being blocked has zero effect, and blocking the facility gives you eight more than that. And this is relative to not fighting. Now, this game actually has a cell where the baseline outcome happens — where neither player fights and the facility is not blocked — so it has a cell with payoffs of zero, zero. That won't always happen; sometimes your baseline will appear nowhere in the game, and everything that could happen will raise or lower your utility relative to the natural baseline. It's also possible not to have to add anything up — we've had some homework problems where the outcomes don't have this multiple-component aspect; when I set up cops-and-robbers, for example, we weren't putting things together. So you don't always have to do that, but part of what you need to be able to do when you're reading these scenarios is figure out whether there are multiple components to the utilities and, if so, how to put them together. So let's keep going. We've got these outcomes here: when the PTA doesn't fight and the Chamber of Commerce does, according to the setup in this version, the Chamber of Commerce fighting alone is enough to block the facility. So both of them get the plus eight — I'm just going to put that here — and only the Chamber of Commerce loses the five. The PTA gets the benefit of having the center blocked; they didn't fight, they got to do other things, so their payoff is eight. Now, following my own advice, I'm not going to leave the Chamber of Commerce's payoff written as eight minus five — again, because I want to focus on the right information, I want to have it there as a three. On an exam, I might write below it that this was 8 − 5. And it's going to be exactly the same here: when the PTA fights and the Chamber of Commerce doesn't, the PTA gets the eight, loses the five, for a net of three, and the Chamber of Commerce gets the whole eight. That's setting up the game. On the exam, that's going to take you some time; it's going to be a lot of work, and you shouldn't feel like you're doing something wrong if it is a lot of work. But you should be at the point where, after you set it up, answering the rest of the questions is pretty straightforward. All right, so what does the question ask? Find all the Nash equilibria. See any Nash equilibria here? First one: fight, not fight — that's a good one. And another one right here: not fight, fight. That's right. This game is chicken, the way it works out — it works out just like chicken, where fighting is the equivalent of swerving and not fighting is the equivalent of going straight. You can actually imagine the leaders of these two groups having the kind of standoff that would look like chicken. What would it look like in reality? The Chamber of Commerce president really pounding the table about how important their other work is — "I'm not going to fight; my organization has too many other things to do" — trying to make the PTA think that this is focal, and vice versa.
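The whole setup can be double-checked with a few lines of Python — a sketch, with the payoffs taken straight from the story (+8 if the facility is blocked, −5 for fighting, and it's blocked if at least one group fights) and a brute-force search over the four cells for pure strategy Nash equilibria; the helper names are mine.

strategies = ["fight", "not"]

def payoff(mine, theirs):
    blocked = 8 if (mine == "fight" or theirs == "fight") else 0
    cost = -5 if mine == "fight" else 0
    return blocked + cost

# A cell is a pure strategy Nash equilibrium if neither player can do
# strictly better by deviating unilaterally.
pure_nash = []
for pta in strategies:
    for coc in strategies:
        pta_ok = all(payoff(pta, coc) >= payoff(d, coc) for d in strategies)
        coc_ok = all(payoff(coc, pta) >= payoff(d, pta) for d in strategies)
        if pta_ok and coc_ok:
            pure_nash.append((pta, coc))

print(pure_nash)   # [('fight', 'not'), ('not', 'fight')] -- chicken-like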
Okay — are we done? We're not done. It didn't say find all the pure strategy Nash equilibria; it said find all the Nash equilibria. We might even remember that these coordination games, of which chicken is one, usually do have mixed strategy equilibria — so we've got to look for that as well. And even if we don't remember that, it shouldn't affect your performance on the question: recognizing "this is chicken" might help you organize things a little, but even if you don't recognize it, you know you've got to look for mixed strategy equilibria. Now, part of you might be remembering: if there's a dominant strategy, then there aren't going to be any mixed strategy Nash equilibria. That's true — but nobody has a dominant strategy here, so that doesn't get us out of it. If I'm the PTA: if the Chamber of Commerce fights, I prefer not to fight; if they don't fight, I prefer to fight. So, no dominant strategies — and again, that tells me I should look for the mixed strategy equilibria. So, for the mixed strategy equilibrium, what am I looking for? I'm looking for a probability. I'm going to say: let P be the probability that the PTA fights. And I know how to find the value of P: if this P is an equilibrium probability, it's got to be the value that makes the Chamber of Commerce exactly indifferent between fighting and not fighting. The logic is that if the PTA's probability of fighting gives the Chamber of Commerce a higher expected payoff from fighting, they're going to fight for sure; and vice versa, if it gives a higher expected payoff from not fighting, they're going to not fight for sure. The only way the Chamber of Commerce is going to be willing to play a mixed strategy — the only way they won't have regrets about it — is if the expected utilities from both pure strategies are the same. So let's look at that; it's not too hard in this game. The expected utility to the Chamber of Commerce of fighting: no uncertainty here. If I fight, I'm going to get a payoff of three — in this story, if I'm the Chamber of Commerce and I fight, my payoff is certain. My expected utility from not fighting: here the probability does affect my expected utility. With probability P, if I don't fight, I get that happy outcome of eight — I didn't have to do the fighting and the waste treatment plant went somewhere else; I love getting eight — but that's only going to happen with probability P. And with probability 1 − P, I get this outcome, which is zero. So the value of P that makes the Chamber of Commerce indifferent between fight and not fight is where 3 = 8P, so P = 3/8. Why did I write the three directly? Tiffany asks why I didn't use the formula here: I get three with probability P, plus three with probability 1 − P. If we do that, let's keep going: it's 3P + 3 − 3P = 3. So if you're going quickly and you don't notice that the payoffs in this column are the same, it doesn't take long to do it that way, and you'll be just fine. That's the nice thing about expected value: if we apply it in a case where we don't really need it, we still get the answer that makes sense in context. So here we've basically found our mixed strategy equilibrium. We have to check whether we need to find a probability for the other player as well — an equilibrium always has a probability for each player, so we do need one for each player.
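Here is the same indifference calculation as a quick sketch, mirroring the board work above:

# Chamber of Commerce's expected utilities as a function of
# p = probability that the PTA fights.
def eu_fight(p):
    return 3 * p + 3 * (1 - p)    # = 3: the payoff is certain

def eu_not_fight(p):
    return 8 * p + 0 * (1 - p)    # = 8p

p_star = 3 / 8                    # from 3 = 8p
assert abs(eu_fight(p_star) - eu_not_fight(p_star)) < 1e-12
print(p_star)                     # 0.375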
But in this case, because the payoffs are exactly symmetric, the Chamber of Commerce's probability is going to be the same. Say: let Q equal the probability that the Chamber of Commerce fights. We can say the payoffs are symmetric, therefore Q also equals 3/8. It's okay to say that — you don't have to re-derive the same thing. So, coming over here: there are two pure strategy equilibria — not fight, fight and fight, not fight — and the mixed strategy Nash equilibrium is that each player fights with probability 3/8 and doesn't fight with probability 5/8. It would be fine to write it out as: the PTA fights with probability 3/8, the Chamber of Commerce fights with probability 3/8. And no — one probability by itself is not an answer for the mixed strategy equilibrium; one probability alone is not enough. Whenever I ask for an equilibrium, you have to let me know that you know it's something for both players. Is that what you were asking? Elena is saying: can we just put P = 3/8, Q = 3/8 — would that be enough? It would be, if somewhere above you'd said what P and Q are. Okay. My inclination now is not to do parts B and C for you, but to have you do them as part of your studying, and to post the answers. I think that's better for you — one of the persistent problems with Poli Sci 30 is that it all makes so much sense when you're watching it done at the board, and then you have to do it on your own. So I will do that. Any other questions about this, or about the exam? Okay, I'm going to leave you with one last piece of advice about the exam — something I told you before the midterm as well, but remember it now: this is not an exam to stay up all night for. This is an exam to come into rested and fresh. Have lunch, do 15 minutes of yoga beforehand, go for a run — whatever it takes. You've done most of the learning that you need to by now; if you haven't done it, it's probably not going to happen — if you don't get mixed strategies right now, you're probably not going to get them in a week. What you want to do now, over the weekend, is solidify your skills, work on the practice, and then come in sharp enough and calm enough that you're able to do this systematically. And with that, I think you're going to do a good job.
MIT_8422_Atomic_and_Optical_Physics_II_Spring_2013
5_Single_photons_Part_2.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Good afternoon. So, we continue our discussion of quantum states of light, and we focus on single photons as qubits today. But before I get into this, I want to ask if you have any questions about the last unit, which we completed on Wednesday — namely, the discussion of non-classical light, in particular squeezed light — where we also discussed a lot of exciting quantum mechanical effects related to beam splitters. So, any questions? Anything you would like to see discussed? As I pointed out, we now want to talk about the realization of quantum logic and quantum gates with single photons, but I also said that we are actually using the language of quantum communication — a kind of information processing — to describe general physics in a very nice way. So first, when we talk about photons, we should talk about our qubits. Right now, you may think that having a single photon and having no photon in a mode are two possibilities, and that you could use those as the qubit. However, I will tell you today that it is better to always use one photon, but in two different waveguides — two different modes. We'll get there in a moment. So, the first task at hand is that we want to manipulate photons, and for that I want to introduce phase shifters and beam splitters. And just to give you the punchline right away: what I want to show you is that those two simple devices — a beam splitter, a half-silvered mirror, and a phase shifter, which is just a piece of glass that you can put into one of the beams — those two elements, those two basic operations, if they are put together — a beam splitter, a phase shifter, another beam splitter, and so on — can realize any single qubit operation. Therefore, if you have a qubit encoded in single photons, any possible state of this qubit can be realized using these optical elements. So when I say we want to see how we can manipulate photons with optical elements, I'm not going for a zoo of optical elements: those two will do it all. So, the phase shifter — that's what we discussed at the end of the last class. If you have two modes and you shift the phase in one of the modes — simply put a piece of glass into it — then there is a phase shift. And why do we need two modes? Well, phases are usually — at least in experiments — not absolutely defined. You need a phase reference, and here the mode a acts as the phase reference. Let me address one question a student asked me after class, related to the fact that phases appear in more than one place. The student's question was motivated by this: if you have one photon, it's a Fock state; a Fock state is a circle in the quasi-probability distribution, and therefore the electric field has no phase. Well, that is the phase of the electric field, but here we're talking about the phase of the wave function. A Fock state with n photons is an eigenstate of the harmonic oscillator; it has a time dependence e to the i omega t, and we can change its phase. That is the kind of phase shift I've introduced here.
So, the second element we need to realize arbitrary single qubit operations is the beam splitter, and I introduced the beam splitter by just saying: hey look, I think that's a good Hamiltonian — let's see what this Hamiltonian does to the two modes. And then we realized: yes, it takes those two modes and mixes them with cosine theta, sine theta weighting factors. That's exactly what you expect a beam splitter to do. I don't need to prove it here now, because it's part of your homework assignment: you will show that if you have a coherent state, the coherent state is split with a ratio of cosine squared theta to sine squared theta — exactly what you'd expect from a beam splitter. From that, you see what this angle theta, which I just put into the Hamiltonian, means: its cosine squared and sine squared are the reflection and the transmission of the beam splitter. Any questions at that point? Let me now introduce a matrix representation, which will come in handy. You remember that we transform an operator — the mode operators, the annihilation operators a and b — by multiplying with the beam splitter operator on the left and its dagger on the right: B a B-dagger. But there is often a simpler way the right-hand side can be written, and this is by using a matrix representation, which goes as follows: we can say that the two operators are transformed by the following matrix, and this matrix represents the beam splitter. So that's what the beam splitter does to the mode operators.
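In code, that matrix representation is just a two-by-two rotation-like matrix. Here is a small numpy sketch using the cosine/sine convention above (sign conventions for beam splitters vary, so take this as one consistent choice):

import numpy as np

def beam_splitter(theta):
    # 2x2 matrix acting on the pair of mode operators (or, later,
    # on single-photon amplitudes).
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

B = beam_splitter(np.pi / 4)               # 50/50 beam splitter
assert np.allclose(B.T @ B, np.eye(2))     # unitary (real orthogonal here)

# Reflection and transmission probabilities are cos^2 and sin^2 of theta:
theta = np.pi / 4
print(np.cos(theta)**2, np.sin(theta)**2)  # 0.5 0.5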
Now we want to talk about what the beam splitter does to quantum states — to single photon states. So now we want to transform the single photon states. It's a little bit like: we know what to do to the operators in the Heisenberg picture, but now we want to see what happens to the wave functions. And now — I want to say this very slowly, so that it doesn't lead to any confusion later. We have single photons: we can have the single photon in mode b, or we can have it in mode a. These are the two possibilities. And right now I label it like this, where one number is the occupation of one mode and the other number is the occupation of the other mode. You have to be a little bit careful — just keep your alert level high. Later, I will use a qubit representation, where the qubit value one means the photon is in mode a, and the qubit value zero means the photon is in mode b. So when we talk about the logical states of our two-level system — the photon can be here or the photon can be there — zero means the photon is there; it doesn't mean that we have zero photons. But sometimes I have to talk about the photon numbers — and that's what I'm doing right now — and then zero means no photon in that mode. I will remind you of this as we go along. So we have the two quantum states, |1,0> and |0,1>, and let's see what the beam splitter does to one photon in mode b. I'm changing notation from one line to the next in my notes — just give me a split second. Yes, let's use this convention. Just to elaborate on what I said: this convention, |1,0>, is the direct product of the Hilbert space for mode b with one photon, with the Hilbert space of mode a with zero photons. And sometimes you want to denote the state by putting a comma in between — it's all the same. So, how do we transform the quantum state with the beam splitter operator? Well, we could apply it to the quantum state directly, but we just learned how to apply it to operators, so let's use what we already know, because we can write the state like this. And now we want to insert the unit operator B-dagger B, and we can use our knowledge of the operators: we know how they transform — we had the transformation of the operators b and b-dagger above — and when we apply B to the vacuum, to no photons, we get no photons. So, taking from the page above the transformation of the operator b-dagger, it's a linear combination of a-dagger and b-dagger, and we find what we expect: the photon can stay in the same mode, or it can appear in the other mode, with the coefficients from the transformation, cosine theta and minus sine theta. And if you send the other state through the beam splitter, you find the cosine theta, sine theta combination. So, since the total probability to have a photon — cosine squared plus sine squared — is unity, we find what we expected: namely, that B conserves the photon number. Of course, this should have been obvious from the outset, because we used a Hermitian operator generating a unitary time evolution, and that is energy-conserving. What happens if we take a state which has one photon in each mode, and we act on it with the beam splitter? Well, it gives a superposition: the two photons can now both end up in either of the two modes, or they can remain distributed one in each mode. And because we have two transformations, the coefficients are products of sines and cosines: the one-photon-in-each-mode part comes with cosine squared theta minus sine squared theta, and the two-photons-in-one-mode parts come with sine theta cosine theta times square root two — the square root two coming from the doubly occupied mode. So that means that if we allow one photon in each mode, it actually leads us out of the Hilbert space of single photon states, because we now have a certain probability of having two photons in one mode. We don't want that. So if we want to deal with only one photon, we should restrict our attention — our formalism — to the states which have no more than one photon. And if we act on those states with beam splitters, we don't get out of this subspace of Hilbert space. However, we also want to omit the vacuum state, and not only because it's more elegant to do so: if you have a single photon state and a detector which does not have 100% efficiency, and you do a measurement and detect nothing, you don't know whether you had the state |0,1> and just didn't detect it, or you had the state |0,0>. Whereas if you deal with two states where you always have exactly one photon, and you read out your system and detect nothing — well, you discard the measurement, and you keep only the measurements where you detected one photon. So you have the ability to deal with finite efficiencies and losses in a very straightforward way. Yes? AUDIENCE: The operation on |1,1> doesn't look so unitary to me. PROFESSOR: Yes — there's a minus sign, and it is in my lecture notes. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes, and it's right in my lecture notes. Sorry — sometimes when you're talking, explaining it, building it up... yes. Thank you. Other questions? Good. So what I just said — that we want to restrict the Hilbert space to exactly one photon — is so important for implementations and discussions of quantum information protocols that it has its own name: it's called the dual-rail photon state space. Exactly one photon, but the photon can be either in mode a or in mode b.
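Here is a numerical sketch of the |1,1> case just discussed, in a truncated Fock space. The truncation is exact for this input, since the beam splitter generator conserves total photon number; the overall signs of the output amplitudes depend on the chosen convention.

import numpy as np
from scipy.linalg import expm

n = 3                                          # Fock levels 0, 1, 2 per mode
a = np.diag(np.sqrt(np.arange(1, n)), k=1)     # annihilation operator
A = np.kron(a, np.eye(n))                      # acts on mode a
Bm = np.kron(np.eye(n), a)                     # acts on mode b

theta = np.pi / 4                              # 50/50 beam splitter
G = theta * (A.conj().T @ Bm - A @ Bm.conj().T)
U = expm(G)                                    # beam splitter unitary

ket = lambda na, nb: np.kron(np.eye(n)[na], np.eye(n)[nb])
out = U @ ket(1, 1)
for na in range(n):
    for nb in range(n):
        amp = ket(na, nb) @ out
        if abs(amp) > 1e-9:
            print((na, nb), round(amp, 3))
# At theta = pi/4, only |2,0> and |0,2> survive, amplitude 1/sqrt(2) each
# (up to sign): the photon pair leaves the one-photon, dual-rail subspace.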
So this dual-rail photon state space is a Hilbert space with a basis consisting of |0,1> and |1,0>, and this is of course a two-level system. In this so-defined Hilbert space, we can have an arbitrary state psi, which is — as in any two-dimensional Hilbert space — a linear superposition of the two basis states with coefficients alpha and beta. And the theorem for which we have prepared right now is that any possible state of this Hilbert space can be created from either of the basis states — say from |0,1> — simply by beam splitters and phase shifters. When I taught this unit the last time, I went through the proof, but I want to make room for more in-class discussion, like we had on Wednesday with the clicker question. So I've decided to give you the idea behind the proof; the formal parts you can fill in by reading about it on the wiki page. Let me just focus on the idea. The idea is that if I write the state as a column vector (alpha, beta), then we can identify the beam splitter and the phase shifter as rotation operators. This is immediately obvious for the beam splitter: if I apply the beam splitter transformation to psi — we learned just a few minutes ago how the beam splitter operator acts on the basis states, so now we know how it acts on an arbitrary linear combination — the matrix is nothing other than the rotation matrix, with a minus sign, around the y-axis. So that's the beam splitter. The phase shifter leaves one of the states unchanged and shifts the other by e to the minus i phi, but I can summarize it by taking half of the phase out as a prefactor and writing e to the plus and minus i phi over two on the diagonal. And this is immediately recognized as the rotation matrix around the z-axis. Now, you may ask: what about the prefactor? Well, that is an irrelevant global phase factor — if you take any state in Hilbert space and change its global phase, it doesn't matter. Therefore, we see that the beam splitter is a rotation around the y-axis — it's a definitional thing that, with the factor of two in the rotation matrix, it's a rotation by an angle two theta — and the phase shifter is a rotation around the z-axis by an angle phi. With that, we can formulate what is called Bloch's theorem: any unitary operation on a two-dimensional Hilbert space can be written as a product of rotations. Let's first count: an arbitrary two-by-two matrix over the complex numbers has eight numbers — four real parts, four imaginary parts — but if the matrix is unitary, there are only four independent parameters: alpha, beta, gamma, delta. And if you parametrize the unitary matrix with alpha, beta, gamma, delta, as is shown in the wiki notes, then we can write this arbitrary unitary matrix as a combination of an overall phase factor — which is irrelevant, because it's a global phase — and then a rotation around z, a rotation around y, and another rotation around the z-axis. So, if I rephrase this theorem in modern language: we have a qubit — a two-level system characterized by the two numbers alpha and beta — and an arbitrary operation on the Hilbert space of the qubit, which is called an arbitrary single qubit operation, can be performed by phase shifters and beam splitters.
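These identifications are easy to verify numerically; the sketch below assumes the standard Ry/Rz rotation conventions, with the beam splitter and phase shifter written as in the earlier sketch.

import numpy as np

def Ry(angle):
    return np.array([[np.cos(angle / 2), -np.sin(angle / 2)],
                     [np.sin(angle / 2),  np.cos(angle / 2)]])

def Rz(angle):
    return np.diag([np.exp(-1j * angle / 2), np.exp(1j * angle / 2)])

theta, phi = 0.3, 0.7   # arbitrary test angles
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # beam splitter
P = np.diag([1, np.exp(1j * phi)])                  # phase shifter

# Beam splitter = y-rotation by 2*theta; phase shifter = z-rotation
# by phi times an irrelevant global phase factor exp(i*phi/2).
assert np.allclose(B, Ry(2 * theta))
assert np.allclose(P, np.exp(1j * phi / 2) * Rz(phi))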
Those global phase shifts cannot be detected. There is no observable, no procedure to observe them-- unless you had a third mode. But then, we would expand the Hilbert space to three levels, and then, of course, the global phase shift of the two-dimensional Hilbert space becomes a relative phase shift within the three-dimensional Hilbert space. And in modern language, those single qubit operations are called quantum gates. So, all gates which act on a single qubit can be realized with phase shifters and beam splitters. Let me give you one example. In quantum computation-- quantum information science-- there is a very important gate, the Hadamard gate, which is described by a transformation matrix with entries 1, 1, 1, minus 1, times one over square root two. And I said it can be realized with a beam splitter and phase shifter. So let me just show it to you using our symbolic language. If you look at the matrix we derived for the beam splitter, you realize that we get the Hadamard gate by adding a phase shifter with phase pi. So, in that sense, single qubit operations are checked off. We know how to deal with them. We want to now move towards two qubit operations-- and then it's getting really interesting. As you may know, you can do any arbitrary operation on many qubits by just having single and two qubit operations. Therefore, once we know how two qubits interact-- how photons in two different qubits interact-- we are done: we have universal quantum gates. The element I need for two qubit operations is the Mach-Zehnder interferometer. Just a side remark: I introduce the Mach-Zehnder interferometer here as a way to manipulate qubits. On the other hand, most of you know that interferometers for light and interferometers for atoms are used for some of the most precise measurements ever done. So, in that sense, this shows the dual use of the formalism I'm introducing here. We really use some of the formalism and the language we develop here for qubit operations, and we discuss next week the ultimate accuracy-- the fundamental limits to the accuracy you can get in precision measurements based on interferometry. Let me just write down the sentence, because it has a lot of key words in it. So, the dual-rail photon representation of a qubit-- this is what we have achieved so far-- allows us now to discuss interferometers simply as single qubit gates. And what is an interferometer? An interferometer, in its basic form, is a beam splitter, two paths, and then another beam splitter which recombines the two beams. So, let me draw two beam splitters. These are two beam splitters. We have an input mode b and an input mode a. They are mixed. Here are the two arms of the interferometer. And now, we create output modes b prime and a prime. What we have introduced here is the beam splitter B, and by putting the dot on the other side, this is B dagger. Since B dagger times B equals one-- the identity-- this interferometer is doing nothing. If the photon starts in mode b, it comes out in the upper mode b. If it starts in a, it comes out in mode a. But of course, this is just the beginning, because we can now introduce an arbitrary phase shift phi in one arm. And now, it is an interferometer. Now, we can read out the phase shift at the output, ultimately with very high precision. I want you to appreciate that. If we had used single qubits in one mode-- no photon, one photon--
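Here is a quick check of the Hadamard claim, with the conventions used above (a sketch; the dual-rail basis assignment is my assumption):

```python
import numpy as np

# Dual-rail qubit convention (an assumption for this sketch): |0> = photon in
# mode a, |1> = photon in mode b.
def beam_splitter(theta):                     # acts on the (alpha, beta) amplitudes
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def phase_shifter(phi):                       # phase phi on the second mode
    return np.diag([1.0, np.exp(1j * phi)])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

# 50/50 beam splitter preceded by a pi phase shift on mode b:
candidate = beam_splitter(np.pi / 4) @ phase_shifter(np.pi)
print(np.allclose(candidate, H))              # True
```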
I gave you many reasons why we shouldn't do that. But then, we would be dealing with one mode per qubit, and this device would actually be something which involves two qubits. Since our single qubit system is instead a two-level system where the photon is either here or there, we can now send single photons through the interferometer and describe the interferometer as a single qubit operation. And that is nice. It allows us to keep things simple, and to discuss things at a very fundamental level. So, what we want to figure out is: what is this interferometer doing? We want to know, if we have an input state, what is the output state? And the input state can now be an arbitrary wave function in our dual-rail Hilbert space. So, it's always one photon, but it can be in an arbitrary superposition of states a and b. Well, with the formalism we have developed, it becomes very simple. All we have to do is act with the first beam splitter, with the phase shifter, and with the second beam splitter onto the state. And this is nothing else than taking the rotation matrices and multiplying them. The beam splitter we have chosen-- I said there was a phase convention when I picked the beam splitter-- is a rotation by pi over 2. The second one is by minus pi over 2, and the z-rotation is by minus phi. I'm not sure if I manage to do that, but if this is the y-axis and we start with the qubit here, the first beam splitter rotates it down. Then, the rotation around z does this, and the second beam splitter rotates back around y. Therefore, the qubit ends up like that. So if you keep track of all the axes, what I did is-- I started with a qubit here. It went down like this, like this. In the end, I simply rotated it around the x-axis. If you want, you can multiply the matrices, or you can draw the Bloch sphere visualization. Say we have an x-axis, y-axis, and z-axis-- the z is vertical. We started out with the qubit along the z-axis. The first operation was a rotation around the y-axis, so then we were here. The second operation was a rotation around the z-axis. And then, we have to rotate back around the y-axis, which eventually gives this as the final state. So ultimately, the product of all this is a rotation around the x-axis. Our interferometer-- I'm not sure if you've ever heard this language, but it's really elegant and powerful-- an interferometer with an arbitrary phase, in this two-level Hilbert space with the geometric interpretation we have given it, is nothing else than a rotation around the x-axis. Therefore, we know immediately the two limiting cases: when we rotate by zero, the interferometer is balanced-- output and input are the same, because we're not rotating at all. Whereas if we have a phase shift of 180 degrees, we swap the two modes, and swapping the modes is an inversion of the qubit. So that's almost trivial, but we need this definition-- this knowledge-- to take it to the next step. Remember, we want to do something more interesting than rotating two-level systems. We want to take two qubits, combine them, entangle them, and all that. So what we need for that is to expand the Hilbert space. I introduced the Mach-Zehnder interferometer as the device to do it. But now, we need a way for a second photon from another qubit to interact with the photon of our first qubit. And we want to do that by first introducing the non-linear Mach-Zehnder interferometer.
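Multiplying the matrices is quick to verify numerically. A minimal sketch with my sign conventions (beam splitter at theta = pi/4, phase phi in the second arm):

```python
import numpy as np

BS = np.array([[1, -1], [1, 1]]) / np.sqrt(2)       # 50/50 beam splitter, theta = pi/4

def mach_zehnder(phi):
    P = np.diag([1.0, np.exp(1j * phi)])            # phase phi in one arm
    return BS.conj().T @ P @ BS                     # B† P(phi) B

def Rx(t):
    return np.array([[np.cos(t/2), -1j*np.sin(t/2)],
                     [-1j*np.sin(t/2), np.cos(t/2)]])

phi = 0.73
M = mach_zehnder(phi)
# Up to a global phase e^{i phi/2}, the interferometer is a rotation about x:
print(np.allclose(M, np.exp(1j*phi/2) * Rx(-phi)))  # True

print(np.round(np.abs(mach_zehnder(0.0)), 3))       # identity: balanced interferometer
print(np.round(np.abs(mach_zehnder(np.pi)), 3))     # swaps the two modes
```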
I want to throw in a non-linear element, and then we are ready to allow the second qubit, through the non-linear element, to manipulate the first qubit. And then, we have interactions between qubits-- we have two qubit gates. Let me maybe just deviate from my notes. I really want to show you what I'm doing, so that you have it clearly in mind where I'm aiming. So what we want to accomplish is-- instead of having a phase shifter with an externally controlled phase, I want to put in some non-linear crystal, whose index of refraction provides a phase shift. But now the point is that the non-linear crystal has the property that its index of refraction changes when I take another laser beam and send it through. Because the index of refraction is intensity-dependent, the green laser beam changes the index of refraction of the non-linear crystal-- also called a Kerr medium. And that means, because the blue beam goes through the same crystal, that the green beam now controls the phase shift of the blue beam. And now, we have interacting photons-- one photon interacts with the second photon. Let's work it out. So our goal is to develop a description of the non-linear Mach-Zehnder interferometer. What we need is a non-linear medium. So, let me just introduce for two or three minutes the non-linear medium-- how we describe it-- and then we put it into our interferometer. In linear optics-- just to remind you-- we have a polarization, an electric dipole moment, which is proportional to the electric field of the light, and chi is the polarizability. In the most complicated case, it can be a polarizability tensor. Now, this is linear. If we want to go non-linear, then we extend this linear relationship-- you can regard the linear term as simply the first term in a Taylor expansion. So we expand the response of the medium in powers of the electric field, and we have the susceptibilities chi 1, chi 2, chi 3, and so on. We have already encountered a non-linear element which had a chi 2 susceptibility, and these were the crystals we put into our OPOs, our optical parametric oscillators. Remember, the optical parametric oscillator involved-- how many modes? Three. We had one photon, which was broken down into two photons. But that also means that the reverse process exists: you drive the OPO with omega, but the E squared term creates an electric polarization at 2 omega. And if you have an oscillating polarization at 2 omega, you create light at 2 omega. So, I just want to remind you that the chi 2 term is what we have already encountered. That's not what we need for the non-linear Mach-Zehnder interferometer. What we need now is a chi 3 interaction, which is the Kerr medium, because we want to realize the following Hamiltonian. The Kerr Hamiltonian is also called cross-phase modulation: one beam can affect the phase of another beam. There's also self-phase modulation-- where one laser beam changes the index of refraction for itself-- but here, we want the cross-phase modulation. So, we know that if we have a mode and we shift its phase, we're not changing any photons. So if we're in an eigenstate of mode a with one, two, or three photons and we simply shift its phase, we are diagonal in a dagger a, and the phase shift is proportional to the prefactor here.
So this Hamiltonian is simply a phase shift for photons in mode a, but if you now multiply that with b dagger b, the phase shift is proportional to the number of photons in mode b. Now, we have a situation where the phase shift of mode a is proportional to the intensity in beam b, and vice versa. So, I think it's self-evident that this Hamiltonian is the essential description of the process we want to introduce now. This is the cross-phase modulation Hamiltonian. Isn't it much more elegant to describe non-linear optics with Hamiltonians than with parametrized susceptibilities and all that? It's just amazing-- you write a few letters, and it has all the power of the classical description-- plus extra, because it includes quantum fluctuations and everything. I hope you appreciate the power of the language we are developing. This is the cross-phase modulation Hamiltonian. And now, we want to use this Hamiltonian in a crystal of length L. Well, you know, if you have a time-propagation operator, you put the Hamiltonian in the exponent and multiply by t, but for a propagating laser beam, t and L are related, so this is the unitary transformation when you propagate through the crystal. We parametrize it by saying we have this non-linear susceptibility, things will be proportional to the length L, and then we have the operators a dagger a, b dagger b. And let me choose, for the following discussion, that the length is chosen such that, when multiplied with the non-linear susceptibility, we get a phase shift of pi. Now, we are ready to describe what happens when we have a Kerr medium and two input modes. So, let's write down the table of possible combinations and operations. Well, when we act on the vacuum, we get nothing. What happens when we have one photon in one mode? What happens to that photon? Well, the extra phase shift coming from cross-phase modulation is zero, because we have no photon in the other mode, and a dagger a or b dagger b acting on the empty mode gives zero. Therefore, as long as we have only one photon, we have no phase shift. The exponential factor is one, and we simply reproduce the original state. So, the only non-trivial situation, by construction, is when we have one photon in each mode, and then we get the matrix element e to the i chi L. And we adjusted the length of the crystal so that this is just minus one. In other words, in this Hilbert space, we get a phase shift of pi-- we get a minus sign-- when we have one photon in each mode. That's all this medium does. And if you ever thought about how to deal with this situation-- I said there is one photon which provides a phase shift to the other photon, but the other photon is also providing a phase shift to the first photon. So you may wonder: when you have a combined state of the two, how do you deal with the fact that one photon shifts the other photon, and vice versa? But when you apply the operator, you don't even have to think about it. You see that the operator on the combined Hilbert space of the two photons gives you a phase shift of pi. It's not pi for one photon times pi for the other one-- it's pi for the whole quantum state. Any questions? We have a Kerr medium, which means we are ready now to put it into our interferometer. Let me insert the picture at this point. This is what we want to do now. We have three modes of light. You would say, hey-- each qubit is two modes. What is it-- is it one and a half qubits? Yes, it is-- but we extend it to two qubits in a moment.
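The truth table just described is a four-line computation. A minimal sketch, in the basis restricted to at most one photon per mode:

```python
import numpy as np

# Two modes, each restricted to 0 or 1 photon. Basis ordering: |n_a, n_b>.
basis = [(0, 0), (0, 1), (1, 0), (1, 1)]

chi_L = np.pi                                   # crystal length tuned so chi * L = pi
# Kerr unitary exp(i * chi_L * n_a * n_b) is diagonal in the Fock basis:
U_kerr = np.diag([np.exp(1j * chi_L * na * nb) for na, nb in basis])

for (na, nb), phase in zip(basis, np.diag(U_kerr)):
    print(f"|{na},{nb}>  ->  {phase.real:+.0f} * |{na},{nb}>")
# Only |1,1> picks up the minus sign; a single photon alone sees no phase shift.
```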
So what we simply have right now is the mode c, which controls the non-linear medium, and what we have already discussed: our interferometer, with a phase shifter which is now the Kerr medium. So I think even without doing any math, we know by construction that when we have no photons in c, the modes a prime, b prime become a and b, because we have a balanced interferometer, which is the identity transformation, unless we have a phase shift. But when we have one photon in mode c, then we actually swap. So, that's pretty cool. A single photon in mode c will redirect the photon from ab to ba. So that's how one photon in mode c controls what happens to the photon in ab. If you want a general description of that, we obtain the output state from the input state by using the matrix B. This was our interferometer based on two beam splitters; it involved modes a and b. But now, we throw in the phase shifter, which is our Kerr operator-- our non-linear phase shifter involving b and c. This new addition to the interferometer is described by e to the i, non-linear susceptibility, L, b dagger b, c dagger c. Let me give you one intermediate result. We have beam splitters on either side, and that leads us to an expression. Well, the beam splitters do nothing to the mode c, but they transform the modes a and b into linear combinations. And if you do some tedious rearrangement of terms-- more details are given in the Wiki-- you find that the essential result is the operator a dagger b minus b dagger a, times c dagger c, over 2 in the exponent. I'm not giving you the non-essential terms, which are just global phase shifts. So, let me write it down in blue here. This is just the result of the operator manipulation. If I cover that last factor-- what is this? Do you remember a dagger b minus b dagger a in the exponent? It's a beam splitter. And the beam splitter matrix was cosine theta, sine theta. So we had a theta here. Now, you realize that in this non-linear Mach-Zehnder interferometer, the three optical elements can be replaced by simply a single beam splitter for modes a and b, but the angle theta of the beam splitter is now controlled by the photon field in mode c. In other words, we have a beam splitter-- and I told you that a beam splitter is nothing else than a rotation around the y-axis. So this non-linear Mach-Zehnder interferometer is simply a beam splitter with a rotation angle given by this term. Any questions? This is the situation we just had, and we can now get to two qubits by simply adding one more rail. Remember, two rails with exactly one photon in them is a qubit. So now, we have a qubit here, and we have a qubit there. And you see immediately one way in which one qubit acts on the other: if the lower qubit has a photon in mode c, it flips the other qubit. But if it has its photon in mode d, it doesn't do anything to the other qubit. Now, we have a two-qubit operation. This is shown here. But what I want to discuss now is the situation when we throw in one more beam splitter, and this beam splitter acts on the upper qubit. Well-- isn't it great how quickly we go from simple elements to something which looks quite sophisticated? So, this device is now a single optical device which can entangle qubits-- which leads to entanglement. And actually, entanglement is our next big topic-- entanglement between particles, entanglement between photons.
And by introducing the Mach-Zehnder interferometer, I was actually building up to this situation, in which extremely simple elements lead to entanglement. We have everything for the description-- each of those elements is described by a matrix which we have developed. Therefore, by just multiplying those matrices, you find immediately how this whole device manipulates the two qubits. So, if you want, you can just turn the crank, use what we have already done, and crank out the result. Now I wish I had a bigger screen, because I want to describe what's going on for you. What I want to discuss with you now is what happens when we take this device and use exactly the states 0,1 0,1 at the input, as I indicated. In qubit language, I say that the upper qubit is spin-up, and the lower qubit is also spin-up. Spin-down would mean that the single photon is in the other state. Our dual-rail single qubit two-level system is: the photon can be in either of those states. This one we call spin-up; this one we call spin-down. What I want to discuss with you is what happens when we start with this symmetric input, up-up, which means 0,1 0,1. And since I don't want to scroll up and down so often: everything is trivial beam splitters, except for the situation where we have one photon in each of the two middle modes, and then the Kerr medium gives us a minus sign. That's what I want to put in. Remember, I want to develop the physics in one, two, three temporal steps. The first temporal step is simply two beam splitters, and this is shown here. We start out-- and now we have a beam splitter for the upper qubit and a beam splitter for the lower qubit. And, by just multiplying it out, we obtain this superposition state over the four different modes. I'm dropping factors of square root two here-- you can collect them at the end, if you want. And I said the only interesting situation is where we have two photons-- one each in the two middle modes-- because then the Kerr medium kicks in and gives us a minus sign. Now, you can show by inspection that if you then apply the beam splitter to modes a and b-- it takes 1, 0 into a linear combination of 1, 0 and 0, 1, and the beam splitter is only acting on the last two numbers here-- you obtain a state which looks very simple. So this is the output state of the device. Any questions? And this output state-- if I use the spin language-- is an entangled state. So now you have one simple example of how, using these very simple elements-- beam splitters plus a Kerr medium-- you can start with one photon here, one photon here, a simple product state, and what comes out is an entangled state. Questions? Let me tell you what I'm not telling you-- namely, the Wiki developed by Professor [INAUDIBLE] now has a wonderful section on a famous quantum algorithm-- the Deutsch-Jozsa algorithm. Pretty much using the elements we have discussed, you can realize the Deutsch-Jozsa algorithm, which is one of the famous algorithms where quantum logic-- quantum computation-- is faster and more efficient than classical computation. I decided not to present it to you, because it just leads to an even more complicated diagram and more formulae. You should just sit back and slowly read it by yourself. There is no new idea introduced-- it's just that the concepts we have discussed can lead to very powerful algorithms. What I want to do instead is continue our discussion and now talk about the object we have encountered here-- namely, entangled states.
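The matrix multiplication can be spelled out in a few lines. Here is a minimal sketch in the qubit language, where I model each beam splitter as the 2x2 rotation from above and the Kerr element as a controlled minus sign when one photon sits in each of the coupled middle modes; the exact mode labeling is my assumption.

```python
import numpy as np

# Dual-rail qubits in the spin language of the lecture: up = (1, 0), down = (0, 1).
up = np.array([1.0, 0.0])

BS = np.array([[1, -1], [1, 1]]) / np.sqrt(2)   # beam splitter as Ry(pi/2)
I2 = np.eye(2)

# Kerr element: a minus sign iff one photon sits in each of the two coupled
# middle modes. With my (assumed) mode labeling this is a controlled phase:
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

# Step 1: beam splitter on each qubit.  Step 2: Kerr phase.  Step 3: one more
# beam splitter on the lower qubit only, as in the lecture's last step.
circuit = np.kron(I2, BS) @ CZ @ np.kron(BS, BS)

psi_out = circuit @ np.kron(up, up)             # input |up, up> = |0,1 0,1>
print(np.round(psi_out, 3))                     # amplitudes 0.707 on |up,down>, |down,up>

# Entanglement check: Schmidt coefficients of the 2x2 amplitude matrix.
schmidt = np.linalg.svd(psi_out.reshape(2, 2), compute_uv=False)
print(np.round(schmidt, 3))   # two non-zero values -> not a product state
```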
But if you have any questions-- if you want me to explain anything more about single qubit manipulation or the dual-rail photon state-- I would be happy to do that. Our next chapter is on entangled photons, and what we can accomplish in the next 10 or 15 minutes is mainly a definition and some of the properties. What we will do next week is talk about how we measure entanglement-- you already had one homework assignment where you discussed one way to measure entanglement, but there are others which we want to discuss next week. I also want to show you next week how entanglement leads to the Einstein-Podolsky-Rosen paradox and Bell's inequalities. And eventually, I want to show you how you can entangle not only photons, but also atoms. But that's an outlook. First of all, we want to understand what entanglement is. Let me first motivate it. Many people regard entanglement as the most quantum essence you can find. Well, there's always a discussion, but it's really quantum. Some people would say waves are quantum. But often, we find quantum systems-- very famously, Bose-Einstein condensates-- which really behave like electromagnetic waves. They can be split. They can interfere. But Bose-Einstein condensates are so big, have so many atoms, that you hardly ever encounter quantum fluctuations. You're really in the classical limit of matter waves, and I would actually say this is maybe not at the heart of quantum mechanics. While it's nice, it's powerful, it's important-- this is not really the new feature which quantum mechanics has shown us in nature. You would say-- well, what else is quantum? Maybe quantum fluctuations going down to single photons, and I would agree-- that's the quantization of the electromagnetic field. That's much more quantum than having millions of condensed atoms in one single wave. But what I would say is even more quantum than the single photon is entanglement. Entanglement is pure quantum mechanics. With single photons, you can still use intuition. But with entanglement-- that is really bizarre. This is really-- I hate the word, but let me use it-- this is sort of pure quantum weirdness. And there are people who have spent most of their careers until now just showing the weirdness of quantum mechanics, and showing to what peculiar phenomena entanglement leads. So entanglement has really led to a lot of-- I should be careful with the word surprising; surprising always means you didn't have enough imagination to think about it-- but yes, I would say truly unanticipated, and in that sense surprising, developments. Entanglement is, therefore, for me, the most quantum aspect of quantum mechanics. It was actually Einstein, Podolsky, and Rosen in the '30s-- almost 5 or 10 years after the development of quantum mechanics-- who were really the first to look at the properties of entangled states. This is the famous thought experiment where you have something entangled in position and momentum space. Einstein, Podolsky, and Rosen found a paradox, and their conclusion was that the properties of entangled states imply that quantum mechanics is not complete-- is not a complete description of the world. Well, we no longer share this opinion.
It was John Bell and others who said that entanglement really exemplifies that quantum mechanical correlations go beyond classical mechanics-- beyond the classical picture. You can say, if you have a quantum system like a Bose-Einstein condensate, you have N particles doing in unison what one particle does. This is like the laser, where many photons do what one photon does-- but it fits very, very well into the concept of classical probabilities and an intuitive understanding of why many particles do what one particle does. But if the particles are entangled, it's almost like you have turbo-charged your system. Your system has now more oomph, more power in it. And it can be shown that it has extra correlations-- there is something extra in it which you would never get in any classical limit. And this extra oomph, this extra power, turns out to be a real resource. So, entanglement-- this extra correlation-- is a resource. And resource means it's good for something. It enables teleportation. Remember when we talked about the teleportation scheme-- Alice and Bob could only teleport a quantum state because they shared entangled photons. It was this entangled state which Alice manipulated with a measurement, and Bob used his half of the entangled state to recreate the original state. So, entanglement is the engine behind teleportation. In the world of quantum computation, the exponential speedup of quantum computers versus classical computers in quantum algorithms is due to the entanglement which we can put into a quantum system. And eventually, next week, you will see that if you have an atom interferometer with entangled states, we can operate it at a precision which is better than shot noise. So in other words, you use laser light to measure something, and you're limited by the fundamental noise limit of classical physics. Now, you entangle your laser beam and you get higher precision out of it. So this shows that entanglement is a very special resource. It's very precious, very powerful, and can extend what physics can do. So, let's define it. I'm not describing entanglement to you in the most general situation-- if you talk about many modes, if you talk about not just pure states but statistical operators, it can become quite involved. I'd rather start with the simple situation, which for conceptual discussions is also the most important one. I restrict our focus now to two modes. Actually, two modes means two subsystems here. If you have two subsystems a and b-- let me tell you what entanglement is not, and then I make that the definition. There wouldn't be anything special about a system which has two parts a and b, with global state psi ab, if the state simply factorized into a wave function for subsystem a, direct product with a wave function for subsystem b-- and this here is the tensor product. Then there would be nothing special going on between subsystem a and subsystem b-- and this is non-entangled. Everything else is entangled. So, this is our definition. When we have two separate systems a and b, we call the total system bipartite. So, a bipartite state has half of it in system a, half of it in system b. We need those two different systems-- it's a composite system a plus b. And this state is entangled if and only if you cannot find two states psi a and psi b such that the state can be factorized.
We need a few examples to fully comprehend this definition-- what it means that there does not exist any such decomposition. So, let me give you some examples now, and then I think we stop. If you have the 0, 0 state-- photons in two modes-- we can factorize it, and this is not entangled. If you have a state which is 0, 0 plus 1, 1-- well, you cannot write it as a product of one state times another state. This is entangled. Well, if you take the following state-- is that entangled or not? I've written it as a sum of four states, but if you just stare at it for a split second, you realize you can write it as a product of two states. It's just the product of 0 plus 1 with 0 plus 1. So, a state is not entangled if you can write it as such a product of linear superpositions. The definition of entanglement is: if you try hard and hard and hard, and there does not exist any product state decomposition, then it is entangled. So, that was easy. Now take a state like-- let me just leave the 1, 1 out. What about this one? Well, you can try as hard as you want-- you will not find a decomposition. So eventually-- when I say you have to try hard to find a decomposition-- maybe what we want is to apply some operator or some procedure to the state and get the answer, yes or no. This naturally raises the question: how, in general, can we measure whether a state is entangled or not? But this is something we'll discuss next week. So let me just conclude with the following. I'm focusing here-- and for pretty much all of this course when we talk about entanglement-- on pure states. If you have decoherence-- if you have the transition from pure states to density matrices-- it gets much more involved to talk about entanglement and to measure entanglement. But at least the basic definition can still be maintained for statistical operators. If you describe the total system by one statistical operator, the system is not entangled if the statistical operator can be broken up into a direct tensor product of statistical operators for system a and system b. Otherwise, the system is entangled. So the basic definition can be generalized to density matrices, but a lot of things become much, much more messy if you don't have pure states. Any last-minute questions? See you on Monday.
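For pure states of such a bipartite two-mode system, the "try hard" test can be automated: write the amplitudes as a 2x2 matrix and look at its Schmidt (singular-value) decomposition-- rank one means a product state. A minimal sketch, anticipating next week's discussion:

```python
import numpy as np

def schmidt_rank(psi, tol=1e-10):
    """Number of non-zero Schmidt coefficients of a bipartite pure state.
    psi is the 2x2 amplitude matrix c[n_a][n_b]; rank 1 <=> product state."""
    s = np.linalg.svd(np.asarray(psi, dtype=complex), compute_uv=False)
    return int(np.sum(s > tol))

examples = {
    "|0,0>":                [[1, 0], [0, 0]],
    "|0,0> + |1,1>":        [[1, 0], [0, 1]],
    "|00>+|01>+|10>+|11>":  [[1, 1], [1, 1]],   # = (|0>+|1>)(|0>+|1>)
    "|00>+|01>+|10>":       [[1, 1], [1, 0]],   # the 1,1 term left out
}
for name, c in examples.items():
    rank = schmidt_rank(c)
    print(f"{name:22s} Schmidt rank {rank}  ->",
          "product state" if rank == 1 else "entangled")
```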
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Good afternoon. We talked in the last class about magnetic trapping. Today, I want to finish our discussion of magnetic traps with two demonstrations of classical forms of magnetic trapping. After all, I told you that magnetic trapping is a purely classical phenomenon. The only quantum mechanical aspect is that the angle at which the magnetic dipole is oriented with respect to the magnetic field is quantized. I also mentioned to you-- this was Wing's theorem-- that the magnitude of a static magnetic field can have only local minima, not maxima. And therefore, you can do magnetic trapping only for weak field seeking states. These are states which increase their energy with increasing magnetic field, and therefore seek out weak fields. And since for those states the magnetic moment is anti-parallel to the magnetic field, they can always lower their energy by flipping the spin. I know it's a simple demonstration, but, well, I can't bring a real magnetic trap into the classroom. So what I have here are just two little ring magnets-- refrigerator magnets-- and a tube. So I just want to demonstrate the one dimensional form of magnetic trapping to you. And this is our atom. It's a strong, elongated magnet, and it can be spin up or spin down. So what you're seeing now are the two magnets, and our dipole-- our atom-- which is in a stable trapping configuration. And you can-- am I blocking it? You can see that it's stable-- I just move it, and you see how it always comes back. So that's one position. I pull it over. Here, you have a second position where our magnet is trapped. Here is a third one. And here's a fourth one. So there are four positions where we have one dimensional trapping-- well, that is for one orientation. Now I take the magnet and flip it over, so we are changing, in the laboratory frame, the sign of the magnetic moment. And there is actually a very nice minimum. You see how clearly it is trapped. There is one here in the middle. I can also just show you-- it really stays put. There's a weak restoring force. And then here's number three. So it seems very, very rich. I've shown you seven different minima where trapping happens. And, just to fill you in: if this is our magnet-- these are ring magnets-- the magnetic field looks about like that. And if I plot the magnetic field for this configuration, I find indeed that for one orientation, there are four maxima-- just give me one second. Yeah, there are four maxima of the magnetic field for this orientation, and therefore, there are four minima of the trapping potential. So maybe I should have shown it like this. There are one, two, three, four minima. And for the opposite orientation, there are one, two, three trapping minima. But now if you look, there are one, two, three, four, five, six of them marked in red, which are for strong field seekers, because these are maxima of the magnetic field. And there is only one where the magnetic field has a local minimum. So in the last plot, I have shown you the absolute value of the magnetic field. And you realize there are one, two, three, four, five, six maxima, and this is responsible for six trapping configurations.
And the seventh one was a minimum of the magnetic field. So in one dimension, there is no problem. In one dimension, I can get a restoring force for the strong and for the weak field seeker, for either orientation of the magnetic dipole. But if we want to have a three dimensional trap, we have to look at the three dimensional stability. And it would be only this one configuration which can be stabilized in x and y. What I've shown you are the fields on the symmetry axis, so all these minima and maxima are only saddle points-- they trap in one dimension. But with a suitable radial magnetic field, you could turn this one here into a real three dimensional trap. The other ones you cannot, because that would violate Wing's theorem. I find it amazing that this demonstration has seven different magnetic trapping configurations. That's just how it turns out to be. Questions? Collin. AUDIENCE: So if you look at the clover leaf coil, the parts of the field that sort of axial [INAUDIBLE] provided by [INAUDIBLE], those are the inner coils? PROFESSOR: Yeah, so if you look at a clover leaf trap, the pinch coils create such a minimum, which provides confinement in the z direction. But then you add a radially outward-pointing quadrupole field, and that overcomes-- in this demonstration, everything is actually axially symmetric. So this is a minimum for this direction. But if you would now plot it as a function of x and y and z, it would be a saddle point. So this configuration is trapping along z, but anti-trapping along x and y. But if you add a quadrupole field-- a linear field which points outward-- with these clover leaves or the Ioffe bars, then they provide confinement which is stronger than the anti-confinement of the saddle point. If you look at some equations I showed you in the last class, you will actually see that in one of the slides I had the expression for the magnetic field, where you see how the Ioffe bars-- the radially linear confinement-- overcome the anti-trapping feature of the saddle point. Other questions? Then finally, let me give you a demonstration of the following effect. I mentioned to you that magnetic traps become unstable when the magnetic field is very low. And I told you, at very low magnetic field, the energy difference between spin up and spin down is very small. At zero magnetic field, it becomes zero. And then, the atom cannot adiabatically stay in its mF state, because we violate the adiabaticity condition. So let me now show you an example of purely classical trapping. I think a number of you have seen the levitron toy. Actually, a few years ago, when my group was one of the first to use magnetic trapping, in every single talk I explained how the magnetic trap worked, and I even showed this demonstration live. But in order to demonstrate it really well, I built my own levitron. I used machined parts. I used a motor drive. So I hope I can show you one aspect of the levitron demonstration which is usually not known. Let me first give you the punchline. Magnetic trapping happens with an orientation of the magnetic dipole which could always lower its energy by flipping over. The reason why it doesn't flip over-- well, quantum mechanically, it adiabatically stays in a quantum state; but classically, if we have angular momentum, it is gyroscopically stabilized. The dipole cannot flip over, because it's a gyroscope. It has angular momentum. And actually, it also has angular momentum in quantum physics.
So the two explanations sounded different when I said "stays in a quantum state" and "is gyroscopically stabilized." But if you think about it more deeply, they have much more in common than my language suggests. So I think you know how the levitron works. You spin up the magnet-- and I had this nice motor tool to spin it up. And when you have prepared the system, your atom-- atom in quotes-- now has angular momentum and a magnetic moment. And you bring it to the position where the three dimensional magnetic field fulfills the stability condition. And you can now enjoy magnetic trapping. This is exactly what your atoms do in your magnetic trap. It is gyroscopically stabilized magnetic levitation. The only difference is that gravity has to be compensated. Gravity is a major player, so the stability point in the three dimensional magnetic field configuration includes the compensation for gravity. But I have to say, my group was also at some point trapping a Bose-Einstein condensate in a few-hertz weak trap, where gravity was the strongest force of all. So what you have seen is an exact demonstration of the principle of magnetic trapping. But now comes my question. What would you expect-- what would happen to our magnetic trap when we spin the levitron faster? Does it help or does it hurt to spin up the levitron-- to spin up the gyroscope-- to higher angular velocities? So, three possibilities. One: nothing happens-- it doesn't matter, as long as it spins. Two: the trap becomes more stable. Three: the trap becomes unstable. Do you want to offer any opinion? Collin. AUDIENCE: Well, we have an angular momentum, right, of the really large omega. When you apply torque, it's going to get torque [INAUDIBLE], right. It's going to be the prefactor of omega. So we have a giant omega, we imagine that [INAUDIBLE] small, [INAUDIBLE] torques. No, no, no, wait a second. I did this the other way. It was [? healing. ?] Never mind. PROFESSOR: So do we have to do the experiment? Maybe. So now I put the motor controller to full speed, and I spin it up much faster. You can hear the sound. I really want to do a careful experiment, so we wait until everything is quiet and the vibrations have died out. And now, we want to try if we can do magnetic trapping. And you see, it's impossible. The system, when it reaches the point where magnetic trapping would occur, is unstable. But then-- well, just to prove that it is only the speed of rotation which has caused the instability-- I just wait until friction has slowed down the angular velocity. And now, again, it works perfectly. So you see, if you rotate the gyroscope too fast, it's bad. It makes the magnetic trap unstable. Convinced? How would you explain that? If we have a gyro-- Collin? AUDIENCE: Why aren't we working in the limit where we assume that the magnetic field generated by our magnet is sort of weak compared to the trap. So the magnet's really modifying [INAUDIBLE]. PROFESSOR: No-- these are permanent magnets. The floating magnet is just, you can say, a probe-- a test object which is put into the permanent magnetic field of the stronger, stationary magnets. AUDIENCE: It gets an additional torque-- because it gets an additional force in the upper state. So it gets an additional torque in towards the center. PROFESSOR: It's not necessarily an additional torque. Let's put two things together-- it's really fascinating-- from different principles.
The first thing is what magnetic trapping requires. If you have a magnetic trap, you have an inhomogeneous magnetic field-- and of course, you need an inhomogeneous magnetic field-- and the angle, cosine theta, between the spinning dipole and the magnetic field should stay the same. This means, quantum mechanically, that we stay in the same quantum state. So a magnetic trap only works because of the rapidly precessing spin: the spin always precesses around the magnetic field, and when the magnetic field tips, the precession keeps the dipole-- the magnetic moment-- aligned with the magnetic field. Now, what happens to the precession frequency of a gyroscope when you spin the gyroscope faster? You've seen that in your classical mechanics demonstration, where you had a spinning gyroscope which was [INAUDIBLE] suspended with one rope, and it was precessing in the Earth's gravitational field. Does this precession frequency get faster or slower when you spin the gyroscope faster? Pardon? When the gyroscope spins faster, what happens to the precession frequency? AUDIENCE: Slower. PROFESSOR: Slower. Because the torque per unit time adds some angular momentum, and this angular momentum adds to the existing angular momentum. But the more angular momentum exists, the smaller is the relative change-- and this slows down the precession. I can give you another example. If you spin a coin-- when does the coin really wobble very, very quickly? Just when it has slowed down and is about to fall. And this rapid wobbling is the precession frequency. So the lower the gyroscope's angular velocity is, the faster is the precession frequency. And fast precession is important for adiabatic following. So in other words, what you saw here in this demonstration was a classical analogy of Majorana flops. Now, if we translate from our classical demonstration to a real atom-- what feature, what parameter characterizing the atom, are we changing? So when I spin the magnet faster, what would that correspond to in atomic properties? AUDIENCE: Higher mu. PROFESSOR: Higher mu? I'm not changing the permanent magnetic moment of the magnet by spinning it faster-- higher angular momentum. But in this equation, what corresponds to higher angular momentum? AUDIENCE: [INAUDIBLE] omega alpha, the precession frequency around the static B field? PROFESSOR: Yes. But what I mean-- so the Larmor frequency, which is the precession frequency, becomes lower. So what becomes lower in the atomic properties? AUDIENCE: Sort of the external magnetic-- PROFESSOR: Pardon? AUDIENCE: h bar? PROFESSOR: No, let's not mess around with h bar here. h bar is given by nature. We can't change that. But, I mean, OK. Multiple choice. B? No, no, no, no. It's g. Yeah, it's g. So what you have seen is a demonstration where, in front of your eyes, I've changed the atomic g factor. And now you see-- let me put it together. What happens is, the mechanical magnet has a given magnetic moment. And if I put much more angular momentum into it, it has-- quantum mechanically speaking-- more intermediate states, because it can change its angular momentum in steps of one. So if I spin it faster, it has many more intermediate states, and each energy separation has become smaller. And smaller energy separation means I'm getting closer to the degeneracy where adiabaticity breaks down. Anyway, think about it. The analogy is really deep. Questions? OK. So that's all I wanted to tell you about magnetic trapping. Collin.
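For concreteness, here is a toy numerical sketch of the classical statement that the precession rate is torque over angular momentum, Omega_p = tau / (I * omega_spin); the numbers are invented purely for scale.

```python
import numpy as np

# Classical gyroscope analogy (a sketch; all numbers are made up for scale):
# precession frequency = torque / angular momentum = tau / (I * omega_spin),
# so spinning the magnet FASTER makes the precession SLOWER.
I = 1e-7          # moment of inertia of the floating magnet, kg m^2 (assumed)
tau = 1e-4        # magnetic torque scale mu*B for a tipped dipole, N m (assumed)

for f_spin in [10, 30, 100, 300]:                 # spin frequency in Hz
    omega_spin = 2 * np.pi * f_spin
    omega_prec = tau / (I * omega_spin)           # precession (adiabatic-following) rate
    print(f"spin {f_spin:4d} Hz  ->  precession {omega_prec/(2*np.pi):7.1f} Hz")
# Adiabatic following requires the precession to stay fast compared with the
# trap motion; too much spin angular momentum therefore destabilizes the trap.
```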
AUDIENCE: When you increase the angular momentum, you don't necessarily change the spacing between the levels, though. PROFESSOR: No. The energy levels: when the magnet is aligned, it has an energy of minus the absolute value of mu times B; anti-aligned, it has plus the absolute value of mu times B. And mu is simply the magnetic moment of the permanent magnet. So I go from here to there, and the number of energy levels in between is the total angular momentum divided by h bar. So when I give it more angular momentum, in one energy level-- in one transition-- there is less energy which will be released. And it is actually, you can say, the beat note between energy levels-- the difference between adjacent energy levels-- which is the Larmor frequency. So the precession, quantum mechanically, is the energy difference between adjacent energy levels. And if you give it more angular momentum, this energy difference becomes smaller. And this slow precession frequency-- this small energy difference-- is bad for adiabaticity. Yes. AUDIENCE: Does that have much of an effect on f equals 1 and f equals 2? Do you get more [INAUDIBLE] if f equals 2? PROFESSOR: Yes. In f equals 2, we have twice the magnetic moment of f equals 1. But in f equals 2, we have five levels; in f equals 1, we have three levels. So the-- AUDIENCE: Is it that significant of a difference? PROFESSOR: So I think it just cancels out. The g factor-- I showed you the formula. What matters really is the g factor. And the g factor in f equals 2 is 1/2; in f equals 1, it's minus 1/2. What happens is the magnetic moment in f equals 2 is larger, because everything is stretched-- all angular momenta are aligned. But the multiplicity-- you have five levels versus three levels-- and the two effects just cancel. Other questions? Yes. AUDIENCE: What breaks the system when the magnet spinning gets slower and slower? Now we know why it destabilizes when you spin it too fast. But if you don't spin it at all, it also floats, right? PROFESSOR: OK, what happens is-- yes, if I stop spinning it, it will no longer work. So what we have is a hierarchy of frequencies. The fastest frequency has to be the spinning frequency. Then comes the precession frequency, and then the trapping frequency. The precession frequency goes as one over the spinning frequency, so you want the precession frequency to lie between the spinning frequency and the trapping frequency. And if you take the spinning to lower and lower values, you would violate that hierarchy. Yes. OK, evaporative cooling. Evaporative cooling is a powerful cooling scheme to reach nanokelvin temperatures. Actually, I forgot to update this slide. I wanted to say, this is so far the only technique to reach quantum degeneracy for bosons and fermions. Very recently, people have demonstrated laser cooling of atomic strontium to quantum degeneracy. But if you read the paper, it was laser cooling aided not by evaporative cooling, but by collisional redistribution of atoms. It's like evaporation-- read the paper. [LAUGHTER] The scheme only worked because of collisions: you cooled one region, and another region was cooled by collisions. And this brings you pretty much back to evaporation. Anyway, what I want to say is, there is a small footnote here. The field is evolving. You can now find papers on laser cooling to BEC.
But if you read the paper carefully, or if you talk to me, I will tell you that there were still collisions necessary-- the same kind of collisions which drive evaporative cooling. But before we go into an expert discussion about variants of evaporative cooling, I should first tell you what evaporative cooling is. But somebody raised his hand. AUDIENCE: Oh, no, I just [INAUDIBLE]. AUDIENCE: You wrote the paper on [INAUDIBLE], for example, that if your quantum degeneracy [INAUDIBLE]. And that would be just purely laser cooling, right? PROFESSOR: Sub-recoil cooling-- any form of laser cooling to sub-recoil temperatures-- was not compatible with high atomic densities. It only worked at such low densities that the atoms stayed far away from quantum degeneracy. So nanokelvin, yes-- temperatures in the nanokelvin range have been reached by laser cooling, but not at sufficiently high densities. High density causes collisions, and those collisions screw up laser cooling so that it doesn't work anymore. The only technique which can reach nanokelvin temperatures at sufficiently high density is evaporative cooling. And that applies to strontium: strontium was laser cooled at low density, and then the low temperature was collisionally transferred to a high density region. So you can say, in a way, it's an oddity of nature that we are now using quantum gases to reveal new features of quantum physics with ultra-cold atoms, but the cooling technique which gets us into the quantum degenerate regime is pretty much classical. So this is a cartoon picture of how you could think evaporative cooling works. You have a thermal distribution. You remove the high energy tail of the thermal distribution. And then, you allow the distribution to relax. And it will relax to a Boltzmann distribution which is shifted a little bit towards the cold-- the low energy-- side compared to the original dashed distribution. And if you do it again and again and again, you wind up with a distribution of atoms which is colder and colder, because every time you axe away the high energy tail, you remove atoms which have, on average, more than the average energy. And therefore, the average energy per atom drops and drops and drops. Of course, the atom number also drops rapidly. And I was only able to draw it in this way because-- I forget what the normalization here is in the plot. AUDIENCE: What is n? PROFESSOR: n is the number of steps. So after 25 removals of the high energy tail, I'm here. And after 50, I have a very, very narrow distribution. So this already tells you something very powerful, which people were not fully aware of before evaporative cooling of atoms was invented: it doesn't take so long. It doesn't take so many steps. It takes 50 rethermalization steps, and each rethermalization takes two or three elastic collisions. So what this already demonstrates is that if you can keep your atoms and evaporatively cool by removing the high energy atoms, after a time which corresponds to a few hundred elastic collision times-- just a few hundred collisions-- you can go way down in temperature. I actually used this cartoon picture to develop a mathematical model for evaporative cooling, which is still the simplest model for evaporative cooling you can find in the literature. So in any event, if you think in that way-- every time you remove a few percent of the atoms-- you realize that you should think in a logarithmic way.
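The cartoon can be simulated directly. A minimal sketch (my own toy Monte Carlo, treating only the kinetic energies of a thermal gas, with k_B = 1): truncate at eta kT, rethermalize to the new mean energy, repeat.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cartoon evaporation (a sketch of the picture, not of the full kinetic model):
# sample thermal energies, axe away everything above eta*kT, let the rest
# rethermalize to a Boltzmann distribution with the new mean energy, repeat.
N, T, eta = 1_000_000, 1.0, 4.0
for step in range(51):
    E = rng.gamma(shape=1.5, scale=T, size=N)   # 3D thermal energies, <E> = (3/2) k T
    keep = E < eta * T                          # truncate at eta * kT
    N = keep.sum()
    T = E[keep].mean() / 1.5                    # rethermalize: new T from the new mean
    if step % 10 == 0:
        print(f"step {step:2d}: N = {N:8d},  T = {T:.3f}")
# A few dozen truncation/rethermalization cycles cut T by a large factor while
# losing only a modest fraction of the atoms in each step.
```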
Every time you lose a certain percentage of your atoms, you decrease the temperature by a certain percentage. So in the end, whether you think in discrete steps or continuously as a function of time, things should happen exponentially. So if you want to characterize what happens in this system, we should correlate the percentage of temperature change with the percentage of change in number. And here we have a coefficient, and this coefficient gives us the power law for how temperature and number are related. Or, mathematically: alpha, which characterizes how much cooling you get for a given loss in atom number, is the logarithmic derivative of temperature with respect to number. All other quantities also scale as power laws of the number. Let me just assume we are in a power-law potential which goes as r to the power d over delta, where d is the number of dimensions. So if you have a harmonic oscillator, it's r squared; if you have a linear trap, it's r to the one. For reasons of simplicity, I took d, the number of dimensions, out-- you choose delta to get, in 1D, 2D, or 3D, a harmonic oscillator, a linear potential, or whatever. OK. Because of equipartition, the temperature, which is a measure of the kinetic energy, is related to the potential energy, and the potential energy goes as r to the power d over delta. So therefore, the size of the atom cloud in the trap scales with temperature as a power law. But if the temperature scales as a power law of the number, then the volume of the cloud, the temperature-- everything-- scales with the number of atoms to some exponent. So everything is exponential, according to power laws. So I mentioned already the volume, and what I've shown here are just the parameters which determine all the exponents. The volume goes with the product delta alpha. The density goes with the corresponding exponent. The phase space density is density over temperature to the 3/2. The collision rate is n sigma v. So everything scales with N to some exponent, and all the coefficients are given here. So the question is, what is alpha? Delta is set by our trapping potential; once we know alpha, we have a clear prediction of what evaporative cooling can do for us. Well, alpha, remember, told us how much we lower the temperature or energy when we lose a certain number of atoms. So all we have to figure out is: when we evaporate an atom, how much energy does it take with it? And therefore, alpha will be determined by what some people call the knife edge-- where you truncate the trap, at eta kT. I showed you a cartoon where I used the axe to chop off the tail of the Maxwell-Boltzmann distribution at 4 kT; so eta would have been 4. So this is what we control experimentally: at what energy do we allow atoms to leak out of the trap? All right. So if you set a threshold of, let's say, 4 kT, then 4 kT is the minimum energy for atoms to leak out. But some will be a little bit faster. In other words, they are not creeping over the edge-- they are jumping, zipping, over the edge. But because everything is thermal, this extra energy is on the order of kT, or 1/2 kT, and in a first analysis, we can neglect it. So in other words, we can say that each atom, when it escapes, takes away the energy eta kT. And eta is the famous eta parameter, which we determine experimentally when we evaporate. Any questions so far? Right now, I've pretty much gone through definitions. And now, I simply look at energy conservation. So now let's look at what the change of energy is during evaporation.
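To keep the bookkeeping straight, here is a small sketch (my own summary of the power-law relations just described, with k_B = 1) that turns alpha and delta into the scaling exponents for temperature, volume, density, phase-space density, and collision rate.

```python
# Scaling exponents for evaporative cooling in a potential U ~ r^(d/delta)
# (a sketch following the lecture's power-law bookkeeping; k_B = 1).
# Given T ~ N^alpha, everything else follows:
def scaling_exponents(alpha, delta):
    return {
        "temperature T":        alpha,
        "volume V":             delta * alpha,              # V ~ T^delta
        "density n = N/V":      1 - delta * alpha,
        "phase-space density":  1 - alpha * (delta + 3/2),  # D ~ n / T^(3/2)
        "collision rate n*v":   1 - alpha * (delta - 1/2),  # v ~ T^(1/2)
    }

# Example: 3D harmonic trap (delta = 3/2) and a decent evaporation efficiency.
for name, exp in scaling_exponents(alpha=1.0, delta=1.5).items():
    print(f"{name:22s} ~ N^{exp:+.2f}")
```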
Well, for N atoms, the kinetic energy is 3/2 N kT. And the potential energy-- for a harmonic oscillator, it would be the same 3/2 N kT; for another power-law potential, this is where we have to introduce the delta parameter, which defines the power law of the trapping potential. In total, the energy is (delta plus 3/2) N kT. So that means the following. This describes the total energy. After an evaporation step, delta N is negative-- I've lost some atoms-- and delta T is negative-- I am now at a lower temperature and a lower atom number. And the difference is simply the energy, eta kT per atom, taken away by the atoms which have evaporated. So with that, by just rewriting this, I get a result for the alpha coefficient-- the alpha coefficient which tells us what percentage change in temperature we get when we lose a certain percentage of the atoms. A 1% drop in the number of atoms gives alpha percent change in temperature. And this is the alpha coefficient-- so I have an analytic expression for it. And sure, you realize: if you put your cut eta not at high energy-- if you cut at low energy-- then your alpha coefficient can even turn negative, because then the atoms which evaporate do not carry away more than the average energy. But actually, then the model breaks down. OK. So alpha characterizes how much more than the average energy is removed by escaping atoms. So things are very simple once we know the threshold of the trap, eta kT-- the energy at which we let atoms leak out. We have a complete description of what the energy, the phase space density, and so on are after we've lost a certain number of atoms. But you realize-- at least those of you who do experiments-- that something is missing here. That sounds almost too ideal. We just evaporate, and we can freely pick the energy threshold. The experiment is more constrained, and we have to work harder to get into this good regime of evaporation. So let me introduce the experimental constraint, which is very important, by asking the question: how efficient can evaporative cooling be? Based on the idealized model which I've presented to you so far, what is the highest efficiency of evaporation you can imagine? Collin. AUDIENCE: I guess in principle, you could remove just one atom. Then you'd save [INAUDIBLE]. PROFESSOR: Yes. You remove one atom. You wait until you have one atom which has pretty much all the energy of the system. One atom evaporates, and your whole system is as cold as you want to have it. Of course, you are all laughing, because this would take much longer than the dwell time of a graduate student at MIT. In other words, you realize: time is at a premium. And it's not just the dwell time of a graduate student-- it's not your patience. What happens is, in a real experiment, there are losses. There is some form of technical heating. Since you don't have a perfect vacuum, residual gas collisions cause losses. And it's clear you have a time budget which is set by losses: either you evaporate within this time budget, or you've lost your atoms for other reasons. So that's what we want to bring in now. We can't make a realistic model of evaporative cooling without putting in the constraint of time. And the time constraint is usually determined by losses-- by unavoidable atom losses. So now we want to understand: what is the speed of evaporation? So we assume we truncate-- we remove an amount dN of atoms above this threshold.
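The energy-balance argument can be written out in a few lines; the following sketch encodes it, assuming, as above, E = (delta + 3/2) N k T and that each escaping atom removes eta k T.

```python
# Alpha coefficient from energy conservation (a sketch; k_B = 1):
#   E = (delta + 3/2) N T,   dE = eta * T * dN
#   => (delta + 3/2)(T dN + N dT) = eta T dN
#   => dT/T = alpha * dN/N   with   alpha = eta/(delta + 3/2) - 1
def alpha(eta, delta):
    return eta / (delta + 1.5) - 1.0

for eta in [2, 4, 6, 8, 10]:
    print(f"eta = {eta:2d}:  alpha = {alpha(eta, delta=1.5):+.2f}  (3D harmonic trap)")
# For eta below delta + 3/2, the evaporated atoms carry less than the average
# energy and alpha turns negative -- exactly the breakdown noted above.
```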
And then the question is, how fast can we do it again? How long does it take for collisions to replace the tail of the Maxwell-Boltzmann distribution? But now, you can make an analytic model. I was very pleased when I saw that it is so easy to actually get a precise analytic model of that, at least in the asymptotic limit that eta is very high. There is a certain number of collisions which replenish the tail, and you want to know how fast that happens. And now you can use detailed balance. In an equilibrium situation, those atoms in the tail will collide with the bulk of the distribution. And because they are in a highly improbable state, most of the outcomes of a collision will put those energetic atoms back into the bulk of the Maxwell-Boltzmann distribution. So therefore, the number of particles which arrive in this tail in equilibrium is identical to the number of particles which leave this tail. And so all we have to do is calculate how many such collisions happen. This is an expression for the fraction of atoms, with the exponential Boltzmann factor, which you can find in this tail. And those atoms collide with a velocity which is not the thermal velocity; the velocity is larger by square root eta, because those atoms are fast. So by simply multiplying the fraction of the atoms with the collision rate, we find how many atoms per unit time are removed from this tail. And detailed balance means the same number of atoms is replenished into the tail. So if you now switch to a continuous model of evaporation, where we constantly evaporate the atoms which are produced through elastic collisions with an energy larger than eta kT, then this here is our rate of evaporation. Since we want to think in terms of time constants, this rate of evaporation is described by a time constant for evaporation. And this time constant is now expressed here by our experimental control parameter eta. Nancy. AUDIENCE: So when we are saying that the collisions are putting atoms back into the lower velocity states, are we saying that the collisions are more defined than the Maxwell-Boltzmann distribution? So when you let the system [INAUDIBLE], it automatically goes into a new Maxwell-Boltzmann distribution, and that's what determines the tail. But then we are saying that the collisions are putting the atoms in the tail back into the lower velocities. So the collisions are not [INAUDIBLE] Maxwell-Boltzmann distribution? PROFESSOR: No. We assume here that the truncation is only weakly perturbing the Maxwell-Boltzmann distribution. And the easiest way to figure out how many atoms are produced per unit time is this: if atoms in this truncated Maxwell-Boltzmann distribution collide, they produce, with a certain time constant, atoms which will populate the tail. And I can estimate the number of atoms which are fed per unit time into the tail by assuming I do not have a truncated Boltzmann distribution but a full Boltzmann distribution, and simply calculating the total elastic collision rate of the atoms in this tail. So in other words, I want to know how many atoms are fed into the tail. I get this number of collisions by saying that, in detailed balance, this number of collisions is the same as the number of collisions in the full Boltzmann distribution which goes backward. And with that argument, I can immediately write down an expression for the collision rate which produces high energy atoms. Think about it. It's subtle, but it's fairly straightforward.
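As a rough numerical illustration of why high eta is expensive, here is the fraction of a full Maxwell-Boltzmann gas with energy above eta kT. This is a sketch for a free 3D gas; in a trap, the density of states changes the prefactor, but the dominant e^(-eta) factor survives.

```python
import math

def tail_fraction(eta):
    """P(E > eta*kT) for the 3D Maxwell-Boltzmann energy distribution:
    2*sqrt(eta/pi)*exp(-eta) + erfc(sqrt(eta)); ~ exp(-eta) for large eta."""
    return 2 * math.sqrt(eta / math.pi) * math.exp(-eta) + math.erfc(math.sqrt(eta))

print(tail_fraction(6.0))  # ~0.007: less than 1% of the atoms sit above 6 kT
```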
I make the assumption here that eta is sufficiently large that I can use properties of the equilibrium Maxwell-Boltzmann distribution to estimate those rate constants. And actually, when I found the analytic expression, I could compare to theory which was much more complicated and used truncated Boltzmann distributions. And in the asymptotic limit of large eta, I was in full agreement with those other results. So yes, we now have an expression for the time constant of evaporation--how fast evaporation happens because of elastic collisions which populate the high energy tail. But usually, when you have a time constant, you want to express it by another physical time. And the physical time which characterizes a gas is the time between elastic collisions per atom in the gas. So therefore, I want to express the time constant for evaporation--the rate at which atoms are produced in the high energy tail--as a ratio lambda times the time between elastic collisions. And we realize, of course, that the atoms which have enough energy to evaporate are not produced in every elastic collision. Actually, there is an exponential factor e to the eta, because it's only a small part of those elastic collisions which produces a high energy atom which can escape. OK, so with that, we know how many atoms we lose by evaporation. And this is our expression. But now we have a complete, fairly realistic, but wonderful toy model to discuss all aspects of evaporative cooling. We have our control parameter eta, which sets the threshold at which atoms can evaporate. And this parameter eta determines the two relevant quantities--alpha, which is the efficiency of evaporation, and lambda, which is the speed of evaporation. If we set eta very high, following Collin's suggestion, we can put it so high that one lost atom, one evaporated atom, can cool all the other atoms to very low temperature. But we know that this would take too much time. So in other words, we have a compromise. If you set eta very high, each atom which evaporates provides a lot of cooling power. But high eta means we have an exponential slowdown in the evaporation rate. And we have to wait longer and longer, or we never get into evaporation because inelastic collisions and losses have taken their toll. So therefore, it seems clear that this interplay between efficiency and speed is asking for a compromise. And this is what we have to realize in the experimental realization. OK, there is one addition we have to make to the model. And this is the following. We have to introduce losses--losses which do not come from evaporation. These can be losses due to background gas collisions, or losses due to inelastic collisions. So let me just show you how I introduce that. I told you that everything is a logarithmic derivative: the logarithmic change of any quantity goes with the logarithmic change in the atom number times a coefficient. And for reasons which become clear in a moment, I'm now not looking at the temperature, or the density, or the phase space density. I'm really interested in the collision rate, because the collision rate is what drives evaporative cooling. As long as we have collisions, the cooling can go on. So I want to focus now on how the collision rate changes during evaporation. And during evaporation, what we are changing is the number of atoms, because we evaporate. So by just putting together everything we have said, I have this expression.
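A sketch of this speed parameter. The asymptotic large-eta form lambda ~ sqrt(2) e^eta / eta is the one I know from the evaporative cooling literature; treat the prefactor as approximate, since the lecture does not spell it out:

```python
import math

def lambda_ratio(eta):
    """tau_evap / tau_elastic: elastic collisions needed per evaporated atom.
    Asymptotic large-eta estimate; note the dominant exp(eta) slowdown."""
    return math.sqrt(2.0) * math.exp(eta) / eta

for eta in (4, 6, 8):
    print(eta, round(lambda_ratio(eta)))  # grows from ~19 to ~527
```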
I assume that the number of atoms changes as a function of time with the rate at which high energy atoms are produced. The time constant for this was lambda--the ratio between the evaporation time and the elastic collision time--times the elastic collision time. So this is just rewriting what we have discussed so far. But now I introduce another loss rate, due to inelastic collisions, technical problems, and such, which has a time constant of tau loss. And I now introduce the famous ratio of good to bad collisions. Good collisions are elastic collisions which drive evaporation. Bad collisions are collisions where atoms are just lost, due to technical reasons and inelastic collisions. So if I define this ratio of good to bad collisions, I have now this equation, which tells me how the collision rate changes with time. Just wondering, is a dt missing here or not? Yes, there should be a dt. So this tells me how the collision rate changes as a function of time. But now remember, this is not the derivative of the collision rate; it's the derivative of the collision rate over the collision rate. It's a logarithmic derivative. So what we are talking about here--please add the dt to it--is whether the collision rate is exponentially growing, when this coefficient is positive, or exponentially decaying, when this coefficient is negative. So you realize that we obtain a threshold condition when this coefficient is larger or smaller than zero. And this is called the threshold for runaway evaporation. So let me just summarize the physics of power laws. The physics of exponential increase and decrease actually means that in the experimental situation we are often dealing with a threshold. If you're above threshold and you get evaporative cooling going, you have a positive exponent, and it will go faster, and faster, and faster. If this exponent is negative, you have slowed-down evaporation. You can evaporate a little bit, but it will pretty much come to a standstill. So this is why, quite often, the experimental realization of evaporative cooling requires putting enough atoms from the laser cooling stage, at sufficiently low temperature, into a magnetic trap, so that you fulfill the threshold condition for runaway evaporation. Any questions about that? So in other words, we have found here an expression for the threshold of runaway evaporation. And it tells us that we need a minimum ratio of good to bad collisions. We may need 100 elastic collisions until we have one inelastic collision; then our ratio is 100. And we will see in a minute whether, if this ratio is 100, we can arrange the right-hand side in such a way that we get runaway evaporation. OK, so the left-hand side, the ratio of good to bad collisions, is determined, for example, by how good our vacuum is. What determines the right-hand side? Well, we talked about it. Delta is our trapping potential, linear or quadratic. Alpha and lambda depend on the threshold eta--how aggressive we are in setting an energy threshold for the evaporating atoms. And what I'm showing you here is the condition. I'm varying the threshold eta, and as I vary the parameter eta, I call this expression R min; my ratio of good to bad collisions has to be better than that. So if you just look at the solid curve for a parabolic trap, this shows you that you have to pick your parameter eta between five and seven. If you pick eta too high, evaporation is too slow.
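To see how the pieces combine, here is a sketch of the threshold condition in this toy model. I assemble R_min from the alpha and lambda expressions sketched above and from the scaling of the collision rate with N; the curves plotted in the lecture come from a more careful treatment, so the exact numbers and the optimal eta will differ somewhat.

```python
import math

def r_min(eta, delta):
    """Minimum ratio of good (elastic) to bad (loss) collisions for runaway
    evaporation.  With the collision rate k ~ N^(1 - alpha*(delta - 1/2)),
    requiring d(ln k)/dt > 0 gives R_min = lambda / (alpha*(delta-1/2) - 1)."""
    alpha = eta / (delta + 1.5) - 1.0
    lam = math.sqrt(2.0) * math.exp(eta) / eta
    gain = alpha * (delta - 0.5) - 1.0
    return lam / gain if gain > 0 else float("inf")  # inf: no runaway possible

for eta in range(6, 11):
    # parabolic trap (delta = 3/2) vs linear trap (delta = 3)
    print(eta, r_min(eta, 1.5), r_min(eta, 3.0))
```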
And you need a much better ratio of good to bad collisions. In other words, you need a better vacuum, for instance. But if you put eta too low, you're not cooling enough, because you're cutting too deep into the distribution. You also realize that if you take a linear power law, like a quadrupole trap, you get the dashed line. And the dashed line has a much lower value of R min. In other words, if your vacuum is not good enough and you have losses, in a linear trap you can still overcome it by picking your eta in this regime, whereas for a parabolic trap, you need at least two or three times better vacuum to get into the runaway regime. Anyway, this is how you can look at those equations and figure out what is needed to get into evaporative cooling. Collin. AUDIENCE: Do you need to be in the runaway regime to get BEC? PROFESSOR: Not necessarily. Some early experiments--I think the first experiment of Eric Cornell--never saw this speedup, I think. They had sort of constant evaporation. Maybe the cooling rate was even slightly going down. So you don't necessarily need the exponential speedup. But you have to be in a regime where, at least if you're not gaining speed, you're not losing too much speed. But, yeah. I mean, you can just take the equations and analyze them and figure out if you're in a favorable regime. And ultimately, it's fairly easy to integrate those equations as a function of time and have completely realistic models. But what I presented to you here is a simple analytic model. And I used the criterion for runaway evaporation to discuss how you have to pick your truncation parameter, and what happens if you have a different trapping potential. So based on those models, you will find out that if you truncate at an eta parameter of six, every truncation of the Maxwell-Boltzmann distribution means about 1% loss in the atom number. And after 600 elastic collision times, you have lowered the atom number by a factor of 100, but your phase space density has increased by six orders of magnitude. And that means, if you lose two orders of magnitude in the number and gain six orders of magnitude in the phase space density, then your gamma factor--which I haven't really defined here, but it's the factor which tells you how the phase space density increases with atom number--is three: every order of magnitude in the number boosts the phase space density by three orders of magnitude. And that's regarded as very favorable. So in other words, we've talked a lot about laser cooling. With the standard laser cooling schemes, you're typically six orders of magnitude away from quantum degeneracy. And this tells you how evaporative cooling can get you there. You should expect to lose approximately a factor of 100 in the number of atoms. And that's what it takes to go from laser cooling to quantum degeneracy. And you can estimate what your time budget is by asking what is the elastic collision time right after laser cooling. If your elastic collision time is two seconds, and it takes 600 collisions to get to BEC, you know that it will take you 20 minutes. And you better work on a vacuum which gives a 20 minute lifetime. Or alternatively, you improve your laser cooling, or you do some adiabatic compression in your magnetic trap, to make sure that your elastic collision time is shorter, so that you can afford 600 collisions within the lifetime given by other parameters. Any questions? OK, so evaporative cooling happens in the everyday world.
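The arithmetic in that budget estimate is worth a quick check (numbers as quoted in the lecture):

```python
import math

gamma = math.log(1e6) / math.log(1e2)  # 6 orders in phase space per 2 in number
t_total_min = 600 * 2.0 / 60           # 600 collision times at 2 s each
print(gamma, t_total_min)              # -> 3.0 "orders per order", 20 minutes
```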
If you have water and you blow at the water, the water evaporates, and by evaporation, the water gets colder. And of course, this process has a lot in common with the evaporative cooling of atoms in an atom trap. But I want to ask you now, why is evaporative cooling more dramatic with atoms? We can get the atoms really cold. But I don't think you've ever seen that you can blow at water and the water freezes. So what is the difference? What is different between evaporation as you encounter it in everyday life, and evaporation in the way I just described it, as we apply it to atoms? AUDIENCE: Your control of eta. Well, in atoms we can really control eta. But by blowing on it, it's just one level we evaporate. PROFESSOR: That's very close. We can pick our eta. But even more so, what is constant? What is the parameter which describes evaporation in water? AUDIENCE: Surface tension. PROFESSOR: Pardon? AUDIENCE: The surface tension? PROFESSOR: Some surface tension. But the surface tension translates into a work function. It tells us, if we have water, what is the energy--the work function--for a water molecule to escape. And this work function, it's an energy, would correspond to eta kT. If the water evaporates, well, the work function stays the same--it's some fraction of an electron volt, whatever it is--but the temperature gets lower. And therefore, the number of water molecules which can evaporate becomes exponentially smaller. I mean, I've sort of lured you into thinking that keeping eta constant is the most natural thing in the world. Yes, it's the most natural thing for us who want efficient evaporation of atoms. But for normal substances, it's the work function which is constant. So therefore, as the system cools, eta becomes larger and larger, and everything comes to a standstill. So we are actively tuning the work function of our system to sustain a high rate of evaporation. How do we define eta? How do we select the energy threshold for evaporation? Well, for many, many years, Bose-Einstein condensation was mainly done in magnetic traps. And there were two methods. One is to just lower the magnetic fields. But lowering the magnetic fields is pretty bad, because it weakens the magnetic trap. And if you weaken the trap, you lower the density. And therefore, just because of that, the elastic collision rate slows down. So what turned out to be by far the superior choice is to remove atoms with RF-induced evaporation. If you have a magnetic trap, you can tune an RF spin-flip transition to a certain frequency. But the frequency depends on magnetic field, and the magnetic field in a trap depends on position. So what you're doing is, you're selecting with your RF frequency a certain point in space where the atoms are transferred by a spin-flip RF transition to an anti-trapped state, and are then ejected from the trap. This is very flexible. You can change the depth of your magnetic trap by just using an RF synthesizer. You can lower the trap depth without weakening the confinement potential. I don't have time to go into details, but what I'm showing here is that there are two regimes when you have this magnetic trap. Red is spin up, blue is spin down. And this is your RF transition. There is a regime where you have strong RF; you should then use dressed energy levels. We've discussed dressed energy levels in the optical domain.
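For orientation, here is a minimal sketch of how the RF frequency sets eta in the simplest two-level picture. The resonance condition ties the knife to a position, and hence to a potential energy; the numbers and the simple depth formula U = h(f_rf - f0) are illustrative assumptions on my part, since the real multi-level case (discussed below) brings in additional Zeeman factors.

```python
h = 6.626e-34   # Planck constant, J*s
kB = 1.381e-23  # Boltzmann constant, J/K

def eta_from_rf(f_rf, f0, T):
    """Truncation parameter eta = trap depth / (kB*T), with the depth set by
    the RF knife frequency relative to the resonance f0 at the trap bottom."""
    return h * (f_rf - f0) / (kB * T)

print(eta_from_rf(f_rf=5e6, f0=1e6, T=30e-6))  # ~6.4 for a 30 microkelvin cloud
```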
These are now dressed energy levels in the RF domain. And there's a wonderful chapter about it in Atom-Photon Interactions. So the dressed energy levels become something like this. So you have a potential which looks like an inverted W, and you really realize this potential. But when the RF is weak, you have a certain probability, when the atoms go back and forth through the transition region, that sometimes they will fall down to the lower state. The diabatic picture looks more like this: you have a little leak, and atoms are trickling down to the non-confining state. Anyway, there are two regimes, and the experiment is usually somewhere in between. It is not necessary--it doesn't pay off--to go to the fully adiabatic limit. So this is how RF evaporation is implemented. But I should say, these days a lot of evaporation is done in optical traps. And in optical traps, the method of choice is that you just ramp down the optical trapping potential. Now, when you ramp down the optical trapping potential--addressing Collin's question--you usually do not get into the runaway regime. Because, something I haven't included: my model assumed we have a trap of constant shape. But if you now add to the model that you are continuously opening up, weakening the trap, you have another effect which makes the exponent for runaway evaporation more and more negative. And I think ultimately, you don't get any runaway evaporation anymore. On the other hand, optical traps are often more tightly confining than magnetic traps, and you have sufficiently high density to begin with. So therefore, you can tolerate a slowdown of evaporation and still reach the destination. But anyway, I'm now getting more and more into technical aspects. I think I've given you the concepts. But let me just flash you one picture. This, of course, assumed an idealized model where we have only two levels, spin up and spin down. You all know that atoms have hyperfine structure. Sodium or rubidium has F equals 2. And if you now draw the dressed levels for five hyperfine levels in an RF field, it looks fairly complicated. But the result is fairly beautiful. When an atom in F equals 2 reaches this point--in the dressed atom picture, the point of evaporation, the point where the atom is in resonance with the RF, is the point where this potential bends down--the atom is adiabatically transferred in a dressed state from MF plus 2 to MF minus 2. So when you evaporate at this point, maybe without noticing it, you actually do a four photon transition in the dressed atom picture. Again, it's an example where, when we teach about the schemes, we can completely neglect hyperfine structure. And it's just wonderful to see that the actual implementation works as well for complicated atoms as it does for our idealized two level atom. Final remark. What is the cooling limit for evaporative cooling? When we talked about laser cooling, I derived for you the Doppler limit. And we talked about even better cooling limits when we discussed sub-Doppler and sub-recoil cooling. So what is the cooling limit for evaporative cooling? Well, the answer is, there is no fundamental limit. There is no h bar or quantized limit there. There is a practical limit, which usually depends on residual heating, on inelastic collisions, and all that. But we have reached with evaporative cooling temperatures as low as 400 picokelvin. And the limit was not set by anything fundamental.
It was more set by our patience--the lower the temperature, the slower the process becomes--and also by the sensitivity to technical noise and technical perturbations. Just a final comment, which is a segue to what we hopefully do on Friday. For evaporative cooling, I've always assumed that the energy spectrum in the trap is continuous. In other words, if you have an harmonic oscillator trapping potential, I've neglected the discrete level structure. And this is an excellent approximation, because many, many atom traps have trap frequencies of a few hertz or kilohertz, and at very low temperature, even at nanokelvin temperatures, you populate many levels. When it comes to the discrete nature of trapping levels, we should use a quantum description of the motion of atoms in the trap. And this is the regime of sideband cooling. Sideband cooling is much more important for ions, charged particles, than for neutral atoms. So therefore, we will discuss sideband cooling on Friday, when we discuss ion traps. Any questions about evaporative cooling? AUDIENCE: This may be technical, but once you get to temperatures of picokelvins, how do you maintain--do you keep cooling to maintain that temperature? PROFESSOR: Yeah, usually when we reach very low temperature, there is some form of technical heating. And we've often seen, when we prepare a cloud at nanokelvin temperature, that we can only keep it when we allow a little bit of continued evaporation. We've sometimes seen that when we just keep the atoms in an atom trap, they slowly heat up. But if we keep on evaporating them at a very slow rate, we can maintain low temperatures for much longer. Other questions? OK. It's already pretty late; we just have 10 or 15 minutes left. I was hoping that I could start earlier today to talk about Bose gases. Today, I was hoping to talk mainly about Bose gases, on Wednesday about Fermi gases, on Friday about ion traps. We are sort of half an hour behind schedule, and I have to figure out how I make up for it. But let me just start now with Bose gases. I've taught this subject several times at summer schools, and what I give you here is a compressed version of it. I will not talk too much about the ideal Bose gas, because most of you have seen that in statistical mechanics. And I will also omit superfluid hydrodynamics, because, well, it's interesting, but it's more special than the other topics. So what I want to cover here is to give you the main ingredients for the description of weakly interacting homogeneous and inhomogeneous Bose gases, and then, as a second part, talk about the superfluid-to-Mott-insulator transition. Well, for those of you who are not working with ultracold Bose gases, maybe some of it sounds like jargon. But there is one overarching concept which I want to emphasize in the theoretical description, and this is some form of mean field approximation. We go through the weakly interacting Bose gas, then we go through superfluid Fermi gases, and we discuss the superfluid-to-Mott-insulator transition. One theme will be: if you have a product of four operators--and that's what you get when you have interacting atoms--you cannot solve anything. And you need a method to go from the product of four operators to a product of two operators. And then you solve a quadratic equation. And the step to go from four to two is called a mean field approximation.
And I want to show you three kinds of mean field approximations: the one you have often seen, for the weakly interacting Bose-Einstein condensate. But then I want to show you a mean field approximation for fermions, where it is a pairing field which is the mean field, not your usual mean field energy. And before that even, I want to talk about the superfluid-to-Mott-insulator transition, where we do a very different mean field approximation. So maybe, by seeing that scheme repeated, you'll realize a little bit how theory is done, and how you can deal with simple Hamiltonians which you can't solve because they contain products of four operators. So there is sort of a [INAUDIBLE] through those three chapters, showing you how you can do interesting many body physics by making the right approximation. It also teaches you how many body physics is done. You have a Hamiltonian which you can't solve. And you have to guess the solution and put half of it into an approximation. And once you've done the right approximation, the rest becomes [INAUDIBLE]. So with that spirit, I want to go with you through the Bogoliubov approximation for the weakly interacting Bose gas. I don't think I have to say too much about the ideal Bose-Einstein condensate, because it's dealt with in pretty much all undergraduate and graduate textbooks. There are just two things to remember in terms of a system description. First, whether Bose-Einstein condensation occurs or not depends on the density of states. And that depends on dimension and confinement. So the fact that you are in a trap changes the density of states, and it changes the criterion for Bose-Einstein condensation. But then, in terms of a system description, if you want to describe your Bose gas and its properties, there are aspects of Bose-Einstein condensation which are pretty close to the ideal gas, and others which require many body physics. What is always close to the ideal gas is the transition temperature and the condensate fraction, because in almost all experiments, when you reach Bose-Einstein condensation, your gas is, to a good approximation, non-interacting. Because kT at the transition temperature is much larger than the interaction energy in the gas. So before condensation, or at the onset of condensation, your gas is like an ideal gas, and you will only find a few percent corrections to the formulas for the ideal gas. So therefore, if you want to know at what temperature you reach the transition point, or, if you're below the transition point--say, 50% below the transition temperature--what is your condensate fraction, you can simply look up the original formulas by Einstein, and they give you a reasonably accurate answer. However, for the condensed gas, for the fraction of atoms which are Bose condensed--those are atoms in one quantum state--there is no other energy scale than the interaction energy between the atoms. So therefore, for the uncondensed gas, you can get away with an ideal gas approximation; for the condensate itself, we have to put in the many body physics of the interactions. Well, this slide shows shadow images of expanding Bose-Einstein condensates. We do evaporative cooling in a magnetic trap. You see the shadow picture of the thermal cloud. And you see the onset of Bose-Einstein condensation as the sudden appearance of--it looks like the pit in a cherry--a colder, more confined distribution of atoms in the Bose condensed state. I think it should play again.
So this gas is pretty much described by an ideal gas where you can put in the g2 function, the onset of quantum degeneracy. So it looks almost like an ideal gas. But in an ideal gas, the Bose-Einstein condensate should be in the lowest energy state of the trap, and I will show you in the next few minutes, and on Wednesday, that we are far away from that. So interactions are negligible for the normal component, for the thermal cloud, but are very important for the Bose condensed gas. So when you take cross sections through this cloud, or 2D pictures, you see how the broad thermal distribution turns into a Bose-Einstein condensate. And if you look at such a profile, you can clearly see the normal component. The normal component can, with good accuracy, be fitted by a non-interacting model, whereas the central peak is the Bose-Einstein condensate, which requires the description I want to show you now. The condensate fraction is shown here. And again, with fairly good accuracy, it follows the description of Einstein, because the condensate fraction is not a property of the condensate; 1 minus the condensate fraction is a property of the thermal gas. In the spirit of Einstein, Einstein calculated how many atoms can be in the thermal component at a given temperature. So Einstein called it the saturated gas. At a given temperature, you can only keep in thermal equilibrium a certain number of atoms in your thermal gas. If you have more atoms, they condense into the ground state. This is the statistical description. So what I'm plotting here, the condensate fraction, is actually a property of the normal gas. It shows that the normal gas is saturated--it can only hold a certain number of atoms--and the remainder of the atoms has to be in the condensate. AUDIENCE: I thought in the three dimensional gas, it was three halves. PROFESSOR: Yes, but this is an harmonic trap. And the harmonic trap changes the density of states. Let's now talk about the homogeneous Bose-Einstein condensate and weak interactions. The Hamiltonian for the interactions is something we will write down many, many times. The general way to write down interactions between two particles is, you annihilate particles at momenta k and p, and then they reappear at different momenta: one momentum gets upshifted by q, and one momentum gets downshifted by q. This guarantees momentum conservation. So what I'm showing you here is the elementary process of scattering: two particles with momenta k and p scatter, disappear--that's why we have the annihilation operators--and then they reappear at new momenta. That's the most general form of a binary interaction. And now we have to make approximations. Nobody can solve this Hamiltonian in the most general way. So one approximation we make is that, since the range of the atomic interactions is much smaller than the distance between atoms or the thermal de Broglie wavelength, we approximate the potential by a short range potential, a delta function. Well, the Fourier transform of a delta function is constant. And that means that in momentum space, this momentum dependent matrix element is just constant. So therefore, we can approximate the Hamiltonian by a constant interaction parameter. And then we have the sum over all these creation and annihilation operators. I don't want to go into the details of low energy scattering physics, but it is most convenient to describe this parameter by u naught, which is the Fourier transform of the interaction potential.
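For the saturated-gas statement above, the textbook result for a 3D harmonic trap is compact enough to check numerically. A sketch using standard formulas, not specific to this experiment:

```python
# Condensate fraction in a 3D harmonic trap: the thermal cloud saturates at
# N_th = N * (T/Tc)^3, so N0/N = 1 - (T/Tc)^3.  (In a homogeneous box the
# exponent is 3/2, which is the "three halves" the audience question refers to.)
def condensate_fraction(T_over_Tc):
    return max(0.0, 1.0 - T_over_Tc ** 3)

print(condensate_fraction(0.5))  # 50% below Tc: ~88% of the atoms are condensed
```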
Or, very often, we parametrize it with the s-wave scattering length, which is the only relevant parameter for elastic collisions at low temperature. So with that, we now have a Hamiltonian which has kinetic energy, and here is the potential energy due to the interaction between the atoms. And we have taken this constant Fourier transform u naught out of the summation. Now, I mentioned already to you that a product of four operators cannot be solved. You need an approximation where you reduce the number of operators from four to two; then you solve a quadratic equation, and the solution is the Bogoliubov solution. So how do we reduce this product of four operators to two? Well, when we have a condensate where many, many atoms are in one quantum state, we can make the Bogoliubov approximation. The Bogoliubov approximation starts from the observation that the creation and annihilation operators in the zero momentum state, the condensate, have the following matrix elements. And now you can see, if N naught, the number of atoms in the condensate, is large, we can neglect the difference between N naught, N naught plus 1, and N naught minus 1. And we simply make the approximation that the operators a naught and a naught dagger are replaced by the square root of N naught. And then, in this sum of quartic terms, we only keep those terms which have at least two occurrences of the index zero, because the terms which have, for instance, no occurrence of the index zero don't get this multiplier N naught. So we do an expansion in powers of N naught, and we stop here. So wherever two of the four operators carry the index zero, we factor out N naught. And then we have the pair products a dagger k a dagger minus k and a k a minus k, or we have the mixed terms a dagger k a k and a dagger minus k a minus k. But this is what we get. The next step is purely technical. We want to get rid of N naught and replace it by N. N is N naught plus the sum of the populations of the other momentum states. And that's now our Hamiltonian, which still looks complicated. But it can immediately be solved, because all it involves is quadratic products of operators. Let me finish this derivation; it takes four to five minutes. I don't think people come in on Mondays--they always come in after us on Wednesdays. Is that correct? So let me just continue. I would like to reach the final result with the Bogoliubov transformation. So what I want to show you is that the moment you have bilinear operators, all you have to do is, in essence, solve a quadratic equation. With all the indices and constants, it looks a little bit complicated. So let me just say that the structure of this Hamiltonian involves sums over a of k and a of minus k; the pairing of k with minus k is, of course, important for momentum conservation. So let me now call a of k simply a, and a of minus k, b. Then this Hamiltonian has the following structure. It has terms a dagger a and b dagger b. But then it has other terms, a dagger b dagger plus b a. Now, let me put it this way. If those terms didn't exist, we would be done. Because a Hamiltonian which has e naught a dagger a is an harmonic oscillator Hamiltonian--it is diagonalized. a and a dagger are just eigenoperators, which create quasi-particles with energy e naught. So if you could eliminate this term, we would be done. So therefore, let's follow Bogoliubov and introduce new operators, alpha and beta, where alpha and beta are linear combinations of a and b.
Or vice versa: a and b are linear combinations of the new operators alpha and beta. And since we have a bosonic system, and we think it's a good idea to keep the system bosonic, we require that those new operators fulfill bosonic commutation relations. Those bosonic commutation relations are fulfilled if u squared minus v squared--u and v being the linear coefficients which express a and b in terms of alpha and beta--is 1. So in other words, u and v are two new parameters, and one condition on u and v is used up to ensure the bosonic character of alpha and beta. But we can have two conditions for two parameters. So what we do is, we just rewrite this Hamiltonian in terms of alpha and beta, and that's what we get: the Hamiltonian stays bilinear under the linear transformation. And now the second condition which we can impose on u and v is that the prefactor of this cross product is zero. So then, by using those two conditions for u and v--you find the equations in many textbooks; I'm not discussing them in great detail here--we have obtained a Hamiltonian of this kind. And this is diagonalized. We know now that alpha and beta create quasi-particles at a certain energy. So now I go back: alpha and beta were expressed by a and b; a was a of k, b was a of minus k. I just go back to the original nomenclature. But we have diagonalized the Hamiltonian. So what we have achieved is that our Hamiltonian is written in the following way. Actually, I've recycled a now: a is no longer a particle in a given momentum state, it's now a quasi-particle operator. So we have now diagonalized the Hamiltonian for the weakly interacting Bose gas. And what we have obtained is the full solution; everything you want to know about this system, we know. And in particular, we know the characteristic excitation energies of the quasi-particles. And this quasi-particle energy gives us the energy as a function of momentum; by replacing the parameter u naught by c--c is the speed of sound--I can rewrite it like this. And you see that it makes sense immediately, because we now have a dispersion relation which, in the limit of high momenta, is just the normal kinetic energy. So there the quasi-particles are free particles, whereas at low momentum, when this term dominates, the dispersion relation is linear. And linear means sound and phonons. Bose-Einstein condensation is a low energy phenomenon, so you should not expect to change any characteristics of high energy quasi-particles. If you kick a particle in the Bose-Einstein condensate with high energy, it flies out as a high energy particle. But a low energy excitation creates sound waves. So this is what we have found, and I will show you on Wednesday how it is observed.
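Here is a minimal numerical sketch of that dispersion relation and its two limits, in units where hbar = m = 1 and with the interaction entering through mu = n u naught (the notation in the code is mine; the structure E(k) = sqrt(eps(eps + 2 mu)) with eps = k^2/2 is the standard Bogoliubov result):

```python
import math

def bogoliubov_energy(k, mu):
    """Quasi-particle energy E(k) = sqrt(eps*(eps + 2*mu)), eps = k^2/2."""
    eps = 0.5 * k * k
    return math.sqrt(eps * (eps + 2.0 * mu))

mu = 1.0
c = math.sqrt(mu)                                  # speed of sound in these units
print(bogoliubov_energy(0.01, mu), c * 0.01)       # low k:  E ~ c*k   (phonons)
print(bogoliubov_energy(10.0, mu), 0.5 * 10.0**2)  # high k: E ~ k^2/2 (free particle)
```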
MIT_8422_Atomic_and_Optical_Physics_II_Spring_2013
4_Nonclassical_light_squeezing_Part_1.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Good afternoon. We continue our discussion of quantum states of light. We talked at length about coherent states--and when we talk about quantum states of light, each mode of the electromagnetic field is an harmonic oscillator. We also encountered, naturally, the number states. And we realized--yesterday, actually, in the last class--that those number states have non-classical properties. For instance, they have a g2 function, the second order correlation function, which is smaller than 1, which is impossible for classical light, as you're proving in one of your homework assignments. So at that point, we have encountered coherent states, which are as close as possible to classical states, and we have found the number states as non-classical states. Well, are there other interesting states? I wouldn't ask you this question if the answer were not yes, and this is what we want to discuss today. We want to talk about non-classical states of light which we can engineer in the laboratory by sending laser light through nonlinear crystals. Those go by the name of squeezed states. Just to give you the cartoon picture: in our two-dimensional diagram with the quasi-probabilities, we have coherent states, where the area of this disk, delta x delta p, is h-bar over two. It's uncertainty limited. What can we do now? We cannot go beyond this; this is the fundamental limit of quantum physics. However, we can take this circle and we can squeeze it. We can squeeze it horizontally, we can squeeze it into an elongated vertical shape, or we can squeeze it at any angle. That's what we call squeezed states. And those states have non-classical properties. They are important for metrology, they are important for teleportation. There are lots and lots of reasons why you want to know about them. But again, as so often, I feel I cannot convey to you the excitement of doing squeezing just in the quantum domain--many, many physicists hear about squeezing only in the quantum domain. So I want to start with classical squeezing. I will actually show you a video of an experiment on classical squeezing; you can see squeezing with your own eyes. This is just to set the stage, to also get a feel for what squeezing is. And then we'll do quantum mechanical squeezing. But maybe--tongue in cheek--I would say, since classical harmonic oscillators and quantum harmonic oscillators have a lot in common, the step from classical squeezing to quantum mechanical squeezing is actually rather small. It's nice to squeeze light. It's nice to have those non-classical states. But the question is, how can you detect them? If you can't detect it, you can't take advantage of it. And the detection has to be phase-coherent. I will tell you what that is; it goes by the name of homodyne detection. And finally, we can take everything we have learned together and discuss how teleportation of a quantum state is done in the laboratory. There is a nice teleportation scheme, and I want to use it as an example that the language and the concepts I've introduced are useful.
Concepts like the squeezing operator and the displacement operator--those methods allow us to discuss, in a very clear way, schemes which lead to teleportation. That's the menu for today. Let's start with classical squeezing. For squeezing, we need an harmonic oscillator--that means a parabolic potential, a potential v of x--and then we study the motion of--that should be x squared--the motion of a particle in there. Before I even get started with any equation, let me explain what the effect of squeezing will be. If you have an harmonic oscillator, you have, actually, the motion of a pendulum. It has two quadrature components, the cosine motion and the sine motion, and they are 90 degrees out of phase. What happens now is, if you parametrically drive the harmonic oscillator--you modulate the harmonic oscillator potential at 2 omega 0--I will show you mathematically, it's very, very easy to show, that depending on the phase of the drive, you will exponentially amplify the sine motion and exponentially damp the cosine motion. Or, vice versa, if you change the phase. So by driving the system, you can amplify one quadrature component, and exponentially damp the other quadrature component. And that is called classical squeezing. Let's do the math. It's very simple. Our equation of motion has the two solutions I've just mentioned: it has a solution with cosine omega 0 t, and one with sine omega 0 t. And we have two coefficients. The cosine coefficient is called c naught, the sine coefficient s naught--I call them c naught and s naught here, because in a moment I want to use c and s for the slowly varying amplitudes. So what we have here are the two quadrature components of the motion in an harmonic oscillator. And graphically--we need that for the electromagnetic field as well--we have our two axes, like the complex plane for the quasi-probabilities. I call one the s-axis, one the c-axis. That's just something which confuses me--just give me one second--yeah. If you have only cosine motion, the s component is 0, and the harmonic oscillator would just oscillate here. If you have only a sine component, you stay on the other axis. And now, if you have an equal amount of cosine and sine, the trajectory goes in a circle. OK. This is just the undriven harmonic oscillator; I don't want to dwell on it any longer. But what we are doing now is, we are adding a small parametric drive. Parametric drive means we modulate the spring constant, or we replace the original harmonic potential, which was this, by an extra modulation term. So we have a small parameter, epsilon. And as I pointed out, the modulation is at twice the resonance frequency. Now we want to solve the equation of motion for the harmonic oscillator, using this added potential. The way we want to solve it is, we assume epsilon is very small. So if the pendulum is swinging with cosine omega 0 t, it will take a while for the epsilon term--for the small term--to change the motion. So therefore, we assume that we can go back and use our original solution, and assume that over a short time, the epsilon term is not doing anything. So for a short time, it looks like an harmonic oscillator with a sine omega 0 t and cosine omega 0 t oscillation. But over any longer period of time, the small term will have an effect. And therefore, the coefficients c and s are no longer constant, but change as a function of time. We want to solve now the equation of motion. That means we use this here as our ansatz.
And we calculate the second derivative. We assume that the coefficients c and s are changing slowly; therefore, the second derivatives of c and s can be neglected. Taking the second derivative of the cosine term and the sine term, of course, simply gives minus omega 0 squared, x of t. And then we have the cross terms. Since we neglect the second-order derivatives of c and s, the other terms we get when we take the second derivative are: first derivative of c times first derivative of cosine, and first derivative of s times first derivative of sine. So we get two more terms, which are minus 2 omega 0 c dot times sine omega 0 t, plus 2 omega 0 s dot times cosine omega 0 t. This is the second derivative of our ansatz for x. This has to be equal to the force provided by the potential. So we need now the derivative of the potential, which I've written across this line. The first part is the unperturbed harmonic oscillator, which gives us simply omega 0 squared times x. And the second term, due to the parametric drive, involves sine 2 omega 0 t. And now, for x, we use our ansatz: the slowly changing amplitude c times cosine omega 0 t, plus s times sine omega 0 t. Those two terms cancel out. So now we have products of trig functions: sine 2 omega 0 t times cosine omega 0 t. Well, you know, if you take the product of two trig functions, it becomes a trig function of the sum or the difference of the arguments. So if you take sine 2 omega 0 t times cosine omega 0 t, and we use trigonometric identities, we get an oscillation at 3 omega 0--which is 2 plus 1--and one at the difference, which is omega 0. Let me write down the terms which are of interest to us, namely the ones at omega 0. So let me factor out epsilon omega 0 squared over 2. Then we have the term c times sine omega 0 t, plus s times cosine. And then we have terms at 3 omega 0, which we are going to neglect. Now we compare the two sides of the equation. We have a sine omega 0 term; we have a cosine omega 0 term. And the two sides of the equation are only consistent if the coefficients of the sine term, and of the cosine term, are separately the same. So therefore, we obtain two equations: one for c dot, one for s dot. And these are first-order differential equations. The solution is clearly an exponential, but one has a plus sign, one has a minus sign. So the c quadrature component is exponentially amplified with this time constant, whereas the sine component is exponentially de-amplified. This finishes the mathematical discussion of classical squeezing. We find that s of t and c of t are exponential functions--in one case exponentially increasing, in the other case exponentially decreasing. And that means, well, if we go to our diagram here--and let's assume we had an arbitrary superposition of cosine and sine amplitudes; this is cosine, this is sine--we had sort of a cosine oscillation and a sine oscillation, which means that, as a phasor, the system was moving on an ellipse. If the sine component is exponentially de-amplified, and the cosine component is exponentially amplified, that means whatever we start with is squashed vertically and amplified horizontally. In the end, it will become a narrow strip. So this is classical squeezing. You may want to ask, why did I neglect the 3 omega 0 term? Well, I have to, otherwise I don't have a solution, because I have to be consistent with my approximations.
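If you want to convince yourself of this result numerically, here is a quick sketch that integrates the driven oscillator and extracts the two quadratures. All parameter values are made up for illustration, and which quadrature grows depends on the phase convention of the drive:

```python
import math

# x'' = -w0^2 * (1 + eps*sin(2*w0*t)) * x, integrated with Euler-Cromer steps.
w0, eps, dt = 1.0, 0.05, 1e-3
x, v = 1.0, w0           # equal cosine and sine amplitudes: circular motion
t = 0.0
for _ in range(int(100 * 2 * math.pi / w0 / dt)):  # ~100 oscillation periods
    v += -w0**2 * (1 + eps * math.sin(2 * w0 * t)) * x * dt
    x += v * dt
    t += dt

c = x * math.cos(w0 * t) - (v / w0) * math.sin(w0 * t)  # cosine quadrature
s = x * math.sin(w0 * t) + (v / w0) * math.cos(w0 * t)  # sine quadrature
print(c, s)  # one quadrature has grown exponentially, the other has decayed
```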
So what I did here is, I had an equation where I had the clear vision that the solution has slowly varying c and s coefficients. And then I simply used that. I take the second-order derivative, and I keep only Fourier components at omega 0, the sine and cosine. Now, I've made an approximation here. For the derivative of the potential, the first line is exact. But in order to match the approximation I've done on the other side, I can only keep the Fourier components resonant with omega 0, which I have here. So in other words, the 3 omega 0 term would lead to additional accelerations, which I have not included in the treatment. So it's consistent with the ansatz; it's consistent with the assumption that we have resonant oscillations with a slowly changing amplitude. There will be a small [INAUDIBLE] at 3 omega 0, but it will be small. Any questions about that? Let me then show you an animation of that. Classroom files. [VIDEO PLAYBACK] -We have Dave Pritchard, professor of physics at MIT, demonstrating what squeezing is. Right now, we see a wave that's going around in a circle. What's next? What's going to happen now, Professor Pritchard? -Well, if we drive it at twice the basic frequency, then we will parametrically amplify one quadrature component, and we will un-amplify the other one. So now I'm going to start doing that. And then you notice that its motion turns into an ellipse. We've amplified this quadrature component, but we've un-amplified that one. And that's squeezing. [END VIDEO PLAYBACK] PROFESSOR: Feel free to try it at home. [LAUGHTER] PROFESSOR: Actually, you may start to think about this demonstration. What he has shown was, when you have a circular pendulum which goes in a circle or an ellipse, and you start pulling on the rope with a certain phase, one quadrature component will be de-amplified, the other one will be amplified. And as a result, no matter what the circular or elliptical motion was, after driving it for a while, it will only swing in one direction. And this is the direction you have amplified. There is one thing which should give you pause. I have discussed, actually, a single harmonic oscillator. What Dave Pritchard demonstrated was actually two harmonic oscillators: the harmonic oscillator has an x motion and a y motion. However, you can say this was just sort of a trick for the demonstration, because when you have a circular motion, initially, you have the sine omega 0 t and the cosine omega 0 t components present simultaneously, and you can see what happens to the sine and the cosine component in one experiment. So in that sense, he did two experiments at once. He showed what happens when you have, initially, a sine component, and what happens when you initially have a cosine component. OK. So we know what classical squeezing is. And what we have learned also--and this helps me now a lot to motivate how we squeeze in quantum mechanics--you have realized that what is really essential here is to drive the system at 2 omega 0. If we now want to do squeezing in the quantum domain, if we want to squeeze light, we need something at 2 omega 0. So let's now squeeze quantum mechanically. Go back here. The second sub-section is now squeezed quantum states. What we want to discuss is a quantum harmonic oscillator with some form of parametric drive at 2 omega 0, and this will result in squeezed states. Now, what does it require if you want to bring in 2 omega 0?
Well, let's not forget, our harmonic oscillators are modes of the electromagnetic field. If you now want to couple a mode of the electromagnetic field at 2 omega 0 with our harmonic oscillator at omega 0, we need a coupling between two electromagnetic fields. So therefore, we need nonlinear interactions between photons. So--this was a tautology--we need nonlinear physics, which leads to interactions between photons. Linear physics means each harmonic oscillator is independent. So we need some nonlinear process, which will be equivalent to having interactions between photons. The device which will provide that is an optical parametric oscillator. I could spend a long time explaining to you how those nonlinear crystals work--what is the polarization, what is the polarizability, how do you drive it, what is the nonlinearity. But for the discussion in this class, which focuses on fundamental concepts, I can actually bypass it by just saying, assume you have a system--and this is actually what the optical parametric oscillator does--where you pump it with photons at 2 omega 0, and then the crystal generates two photons at omega 0. Which, of course, is consistent with energy conservation. And if you fulfill some phase-matching condition, it's also consistent with momentum conservation. But I don't want to go into phase-matching at this point. Technically, it is done as simply as that. You have to pick the right crystal. Actually, a crystal which does mixing between three photon fields cannot have inversion symmetry; otherwise, this nonlinear term is 0. What you need is a special crystal--KDP is a common choice--and this crystal will now do the following for us. You shine in laser light, let's say at 532 nanometers, green light. And then this photon breaks up into two photons at omega 0. This is how it's done in the laboratory. The piece of art is, you have to pick the right crystal; it has to be cut at the right angle; you may have to heat it and select the temperature for which some form of resonance condition is fulfilled. But in essence, that's what you do. One laser beam, put in a crystal, and then the photon is broken into two equal parts. And these are our two photons at omega 0. OK. I hope you enjoy the elegance--we can completely bypass all the materials physics by putting operators on it. We call this mode b, and we call this mode a. So the whole parametric process, the down-conversion process of one photon into two, is now described by the following Hamiltonian. We destroy a photon in mode b, at 2 omega 0, and we create two photons at omega 0. And since the Hamiltonian has to be Hermitian, the opposite, the time-reversed process has to be possible, too. And that means we destroy two photons at omega 0, and create one photon at 2 omega 0. So now we forget about nonlinear crystals, about non-inversion symmetry in materials. We just take this Hamiltonian and play with it. By simply looking at the Hamiltonian--what is the time evolution of a photon field under this Hamiltonian--we figure out what happens when you send light through a crystal, and what the output is. And I want to show you now that the output is squeezed light, which is exactly what I promised you with these quasi-probabilities. We have a coherent state, which is a nice circle. We time-evolve the coherent state, our nice round circle, with this Hamiltonian. And what we get is an ellipse.
And if you want intuition, look at the classical example we did before, which really tells you, in a more intuitive way, what is happening. OK. We want to make one simplifying assumption here, and this is that we pump the crystal at 2 omega 0 with a strong laser beam. So we assume that the mode b is a powerful laser beam--or, in other words, a strong coherent state. Coherent states are always labeled with a complex parameter, which I now call beta; well, it's mode b, therefore I call it beta--for mode a, I've called it alpha. The coherent state has an amplitude, which I call r over 2, and it has a phase. We know, of course, that the operator b acting on the coherent state beta gives us beta times the state, because a coherent state is an eigenstate of the annihilation operator. But when we look at the action of the operator b dagger, the photon creation operator, the coherent state is not an eigenstate of the creation operator; it's only an eigenstate of the annihilation operator. But what happens is, the coherent state is a sum over many, many number states n, and the creation operator goes from n to n plus 1, with matrix elements which are square root of n plus 1. So in other words, if n is large, and if we don't care about the subtle difference between n and n plus 1, in this limit the coherent state is also an eigenstate of the creation operator, with an eigenvalue which is beta star. This means that we have a coherent state which is strong. Strong means it has a large amplitude of the electric field; the photon numbers n which are involved are large, and we don't care whether it is n or n plus 1. I should also mention here, explicitly: this is sort of the step where, in a quantum description of light, we replace the operators b and b dagger by a c-number--then we really go back to classical physics. Then we pretend that we have a classical electric field, which is described by the imaginary part of beta. So when you have a Hamiltonian where you write down an electric field, and the electric field is not changing--you have an external electric field--this is really the limit of a quantum field where you've replaced the operator by a c-number. This is essentially your classical electric field. And we make this approximation here because we are interested in the quantum features of mode a. a is our quantum mode, with single photons or with a vacuum state, and we want to squeeze it. b is just the parametric drive. With this approximation, we have only the a operators. This is our operator. Any question? AUDIENCE: [INAUDIBLE] would give us a [INAUDIBLE], right? PROFESSOR: Yes, thank you. That means here should be a minus sign, yes. OK. I've motivated our discussion with this nonlinear crystal which generates pairs of photons. This is the Hamiltonian which describes it. And if you want to have the time evolution by this Hamiltonian, you put this Hamiltonian into a time evolution operator. In other words, e to the minus iHt is the time evolution. If we now evolve a quantum state of light for a fixed time t, we apply the operator e to the minus iHt to the quantum state of light. What I've just said is now the motivation for the definition of the squeezing operator. The squeezing operator, S of r, is defined to be the exponential of minus r over 2, times a squared minus a dagger squared. This is related to the discussion above. You would say, hey, you want to do that time evolution--where is the i?
Well, I've just made a choice of phi. If phi is chosen to be pi over 2, then the time evolution with the Hamiltonian above gives me the squeezing operator below. So with that motivation, we are now studying: what is the squeezing operator doing to quantum states of light? Any questions about that? I know I spent a lot of time on it. I could have taught this class by just saying, here is an operator, the squeezing operator. Trust me, it does wonderful things. And then we can work out everything. But I find this unsatisfying, so I wanted to show you what is really behind this operator. And I want you to have a feeling, where does this operator come from, and what is it doing? In essence, what I've introduced into our description is now an operator which is creating and destroying pairs of photons. And this will actually do wonderful things to our quantum states. What are the properties of the squeezing operator? What is important is, it is unitary. It does a unitary time evolution. You may not see that immediately, so let me explain that. You know from your basic quantum mechanics course that e to the i A is unitary when A is Hermitian. So the squeezing operator-- with the definition above-- can be written as e to the i times i r over 2, a squared minus a dega squared. And you can immediately verify that this part here is Hermitian. If you do the Hermitian conjugate, a squared turns into a dega squared, a dega squared turns into a squared. So we have a problem with a minus sign. But if you do the complex conjugate of i, this takes care of the minus sign. So this part is Hermitian. We multiply it by i and exponentiate, and therefore this whole operator is a unitary transformation. Any questions? OK. So after being familiar with this operator, we want to know: what is this operator doing? I can describe now what this operator does in a Schrodinger picture, or in a Heisenberg picture. I pick whatever is more convenient. And for now, this is the Heisenberg picture. In the Heisenberg picture, what is changing are the operators. Therefore, in the Heisenberg picture, this unitary transformation transforms the operators. And we can study what happens when we transform the operator x. The unitary transformation is done by-- the operator x is transformed by multiplying from the one side with S, from the other side with S dega. You are familiar with expressions like this, and how to disentangle them. If you have an e to the i A, x, e to the minus i A-- if you could move the e to the i A past x, so if A and x commute, i A and minus i A would just give unity. So therefore, this expression is just x, unless you have non-vanishing commutators between A and x. I think you have solved, in your basic quantum mechanics course, many such problems which involve identities of that form. Then there are higher-order commutators-- the commutator of A with the commutator of A and x. Unless one of those commutators vanishes, you can get an infinite series. Our operator A is nothing else than the annihilation operator squared, a squared, minus the creation operator squared, a dega squared. So we can express everything in terms of a and a dega. The position operator in our harmonic oscillator can also be expressed by a and a dega. By doing elementary manipulations on the righthand side, and regrouping terms, you find immediately that the unitary transformation of the Heisenberg operator x gives you the x operator back. But multiplied with an exponential, e to the r.
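To make this explicit (a sketch in the lecture's sign convention; note that whether S or S dagger stands on the left depends on whether you transform operators or states-- the ordering below is the one whose vacuum expectation values reproduce the noise of the squeezed vacuum S(r)|0>):

$$ S(r) = e^{-\frac{r}{2}\left(a^{2} - a^{\dagger 2}\right)}, \qquad \hat{x} = \frac{a + a^{\dagger}}{\sqrt{2}}, \qquad S^{\dagger}(r)\,\hat{x}\,S(r) = e^{\,r}\,\hat{x}. $$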
And if we do the same to the momentum operator, which is a minus a dega over i square root 2, we find that the unitary transformation of the momentum operator is de-amplifying the momentum operator by an exponential factor. If we assume that we have a vacuum state in the harmonic oscillator-- while classically, it would be at x equals 0, p equals 0, quantum mechanically, we have zero-point noise in x, and zero-point noise in p-- then you find that the squeezing operator is amplifying the quantum noise in x. But it squeezes, or reduces, the noise in p. If we apply this squeezing operator to the vacuum state, we obtain what is usually called squeezed vacuum. And it means that, in this quasi-probability diagram, the action of the squeezing operator is turning the vacuum state into an ellipse. What happens to energy here? The vacuum state is the lowest-energy state. If we now act with the squeezing operator on it, do we obtain a state which has the same energy? Is it energy-conserving, or very high energy? AUDIENCE: Higher [INAUDIBLE]. PROFESSOR: Yes. Why? AUDIENCE: It's no longer the [INAUDIBLE]. PROFESSOR: Sure, yeah. It was a vacuum state. We act on the vacuum state, but we get a state which is no longer the vacuum state. The reason why we have extra energy-- the squeezed vacuum is very, very energetic-- is because the squeezing operator had a dega squared, a squared. Well, a squared, the annihilation operator acting on the vacuum, gives 0. But what we are creating now-- we are acting on the vacuum, and we are creating pairs of photons. So we are adding, literally, energy to the system. And the energy, of course, comes from the drive laser, from the laser at 2 omega 0, which delivers the energy in the form of photons which are split into half, and they go into our quantum field. In the limit of infinite squeezing-- I will show it to you mathematically, but it's nice to discuss it already here-- what is the state we are getting? AUDIENCE: Eigenstate of momentum. PROFESSOR: Eigenstate of momentum. We get the p equals 0 eigenstate. What is the energy of the p equals 0 eigenstate? AUDIENCE: Infinite. It has to contain all number states. PROFESSOR: It contains all number states? OK, you think immediately in number states, which is great. But in a more pedestrian way, the p equals 0 state has no kinetic energy. But if a state is localized in momentum, p equals 0, it has to be infinitely smeared out on the x-axis. And don't forget, we have a harmonic oscillator potential. If you have a particle which is completely delocalized in x, it has infinite potential energy in the wings. So therefore, in the limit of extreme squeezing, we involve an extreme number of number states. Actually, I want to be more specific-- of photon pairs. We have states with 2n photons, and n can be infinitely large. And as we saw in the classical picture, what we get here when we squeeze is the p equals 0 eigenstate, which has infinite energy due to the harmonic oscillator potential. If we now allow the system, after we have squeezed it, to evolve for a quarter period in the harmonic oscillator, then the ellipse would turn into a vertical ellipse. So this is now an eigenstate of x. It's the x equals 0 eigenstate. But the x equals 0 eigenstate also has infinite energy, because due to Heisenberg's uncertainty relation, it involves momentum states of infinite momentum. Questions? AUDIENCE: [INAUDIBLE] a is the photon field, right?
So p is roughly the electric field, right? PROFESSOR: Yes. AUDIENCE: So it's kind of that the electric field counts 0, and x is kind of the a, the-- and it-- because of [INAUDIBLE]. The electric field is squeezed? PROFESSOR: Yes. AUDIENCE: It means we have no electric field? PROFESSOR: We'll come to that in a moment. I want to do a little bit more math, to show you. I wanted to derive for you an expression for the squeezed state, in the number basis, and such. Your question mentioned something which is absolutely correct. By squeezing that-- we have now that the p-axis is the electric field axis. So now we have, actually, in the limit of infinite squeezing, an electric field which has no uncertainty anymore. By squeezing the coherent state into a momentum eigenstate, we have created a sharp value for the electric field. We have created an electric field eigenstate. Well, you would say, it's pretty boring, because the only electric field state we have created is the electric field e equals 0. But in the next half-hour, we want to discuss the displacement operator, and I will tell you what it is. Then we can move the ellipses, and move the circles, anywhere we want. So once we have an electric field state which has a sharp value of the electric field at e equals 0, we can just translate it. But before you get too excited about having an eigenstate of the electric field, I want you to think about what happens after one quarter-period of the harmonic oscillator. It turns upside down, and your electric field has an infinite variance. That's what quantum mechanics tells us. We can create electric fields which are very precise, but only for a short moment. So in other words, this electric field state which we have created would have a sharp value. A moment later, it would be very smeared out, then it has a sharp value again, and then it's smeared out again. I mean, that's what squeezed states are. Other questions? AUDIENCE: That's why [INAUDIBLE]. PROFESSOR: That's why we need homodyne detection. Yes, exactly. If we have squeezed something, which is sort of narrow, that's great for measurement. Now we can do a measurement-- maybe a LIGO measurement for gravitational waves-- with higher precision, because we have a more precise value in our quantum state. But we have to look at it at the right time. We have to look at it synchronized with the harmonic motion. Homodyne detection means we look only at the sine component, or at the cosine component. Or if I want to simplify it, what you want to do is, if you have a state like this, you want to measure the electric field, so to speak, stroboscopically. You want to look at your system always when the ellipse is like this. The stroboscopic measurement is, as I will show you, in essence a lock-in measurement, which is phase-sensitive. And this will be homodyne detection. So we can only take advantage of the squeezing, of having less uncertainty in one quadrature component, if we do phase-sensitive detection, which is homodyne detection. But now I'm already an hour ahead of the course. OK. Back to basics. We want to explicitly calculate now how the squeezed vacuum looks. We actually want to do it twice, because it's useful to see it in two different bases. One is, I want to write down the squeezed vacuum for you in the number representation. And then in the coherent state representation. The squeezing operator is an exponential function involving a squared and a dega squared.
And of course, we're now using the Taylor expansion of that. We are acting on the vacuum state. I will not do the calculation. It's, again, elementary. You have n factorials, you have terms where a dega squared acts on the vacuum-- well, that gives 2 photons. If it acts again, it adds 2 more photons. And the matrix element of a dega acting on n is square root n plus 1. You just sort of rearrange the terms. And what you find is what I will write down for you in the next line. The important thing you should immediately realize is, the squeezed state is something very special. It is a superposition of number states, but only even number states appear, because our squeezing operator creates pairs of photons. This is what the parametric down-conversion does. We inject photons into the vacuum, but always exactly in pairs. And therefore, it's not a random state. It's a highly correlated state with very special properties. OK. If you do the calculation and regroup the terms, you get factorials, you get 2 to the n, you get another factorial. You get the hyperbolic tangent to the power n. And the normalization is done by the square root of the cosh function. And the more we squeeze, the larger are the amplitudes at higher and higher n. But this is also obvious from the graphic representation I've shown you. Let me add the coherent state representation. The coherent states are related to the number states in that way. If we transform now from number states to coherent states, a straightforward calculation gives now a superposition over coherent states. Coherent states require an integral. e to the minus e to the r over 2, divided by-- anyway, all these expressions may not be, in their general form, too illuminating. But those things can be done analytically. I just want to mention the interesting limiting case of infinite squeezing. AUDIENCE: When you do the integral over alpha, is this over, like, a magnitude of alpha, or a real part, or [INAUDIBLE]? PROFESSOR: I remember, but I'm not 100% sure, that alpha is real here. I mean, it sort of makes sense, because we start with the vacuum state. And if we squeeze it, we are not really going into the imaginary direction. So I think what is involved here are only real alpha. AUDIENCE: For negative r, we should get [INAUDIBLE]. PROFESSOR: For negative r, we need imaginary states. AUDIENCE: So we should [INAUDIBLE]. PROFESSOR: Let me double-check. I don't remember that. You know, sometimes, I admit it, there is the issue-- if you researched material and prepared a course some years ago, you forget certain things. If I had prepared the lecture and everything had worked out yesterday, I would know that. But certain things you don't remember. As far as I know, it's the real axis. But I have to double-check. The limiting case is interesting. If r goes to infinity, you can show that this is simply the integral, d alpha, over coherent states. We have discussed, graphically, the situation where we had-- so these are quasi-probabilities. In that case of infinite squeezing, we have the momentum eigenstate, p equals 0. This is the limit of the infinitely squeezed vacuum, and in the coherent state representation, it is the integral over coherent states alpha. I'm pretty sure alpha is real here, seeing that now. There is a second limit, which is obtained simply-- you can say, by rotation, or by time evolution-- which is the x equals 0 eigenstate. And this is proportional to the integral over alpha when we take the coherent state i alpha, and we integrate from minus to plus infinity. OK.
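For reference, the standard closed form that this calculation arrives at is (quoted with the sign convention of the squeezing operator used here; other conventions produce an extra alternating sign (-1)^n):

$$ S(r)\,|0\rangle \;=\; \frac{1}{\sqrt{\cosh r}}\; \sum_{n=0}^{\infty} \left(\tanh r\right)^{n}\, \frac{\sqrt{(2n)!}}{2^{n}\, n!}\; |2n\rangle, $$

which makes both features explicit: only even number states appear, and the weight of high photon numbers grows with the squeezing parameter r.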
So we have connected our squeezed states, the squeezed vacuum, with number states, with coherent states. Now we need one more thing. So far, we've only squeezed the vacuum, and we have defined the squeezing operator so that it takes a vacuum state and elongates it. In order to generate more general states, we want to get away from the origin. And this is done by the displacement operator. The definition of the displacement operator is given here. The displacement by a complex number, alpha, is done by putting alpha and alpha star into an exponential function. In many quantum mechanics courses, you show very easily the elementary properties. If the displacement operator is used to transform the annihilation operator, it just does that. If you take the complex conjugate of it-- so in other words, it's called the displacement operator, and I just take that as the definition. But you immediately see why it's called the displacement operator: when we do the unitary transformation of the annihilation operator, we get the annihilation operator displaced by a complex number. So the action, the transformation of the annihilation operator, is the annihilation operator itself, minus a c number. So therefore, we say the annihilation operator has been displaced. So this is the action of the displacement operator on an operator-- on the annihilation operator. The question is now, how does the displacement operator act on quantum states? And the simplest quantum state we want to test it out on is the vacuum state. And well, not surprisingly, the displacement operator, displacing the vacuum state by alpha, is creating the coherent state alpha. This can be proven in one line. We take our displaced vacuum, and we act on it with the annihilation operator. If we act with the annihilation operator on something, and we get the same thing back times an eigenvalue, we know it's a coherent state. Because this was the definition of coherent states. So therefore, in order to show that this is a coherent state, we want to show that it's an eigenstate of the annihilation operator. So this is what we want to do. The proof is very easy. By multiplying this expression with unity, which is D D dega, we have this. And now we can use the result for the transformation of operators. Namely, that this is simply the annihilation operator, plus alpha. If the annihilation operator acts on the vacuum state, we get 0. If alpha acts on the vacuum state, we get alpha times the vacuum state. So therefore, what we obtain is that: when the annihilation operator acts on this state, we get alpha times the state, and therefore the state is a coherent state with eigenvalue alpha. In a graphical way: if you have a vacuum state, the displacement operator, D alpha, takes the vacuum state and creates the coherent state alpha. If you want to have squeezed states with a finite value-- well, we just discussed the electric field. Related to the harmonic oscillator, we want squeezed states which are not centered at the origin, which have a finite value of x or p. We can now create them by first squeezing the vacuum, and then displacing the state. AUDIENCE: What's the physical realization of the displacement operator? PROFESSOR: What is the physical realization of the displacement operator? Just one second. The physical realization of the displacement operator-- we'll do that on Monday-- is the following.
If you pass an arbitrary state through a beam splitter-- but it's a beam splitter which has very, very high transmission-- and then, from the-- I'll just show that. If you have a state-- this is a beam splitter-- which has a very high transmission, T is approximately 1, then the state passes through. But then, from the other side of the beam splitter, you come with a very strong coherent state. You have a coherent state which is characterized by a large complex number, beta. And then there is a reflection coefficient, r, which is very small. It sort of reflects the coherent state with an amplitude r beta. If you mix together the transmitted state and r beta-- I will show that to you explicitly, by doing a quantum treatment of the beam splitter-- what you get is: the initial state is pretty much transmitted without attenuation. But the reflected part of the strong coherent state-- you compensate for the small r by a large beta-- actually does an exact displacement by r beta. It's actually great. The beam splitter is a wonderful device. You think you have a displacement operator formulated with a's and a dega's-- it looks like something abstract. But you can go to the lab, simply get one beam splitter, take a strong laser beam, and whatever you send through the beam splitter gets displaced, gets acted upon by the displacement operator. Yes. AUDIENCE: You showed the displacement operator, when you act on the vacuum state, will displace the vacuum state to a state alpha. Does it still hold if you act on, like, another coherent state? Or in this case, a squeezed state like that? PROFESSOR: Yes. I haven't shown it, but it really-- it displaces-- when we use this representation with quasi-probabilities, it simply does a displacement in the plane. But no-- to be honest, when I say it does a displacement in the plane, it reminds me that we have three different ways of defining quasi-probabilities: the W, the P, and the Q representation. I know we use it all the time, that we displace things in the plane. But I'm wondering if the displacement operator does an exact displacement of all representations, or only of the Q representation. That's something I don't know for sure. AUDIENCE: I was thinking it could also, like-- I mean, are you going to be able to displace all types of light, like thermal light, or any representation of light that you could put in? Is the same displacement operator going to work? Or is its domain just the vacuum and coherent states? PROFESSOR: The fact is-- the coherent states-- I've shown you that it displaces the vacuum state. I know the next thing to show is that if you have a displacement by alpha followed by a displacement by beta, it is equal to a displacement by alpha plus beta (up to an overall phase). So the displacement operators form a group, and if you do two displacements, they equal one overall displacement, which is given by the sum of the two complex numbers. What I'm just saying is, if you do the first displacement, you can get an arbitrary coherent state. So therefore, the displacement operator is exactly displacing a coherent state by the argument of the displacement operator. And now, if you take an arbitrary quantum state and expand it into coherent states-- coherent states are not only complete, they are over-complete-- all you have done is a displacement. Now the over-completeness, of course, means you have to think about it, because you can represent states in more than one way by coherent states.
But if you have your representation, you just displace it, and this is the result of the displacement operator. So since the Q representation is simply, you take the statistical operator and look for the matrix elements in alpha, and if you displace alpha, the Q representation has been moved around. So I'm sure that for the Q representation, for the Q quasi-probabilities, the displacement operator shifts it around in the plane. For the W and P representations, I'm not sure. Maybe there's an expert in the audience who knows more about it than I do. OK. We have just five minutes left. I want to discuss now the electric field of squeezed states. And for that, let me load a picture. Let us discuss, now, the electric field of squeezed states. Just as a reminder: we can discuss the electric field by using the quasi-probability representation. And the electric field is the projection of the quasi-probabilities onto the vertical axis. And then the time evolution is that everything rotates with omega in this complex plane. We discussed it already. For a coherent state, we have a circle which rotates. Therefore, the projected fuzziness of the electric field is always the same. And as time goes by, we have a sinusoidally varying electric field. Let me just make one comment. If you look into the literature, some people actually say the electric field is the projection on the horizontal axis. So there are people who say the electric field is given by the x-coordinate of the harmonic oscillator, whereas I'm telling you, it's the p-coordinate. Well, if you think one person is wrong, I would suggest you just wait a quarter-period of the harmonic oscillator, and then the other person is right. It's really just a phase convention-- what you assume to be t equals 0 is really arbitrary. But here in this course, I will use the projection onto the vertical axis. OK. If you project the number state, we always get 0 electric field, with a large uncertainty. So that's just a reminder. But now we have a squeezed state. It's a displaced squeezed state. If you project it onto the y-axis, we have first some large uncertainty. I think this plot assumes that we rotate with negative time, so I apologize for that. You can just invert time, if you want. So after a quarter-period, the ellipse is now horizontal, and that means the electric field is very sharp. As time goes by, you see that the uncertainty of the electric field is large, small, large, small-- it modulates. It can become very extreme when you do extreme squeezing, so you have an extremely precise value of the electric field here, but you have a large uncertainty there. Sometimes you want to accurately measure the zero crossing of the electric field. This may be something which interests you, for an experiment. In that case, you actually want to have an ellipse which is horizontally squeezed. Now, whenever the electric field is 0, there is very little noise. But after a quarter-period, when the electric field reaches its maximum, you have a lot of noise. So it's sort of your choice which way you squeeze: whether you want the electric field to be precise, to have little fluctuations, when it goes through 0, or when it goes through the maximum. So what we have done here is, we have first created the squeezed vacuum, and then we have acted on it with a displacement operator. OK. I think that's a good moment to stop. Let me just say what I want you to take from this picture.
The fact that the electric field is precise only at certain moments means that we can only take advantage of it when we do phase-sensitive detection. We only want to, sort of, measure the electric field when it's sharp. Or-- this is equivalent-- we should regard light as always composed of two quadrature components. You can say: the cosine and the sine oscillation, the x and the p. And the squeezed state is squeezed in one quadrature, but it is elongated in the other quadrature. Therefore, we want to be phase-sensitive. We want to pick out either the cosine omega t, or the sine omega t oscillation. This is, sort of, homodyne detection. We'll discuss it on Monday. Any question? OK. Good.
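As a coda to this lecture, here is a minimal numerical sketch of the two operators, using the QuTiP library (assumptions: a Fock-space cutoff of N = 60 is ample for r = 1, and QuTiP's squeeze(N, z) implements exp[(z* a^2 - z a†^2)/2], so the lecture's S(r) corresponds to z = -r):

```python
import numpy as np
from qutip import basis, destroy, squeeze, displace, variance

N = 60                 # Fock-space cutoff (assumed large enough for r = 1)
r = 1.0                # squeezing parameter
alpha = 2.0            # displacement amplitude (illustrative choice)

a = destroy(N)
x = (a + a.dag()) / np.sqrt(2)         # quadratures as defined in the lecture
p = (a - a.dag()) / (1j * np.sqrt(2))

vac = basis(N, 0)
sq_vac = squeeze(N, -r) * vac          # S(r)|0>: the squeezed vacuum
disp_sq = displace(N, alpha) * sq_vac  # D(alpha) S(r)|0>: displaced squeezed state

print(variance(x, vac), variance(p, vac))          # 0.5 and 0.5: zero-point noise
print(variance(x, sq_vac), variance(p, sq_vac))    # ~0.5 e^{2r} and ~0.5 e^{-2r}
print(variance(x, disp_sq), variance(p, disp_sq))  # same ellipse, displaced mean
```

The last line illustrates the point of the displacement operator: it moves the ellipse around the quasi-probability plane without changing its shape.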
MIT_8422_Atomic_and_Optical_Physics_II_Spring_2013
12_Resonant_interactions.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Good afternoon, everybody. Let me just remind you where we are. We're describing the interaction between the electromagnetic field and atoms. And we had formulated an exact approach using the time evolution operator and diagrams. And well, I think we understood what it means when atoms in the ground state emit virtual photons which are reabsorbed, and all that. So we figured out what is really inside this formalism and what the processes are. What we want to continue discussing today is one problem which you often have with such approaches. And this is the problem of resonance. If you have a perturbative treatment, even if you carry it to infinite order, you have, formally, divergences if you have resonant interaction. Because the ground state plus the photon has exactly the same energy as the excited state. And that means, if you write down the perturbative expansion, you have a 0 in the denominator. You have a divergence. And I reminded you that, in a phenomenological way, you've seen that this problem can be "fixed" by adding an imaginary part to the energy level-- just saying, well, the excited state couples by spontaneous emission to the radiation field, and the level has a width. Well, but you know, sometimes putting an imaginary part into Schrodinger's equation means it's no longer a unitary time evolution. It has its problems. But anyway, we want to now look deeper into it. I want to show you what the tools are to treat those infinities and divergences in a consistent and systematic way. And one hint for how we have to do it comes from simply taking this energy denominator and expanding it in gamma-- simply a Taylor expansion in gamma. And then, we realize, gamma is often calculated in second order-- Fermi's golden rule. But since you have here all orders n, that tells us that doing something here probably means infinite orders in a perturbation series. And that's what I want to show you today. I want to show you that I can go beyond this result. But I can reproduce this result by going to infinite order in perturbation theory. And it means to sum up an infinite number of diagrams. Who of you has actually seen those kinds of diagrammatic tricks and summations? A few, OK. So it's maybe nice to see it again. But for those who haven't seen it, welcome to the magic of diagrams. I learned it from those examples. And I really like it. It's a very elegant way to combine equations with graphical manipulations. So that's our agenda for at least the first part of today. And OK, we want to understand the time evolution of this system. Our tool is the time evolution operator. And at the end of the class on Monday, I told you, well, let's simplify things. Let's get rid of those temporal integrations and multiple integrals by simply doing a Fourier transform. Because a Fourier transform turns an integral, or convolution, into a product. And so therefore, we introduced the Fourier transform, or the Laplace transform, of the time evolution operator. And this iterative equation, where we get the nth order by plugging the n minus first order into the right-hand side, this iterative equation turns now into a simpler algebraic iterative equation for the Fourier transform. So this is now the starting point for our discussion today.
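In symbols, the objects just referred to are (a sketch, with Z the complex energy variable and H = H0 + V):

$$ G_{0}(Z) = \frac{1}{Z - H_{0}}, \qquad G(Z) = \frac{1}{Z - H} = G_{0}(Z) + G_{0}(Z)\, V\, G(Z), $$

and iterating the second equation-- reinserting it into itself-- generates the perturbative expansion order by order.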
We want to calculate the Fourier transform of the time evolution operator to infinite order. So let me-- well, of course, any questions before I continue? So unfortunately, [INAUDIBLE]. We should copy this equation. Because we need it. So OK, we want to take this equation. And now we want to iterate it. So the resolvent G in zeroth order is G0. Now plug G0 into the right-hand side of the equation, and you get the first order, which is G0 V G0. Now plug that into the end of the iterative equation, and you get the second order, G0 V G0 V G0. I think you've got the idea. It's almost like a geometric series-- well, a geometric series with operators. But this is already sort of a hint. A geometric series can be summed up to infinity rather easily. So we have to introduce now the eigenfunctions of the unperturbed operator. So these are, let's say, the ground state, excited state, next excited state of your favorite atom. And they have eigenenergies Ek. So if you are now expressing the equation above in the basis k, l-- so in other words, this is an operator, and we want to know the matrix element between eigenfunctions k and l. The other thing we need is G0. Remember, the Fourier transform-- and I gave you sort of a mini derivation here-- is just 1 over energy minus H. And if you apply that now with the unperturbed operator H0, G0 is given by that. So therefore, if you write now this equation in matrix elements, the first part, the G0, gives us 1 over Z minus Ek. And since we are diagonal in H0, it's delta kl. And well, now you see the structure if you sum over intermediate states, or if you introduce intermediate states. So this is how we have to write it. So this is just writing it down in the basis functions of the unperturbed operator H0. But now we can formulate the problem we are encountering and we want to solve. Namely, we have a problem with one state. Our problem is the excited state b. And we have a resonant excitation from the ground state a. So this is a discrete eigenstate of the unperturbed Hamiltonian with energy Eb. And therefore, we have terms, and actually divergent terms, which are 1 over Z minus Eb. And just to make the connection: this is sort of the description in the book-- Z is in the complex plane and can have imaginary values. Sometimes, scattering and evolution equations are better formulated when you do it in the complex plane. It doesn't really matter for us here. Just remember, Z is the energy. And it is the initial energy. And if the initial energy is the ground state plus a resonant photon, we have a problem. Because the denominator is 0. So in other words, for resonant excitation, we are interested in the case that Z is close to Eb. OK, so what we want to do now is-- and this is the basic idea. By looking at this formally exact calculation, we say: the difficult parts are those where we have this energy denominator which vanishes. The other parts are simple. They don't have any divergences. They are not resonant and such. So what we want to do now is, in some way, we want to give special treatment to, factor out, the problematic terms. And the rest is easy. And for the easy part, which has no divergence, we can make any kind of approximation we want without altering the physics. But the resonant part-- this needs special attention. Because if I treat it literally in those expressions, they don't make sense mathematically. Because they cause infinities. I could continue to do it completely algebraically.
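Written out in the eigenbasis, the iteration just described reads (a sketch; the sum runs over intermediate states i):

$$ G_{kl}(Z) = \frac{\delta_{kl}}{Z - E_{k}} + \frac{1}{Z - E_{k}}\, V_{kl}\, \frac{1}{Z - E_{l}} + \sum_{i} \frac{1}{Z - E_{k}}\, V_{ki}\, \frac{1}{Z - E_{i}}\, V_{il}\, \frac{1}{Z - E_{l}} + \cdots $$

Every factor 1/(Z - E_b) in this series diverges as Z approaches E_b, and those are exactly the terms that get special treatment below.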
But because I think it's just a beautiful method, I want to look at this equation and write it down in symbols. So we want to arrive at a diagrammatic representation for this matrix element of the resolvent, Gbb. And I use symbols where the circle means the interaction. The straight line stands for the term which is problematic, which has the resonant-- the divergent-- denominator. And then, I'll use a dashed line for all other terms, all other intermediate states, where i is not b, and therefore there is not a problem. It's not resonant. We don't have a divergence. So I'm just sort of-- you see the structure of the sum. We sort of propagate here. We make a transition from l to k. And then, we propagate here. So in this kind of order: you start in this state, you have a vertex, you go to the other state, you have a vertex, you go to the next state, you have a vertex. This is how the system works. This is sort of what quantum mechanics does for you. And what we're going to do is-- this is an algebraic equation. And now we want to order the terms in the following way. We want to figure out which of those expressions include this problematic term exactly twice, or three times, or four times. So we regroup those infinite sums, these algebraic terms, in such a way that we say, OK, which one has the occurrence of this once, twice, three times, four times? So we regroup the terms, and then we see what we can do. Now, I've picked the matrix element Gbb. So that means over here and over here we start out in the state b. So if I write down all terms which contain this resonant term twice-- well, we start with a resonant term. And we have to end with a resonant term, because we have the matrix element Gbb, which I'm focusing on now. And so there can be one vertex. We can start with this term. And now we can do something else in between. But we are not allowed to go back to the state b. Because we are only looking at terms where the solid line appears twice. So therefore, this can only be a dashed line, another state. We can go through two vertices. But it can only include dashed lines in between. So let me use another symbol for it. So we start with that state. We end with the state b. But in between, we can sort of once, twice, three times go through other intermediate states. But we are never allowed to create another divergence. And this infinite sum over all other states-- I don't know how to calculate it yet. But I'll just call it a square box. So in other words: what is, in lowest order, the operator V, together with all these other infinite terms-- I symbolize that kind of much more complicated vertex, which includes an infinite sum, by a square and not by a circle. So diagrammatically-- feel free to write it down mathematically; it's obvious how to do it-- the square box is the circle plus all terms like this. Let's go back to equations. We call this the function Rb of Z. And what I've shown in diagrams above is nothing else than the circle Vbb, the matrix element of the interaction. But now we have sums where we're not allowed to go through the resonant state. We have to go from b to an intermediate state. We propagate in the intermediate state. And then, we have to go back. And it's clear how to go to higher terms. So we've just defined this function R by focusing on two occurrences of the straight line. Well, let's look at higher order terms. What happens when 1 over Z minus Eb, the divergent term, appears to the power n? Well, we just dealt with n equals 2. Let's now look at n equals 3. Colin.
AUDIENCE: I'm trying to find the definition. Is the box a T-matrix or the S-matrix? What's the definition of the [INAUDIBLE]. The S is-- PROFESSOR: No, the T-matrix is actually the matrix, the relevant matrix, of the time evolution operator. And if you factor out the delta function for the energy shell, you get the S-matrix. But we're not talking about a time evolution operator here. We're talking about the Fourier transform. And we're looking at the resolvent, which is the function G. And now, we've introduced a function R. I actually tried to look it up-- I wanted to use the correct word in class. I couldn't find the name. In the book, it's just called the function R. So it is the function R. And the function R turns out to be the kernel of the resolvent G. But none of it is the S-matrix or T-matrix. It is related. Because if you do the inverse Fourier transform of G, we go back to U. And then, we have, I would say, the T-matrix. AUDIENCE: This is the self energy? PROFESSOR: We will find that R has a real and imaginary part. One is a self energy, and the other one, the imaginary part, is a decay rate. The real part will be the self energy. Yes, we connect a lot of buzzwords you may have heard here and there. OK, the question is, do we have to now define pentagons and hexagons? Do we have to find more and more symbols for more and more complicated sums? But the nice thing is, no. Because n equals 3 means we have to start in state b. We have to end in state b. And one time in the time evolution, we can go through state b. And in between, we can go from here to there with any combination of states you want. But one thing is not allowed-- to involve the state b. Because we are focusing on three occurrences of the state b. And everything else other than the state b already has a symbol. It is the square symbol. So this is the exact representation for n equals 3. And the contribution to the resolvent G, the Fourier transform of the time evolution operator, is-- well, we have factored out three occurrences of the state b. And between them, we need two square boxes. But the square box already has a name and an algebraic definition. So this is nothing else than the kernel Rb, squared. OK, I've shown you n equals 2. I've shown you n equals 3. I assume it's now absolutely clear how to continue. It just involves more and more powers. So the lowest order is that. And whenever we ask what happens when we allow more appearances of the state b, for each of them, we obtain another square box. So by looking at the terms which are bothersome, and regrouping the infinite terms according to one occurrence, two occurrences, three occurrences of this divergent denominator, we have now found an exact expression for Gb of Z. And since this is now an algebraic equation with a geometric series, we can sum it exactly: Gb of Z equals 1 over Z minus Eb minus Rb of Z. Well, like with every exact result, you have to ask, what is it useful for? Because we started out with U, which we couldn't calculate. We Fourier transformed and had G, which we couldn't calculate. And now we've expressed G in R, which of course we cannot calculate exactly. But there is an importance. We have made progress, for the following reasons. Namely, the first is that those resonant terms, which appear in the time evolution whenever the system goes back to the state b, are now fully accounted for. We have sort of factored those terms out. We've given them special treatment.
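Explicitly, the regrouped series is a geometric series in the problematic factor and sums to the result just quoted:

$$ G_{b}(Z) = \sum_{n=0}^{\infty} \frac{\left[R_{b}(Z)\right]^{n}}{(Z - E_{b})^{n+1}} = \frac{1}{Z - E_{b} - R_{b}(Z)}. $$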
And therefore-- and this is the main result-- the expression which now is the non-trivial part, the function R, the kernel, has no divergences. And therefore, because there is no critical part to it, rather simple approximations can be made and lead to physically meaningful results. That's an idea you may see often in physics. You have a theory where there's something complicated, a non-perturbative divergence. But you just rewrite the theory, transform the equations in such a way that the structure of the equations now accounts for the physics behind it. And the part which has to be calculated now can be calculated with a crude approximation. And you still get the correct physics out of it. The structure of the equation accounts for the physics. And the numerical part has become very harmless. But what it does is: even if you now do a lowest-order approximation to the function R, even those very simple approximations correspond to an infinite number of terms in the original expansion. Or in other words, the message is: you have an expression, and if you do a perturbative expansion-- well, maybe you should do some form of perturbative expansion not of the whole expression. You should do it for some denominator, or for some part of the denominator. Because then, the perturbative expansion involves no divergent terms and can be performed. So therefore, we are now in a position to make approximations to the function R. And the simplest approximation which we can do is, we can just try to see if we can get away with very low order. We are maybe not getting into any divergences here. And let me call this now the triangle. So the circle was the naked interaction. The square would be the exact function if you sum up those interactions to all orders. And the triangle is now, well, the step in between. We hope to get away with the triangle. So that would mean the following: the exact formulation involved-- let me get black-- had a propagator. The state b has to go through multiple squares. This is how it propagates. This would be the exact result after the resummation we have done. And the approximate result is now that the squares are replaced by triangles. So that's pretty neat. But the importance comes now when I pull things together: I want to show you what we have exactly done for treating an atom in the excited state, and for treating light scattering. So the state we are interested in is the atomic state b, and no photons. And the properties of the atomic state are obtained when we know the function Gb of Z. Just to tell you what it means: this is the time evolution operator when we Fourier transform back-- an inverse Fourier, or Laplace, transform, which is a contour integral in the complex plane, a generalization of the Fourier transform. This would take us then to the time evolution for the atom in state b. So this is now the diagonal matrix element of the T-matrix. And this is the matrix element of the time evolution operator between the state b, 0 and the state b, 0. So we are calculating, of course, the Fourier transform of the time evolution of the state b through the resolvent G. And all the work we have done with our diagrams means: instead of calculating G, we are calculating the kernel R of Z. And if we use the lowest, second-order approach, then diagrammatically-- just give me one second, the photon was emitted. Yeah, sorry-- so the process we have considered is that we go to second order.
We can go through an intermediate state by, say, emitting and reabsorbing a virtual photon. That's what two vertices mean. At the first vertex, the photon appears. At the second vertex, the photon disappears. So that means now that, with this approximation, we approximate Gb of Z, the Fourier transform, the resolvent, in the following way. We let b just propagate, which is problematic because this has divergences. But we are allowing now the state b to go through intermediate states a. And remember, since this appears in the denominator, that means for G-- just make a Taylor expansion in R-- that it contains this process to all orders. So that propagator, the sort of propagation G, involves now all possibilities. We have to end up in state b again. But we can go through intermediate states a or a prime as often as we want. Or actually, we have summed it up. Our result contains those processes to infinite order. So the question is, what have we neglected? I mean, that looks like a lot. We allow the state b to emit a photon, go to another state. It's reabsorbed and such. So the question is now, what is neglected? So what we neglect is actually a lot. In this lowest-order approximation for the function R, we approximated R by second order. We have only two vertices. So what we neglect are all processes where we have not just two vertices and one intermediate state-- where we have several intermediate states between two occurrences of the state b, 0. Or diagrammatically, what we have neglected is-- let me just give you an example. So what we have neglected is-- we have to always start with the state b, and we have to end with the state b. But now the idea was that-- one, two, let me just go through four vertices. The approximation we have done-- you have to sort of look through that maybe after the lecture and see that that's what we've really done, but trust me-- is that when the system goes away from the state b into an intermediate state a, at the next vertex it has to go back to the state b. So what we have not included are processes where we go through states a, a prime, a double prime, and then eventually we go back to the state b. Or we have not included processes where we scatter, absorb-- this is a state a-- but then, we don't go back to b. We go to a prime. Then, we scatter again. Here we are in state a double prime. And then, we are back in b. So in other words, we have said: whenever something happens, and we go away from state b, the next vertex has to take us back to state b. This is the nature of the lowest-order approximation. We have included, to infinite order, all processes where the state b emits a photon, reabsorbs it, emits and reabsorbs it. But the system cannot go, sort of, two steps away from the state b. It's only one step, and then go back. This is the nature of the approximation. OK, so our result for this kernel, which describes the state b, is-- let me just go back so you can see it again. I'm now writing down for you the equation for the triangle, which is an interaction V, an intermediate state, and another interaction V-- which is nothing else than Fermi's golden rule where we-- well, with a little twist-- have the initial state. The dipole interaction, or the p dot A interaction, takes us to an intermediate state with a photon with wave vector k and polarization epsilon. We propagate in the intermediate state with energy Ea. And now we have to go back. But we have to go back with the same matrix element. So therefore, the matrix element is squared. And we have a double sum.
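As a formula, this lowest-order kernel is (a sketch of the board expression; the photon modes are labeled by wave vector k and polarization epsilon):

$$ R_{b}^{(2)}(Z) \;=\; \sum_{a}\sum_{\mathbf{k},\epsilon} \frac{\big|\langle a; \mathbf{k}\,\epsilon\,|\,V\,|\,b; 0\rangle\big|^{2}}{Z - E_{a} - \hbar\omega_{k}}. $$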
We sum over all possible states of the photon. And we can sum over all intermediate atomic states. Yes, so this is what we have done. This expression has, in general, a real part and an imaginary part. It's a function of the initial energy E. Let me now interpret it in two ways. But before I do that, are there any questions? Yes. AUDIENCE: Just mathematically, how do you get an imaginary part out of this? It's all real components, because we have a magnitude squared divided by, presumably, energies which are real, and so on. PROFESSOR: OK, this has now something to do with the fact that we have resonant terms. And what we often do is, we add an infinitesimal eta, and then let eta go to 0. And it's the same if you have the function 1 over x, and look at the real part and imaginary part. It needs a little bit of correct treatment of functions in the complex plane. Let me actually write it down. Then it becomes clear. But yes, thanks for the question. So I always said, we do a Fourier transform, but it's really a Laplace transform. We do a Fourier transform at real energy. But when you do the Fourier transform, you have to integrate over omega along the real axis. But the function we integrate, the time evolution operator, has poles in omega. So we can't just Fourier transform. Because we go right through the poles. But what we can do is, we can add an imaginary part, plus or minus eta. And we can just go around the poles. And then, it becomes mathematically meaningful. And what we're doing here is-- but I'm not really explaining it mathematically-- we have played those tricks here. But I hope it becomes clear if I say what the real and imaginary parts are. So the real part is this matrix element squared, in the double sum. But what we use is the principal part of it, which is well defined in the theory of complex functions. It's divergent, but you take a certain symmetric limiting procedure. So you have to interpret that as-- you have to introduce a limiting procedure to make sure that the divergence cancels out. And the imaginary part-- the value of the imaginary part of something which diverges-- if you treat the eta correctly, let the eta go to 0, you will realize that the imaginary part turns into a delta function. So what we get is 2 pi over h bar, matrix element squared, times the delta function. So that's something which you have seen. So the imaginary part gets us Fermi's golden rule. And the real part-- remember when we discussed the AC Stark shift. The AC Stark shift has a 1 over detuning dependence. And you recognize that here. So this is actually nothing else than the AC Stark shift, not due to a laser beam, but due to one photon per mode. Because we started with an atom in an excited state. It can emit a photon into any mode. And this photon is now creating an AC Stark shift. And this is mathematically the expression. And such AC Stark shifts, which appear as self energies, as energy shifts created by the state itself-- this is nothing else than the famous Lamb shift. So that's what we get out here. What we have here is, we have this function R, with its real and imaginary part, which depends on the energy E. But remember, we worked so hard with diagrams to make sure that the triangle-- first the square, and then the triangle, and this is what we calculate here-- has no resonant structure at the energy Eb.
So therefore, we can neglect the energy dependence of that, and simply replace the argument E by the energy we are interested in, namely energies close to Eb. So in other words, when we had the function, the resolvent Gb of Z, all the non-trivial dependence on energy came from this kernel. But this kernel is now so well-behaved-- there are no resonant terms-- that we can neglect its energy dependence. This actually has a name, which we will encounter later when we discuss the master equation and optical Bloch equations. So this-- neglecting the energy dependence, replacing the dependence on E by taking the value at Eb-- corresponds to the Markov approximation. The Markov approximation often means that some relaxation time, or some response of a system, is replaced by a delta function. Well, we've done the same here. Because if you replace some temporal response function by a delta function, that means the Fourier transform becomes constant. And by neglecting the energy dependence, we are now saying everything is constant as a function of energy. And that means, in the temporal domain, that we have a delta function. I don't want to go further here. But when we talk about the master equation, we will also make a Markov approximation later on, but then in the temporal domain. And the two are equivalent here. So what we've got now is, we found a solution for the Fourier transform of the time evolution operator, which initially had a divergence at the energy Eb. And this was the problem we were facing. But by now calculating the function R, we have a correction, which is a radiative shift, which comes from the real part. And we obtained, as promised, the imaginary part, which we can approximate by Fermi's golden rule. If we now Fourier transform back and obtain the time evolution of this state, it no longer evolves with the energy Eb. It has an energy shifted by this self energy. And this is called the radiative shift. But in addition, because of the imaginary part, it now has an exponential decay. And you should now-- well, this is what we may have expected. But there are two things you should learn. The first thing is that the exponential decay would be different if we had not made the Markov approximation. If we had kept the dependence of this imaginary part on energy, the Fourier transform would not have simply given us an exponential. So therefore, the exponential decay involves the approximation that the R function has no energy dependence. And you would say, well, is that really possible? If you have an atom, and it decays-- or you have the atom in state b-- at very, very short times you need Fourier transform components at large amounts of energy. So maybe for the first femtosecond of the time evolution of an excited state, you need, whatever, a whole X-ray spectrum of energies. And it's obvious that for the properties of this expression, where you sum over all states, something will happen when you go past the normal excitation energy or the ionization energy of the atom. So what you can immediately read from here is that exponential decay is a simple approximation. It works very well. But at very early times, it will break down. Because then, the energy dependence matters. But the longer you wait-- if you wait a few nanoseconds, the relevant part of the Fourier transform is only a small energy or frequency interval around the resonance energy. And then, the density of states of your photon field is pretty much constant around there. And then, this approximation is excellent.
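Pulling the steps together in formulas (a sketch; eta goes to 0 from above and implements the contour prescription described earlier):

$$ \frac{1}{x \pm i\eta} \;\longrightarrow\; \mathcal{P}\,\frac{1}{x} \mp i\pi\,\delta(x), $$

$$ R_{b}(E_{b}) = \hbar\Delta_{b} - \frac{i\hbar\Gamma_{b}}{2}, \qquad \Gamma_{b} = \frac{2\pi}{\hbar}\sum_{a}\sum_{\mathbf{k},\epsilon} \big|\langle a; \mathbf{k}\,\epsilon\,|\,V\,|\,b; 0\rangle\big|^{2}\, \delta(E_{b} - E_{a} - \hbar\omega_{k}), $$

$$ \langle b; 0\,|\,U(t)\,|\,b; 0\rangle \;\approx\; e^{-i(E_{b} + \hbar\Delta_{b})t/\hbar}\; e^{-\Gamma_{b} t/2}, $$

where Delta_b is the radiative (Lamb) shift from the principal-value part, and Gamma_b is the Fermi's golden rule rate from the delta-function part.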
So I hope what you have learned from the treatment is, number one, where the exponential decay comes from. Of course you knew that it comes from coupling to all modes, but the approximation which leads to exponential decay also involves that the density of states and the density of modes is constant-- which is excellent for certain times, but which is of course violated at early times. And finally, if we hadn't done the infinite summation of diagrams-- if we had done a perturbative expansion-- we would have never obtained exponential decay. We would have obtained some polynomial decay. Questions about that? Yes. AUDIENCE: What is the polynomial decay? PROFESSOR: Some power law. 1 minus the time-- as far as I know, it would just involve powers of t. If you do lowest-order perturbation theory, instead of getting an exponential decay, you would just get a linear slope. That's perturbation theory. And if you fix it, I think you get quadratic terms. So it's a sum of power laws. Of course, an exponential function has a Taylor expansion, which is an infinite sum of polynomial terms. And therefore, we need infinite order to get the exponential. So it's not really profound what I'm saying. It's pretty much that an exponential function is non-perturbative. Other questions? So let me wrap up this chapter. What we have discussed here is-- I haven't really discussed resonant scattering. I've now focused on the function Gb. I focused on what happens to the state b. But this is-- and this is what I want to show you now-- the only element we need to discuss resonant scattering. So when we have an atom in the ground state, and a photon comes along, and it takes the atom to the excited state b, then we go back to the same state-- it could be also another state-- by emitting a photon k prime epsilon prime. And the relevant matrix element of the time evolution operator, which is the T-matrix, involves now the matrix element with the initial energy minus the intermediate energy. And the critical part is really-- you can do it mathematically; I just show it here as a summary-- the critical part is really the propagation of the state b, which is problematic. But we have now learned-- and it transfers exactly to the light scattering problem-- that we have to include now radiative shifts and an imaginary part for the decay in the time evolution. And that means that this diagram here for light scattering has been-- we have added other terms to it. And the other terms are, of course, that when we scatter light off the excited state like this, the excited state can sort of emit photons and reabsorb them. And it can do that to-- so we go to that state. It can do that to infinite order. So in other words, for any problem now which involves the excited state, we replace the zeroth-order propagation of the state b. And mathematically, it means we replace this function by the resolvent, which we have calculated by doing an approximation to the kernel R. Questions? AUDIENCE: Question, [INAUDIBLE]? PROFESSOR: Pardon? AUDIENCE: Is this [INAUDIBLE]? PROFESSOR: Yeah, OK, what happens is, if you're off-resonant, you don't have a problem. These extra terms delta and gamma, the radiative shift and the line width, only matter when the black term-- the denominator-- is close to 0. If you have a large detuning delta here, then the small shift and the line width don't matter.
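In formula form, this replacement amounts to putting the dressed denominator into the scattering amplitude (a sketch, for a photon k epsilon scattered to k prime epsilon prime via the state b):

$$ T_{fi} \;\propto\; \frac{\langle a; \mathbf{k}'\epsilon'\,|\,V\,|\,b; 0\rangle\, \langle b; 0\,|\,V\,|\,a; \mathbf{k}\,\epsilon\rangle}{E_{a} + \hbar\omega_{k} - E_{b} - \hbar\Delta_{b} + \frac{i\hbar\Gamma_{b}}{2}}, $$

which stays finite on resonance and, for a detuning large compared with Delta_b and Gamma_b, reduces to the naked energy denominator.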
So everything we have done by correcting the naked propagation of the state b with the correct propagation-- with this infinite emission and reabsorption of virtual photons-- this is only needed if the denominator is 0. And then, we have to figure out what else happens. And what else happens is obtained, in higher order, with this non-perturbative treatment. Other questions? OK, 20 minutes left. So we now change gears. We move on to the optical Bloch equations. But let me give you one summary on this chapter of diagrams. Until maybe 10, 15 years ago here at MIT, we were not teaching that. And I felt often, in discussions with students, that a little bit more of a complete picture behind atom-photon processes is needed. What I summarized could of course cover a whole semester course in QED and how to do calculations. And if you're interested in mathematical rigor, the green book, Atom-Photon Interactions, is pretty rigorous and still very physical. But on the other hand, many of you are experimentalists, and I think you should have sort of this picture behind it: what really happens, what kind of emission processes are responsible for which effect. And at least this is sort of the take-home message which I hope you can enjoy even without mathematical rigor: that the fact that the excited state has a line width really comes from an infinite number of absorption-- of emission and reabsorption-- processes. So you should maybe think about, when you take an atom to the excited state, that the excited state does more than just absorb the photon-- the atom is excited, and then it emits. The real nature of this state is that it couples to many, many modes. It emits photons and reabsorbs them. And you can often neglect that in the simple description of your experiment. But if you take certain expressions seriously, they would have divergences-- without this infinite number of processes which happen. Of course, yes, there is the whole other regime which I should mention-- and this is when you can completely neglect the coupling to many modes. If you do Rabi oscillations with resonant interaction, you don't need all that. Because then, you're really looking at discrete states. So it really also depends on what you want to describe. If you do cavity QED with an atom, you have the Jaynes-Cummings model. And you have an exact solution. Then you have a single-mode problem. But here we have discussed what happens if you want to scatter light in free space. And then, you have to deal with the divergence of the excited state. OK, the next chapter is called Derivation of the Optical Bloch Equations. And yes, what we really need for a number of phenomena in AMO physics-- for laser cooling, light forces, and much more-- are the optical Bloch equations. But what I try in this chapter is to give you sort of the fundamental story, the profound story, the fundamental insight behind the optical Bloch equations. Because what the optical Bloch equations are about is the following. We want to describe a quantum system. But the quantum system is coupled to the environment. And that means we have dissipation. And this is something which is not easily dealt with in simple quantum physics. Because a simple quantum system undergoes unitary time evolution, described by a Hamiltonian. And unitary time evolution does not allow entropy to increase, does not allow energy to be exchanged. So a lot of things which happen in our experiments and in daily life come about because the system we are looking at follows Schrodinger's equation.
But it is coupled to a much bigger system. And so in this section, what I want to address at the most fundamental level is: what is the step where we go from a reversible equation, unitary time evolution, to something which is called relaxation, which is dissipative, where entropy is increased? And the step, of course, is that we go from pure states to statistical operators. We describe the big system. But then, we focus on the small system. And I want to show you, first with a simple example, but next week in more generality, how this completely changes the nature of the description of the small system. A small system which was just described by Schrodinger's equation now follows a density matrix equation, has relaxation, the entropy increases, and such. So to maybe set the stage: one of the simplest systems we can imagine is the Jaynes-Cummings model, where we have an atom in a cavity interacting with one mode of the radiation field. And in this course, in part one, we've dealt with it and looked at vacuum Rabi oscillations and a few really neat things. But what happens is that the system is an open quantum system. You can have spontaneous emission. And if your mirrors do not have 100.00% reflectivity, some light leaks out. So in other words, we have coupling to the environment. Of course, we can say we simply describe it. We have our atom, maybe plus the one mode of the electromagnetic field. And then, we have the environment, which may consist of photons which are leaking out and photons which are spontaneously emitted. So this is our system. And of course, if you write down the total Hamiltonian and do the time evolution, something will come out which, in general, is very complicated, very entangled. The atom is entangled. The atom moving to the left side is entangled with a photon which was emitted to the right side. And the recoil of the photons pushes the atom. So you have to, you know, keep track of all the photons which have been scattered in the lifetime of an atom. So you can do that. And you do that by propagating the system with its total Hamiltonian. But often, what we do is we simply put the photons in a trash can. We trash them. We're not interested in what the photons are doing. We're not keeping track of them. Actually, they hit the wall of our vacuum chamber, and we couldn't even keep track of them. The vacuum chamber has taken care of them. So all that we are interested in is: how do we now describe the atomic system? What, after the time evolution, is now the density matrix of the system? Of course, if we use the full description, we know the initial state of the environment. We know the initial state of our system. We propagate it exactly with the correct time evolution, and we get everything. And then, we could reduce the description by doing a partial trace: what is now the probabilistic description, the density matrix, of the atom? But this is rather complicated. What we want to do is a derivation, but in the end, we want to have a formulation which simply tells us: what is the atomic density matrix as a function of the initial atomic density matrix? So in other words, all that happens with the environment-- that there is an initial state, that it gets entangled, and that we then maybe neglect the photons, not keeping track of them-- we're not interested in all of that. We really want to focus on what happens to the atom. How does an initial state of the atom propagate into a final state? And this is done by the optical Bloch equations.
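The reduction step just described-- propagate the full system, then trace out the environment-- can be made concrete with a toy model. In this sketch (an illustration added here, not from the lecture; a single qubit stands in for all the photon modes, and the state and angle are arbitrary choices), the atom-photon state is entangled, and the partial trace leaves the atom alone in a mixed state:

```python
import numpy as np

# Toy entangled state after some evolution: amplitude cos(theta) for
# |excited, 0 photons> and sin(theta) for |ground, 1 photon>.
# Index convention: atom (0 = ground, 1 = excited), environment (0 or 1 photon).
theta = 0.3
psi = np.zeros((2, 2), dtype=complex)
psi[1, 0] = np.cos(theta)     # |excited, 0 photons>
psi[0, 1] = np.sin(theta)     # |ground, 1 photon>
psi = psi.reshape(4)

rho_total = np.outer(psi, psi.conj())    # pure-state density matrix of atom + environment

# Partial trace over the environment index: rho_atom[a, b] = sum_i rho[a, i, b, i]
rho_atom = np.einsum('aibi->ab', rho_total.reshape(2, 2, 2, 2))

print(np.round(rho_atom, 3))                   # diagonal: the coherences are gone
print(np.trace(rho_atom @ rho_atom).real)      # purity < 1: the atom alone is mixed
```

The master equation is what lets you write down the time evolution of rho_atom directly, without ever constructing rho_total.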
This is done by the master equation. So the master equation, you can see, focuses on the relevant part of the system, maybe just a single atom. And we can neglect the million photons which have been emitted. But those million photons which have been emitted into the environment, they change, of course, this density matrix of the atom. And if you find a description, the master equation includes, with extra terms, what those photons have done. Maybe this sounds very abstract. But in the end, you will find that maybe photons which are emitted produce some damping. Or if you put atoms in molasses, the atomic motion comes to a standstill. So in other words, we want to develop a systematic approximation scheme for the master equation, where the effect of all these many, many degrees of freedom can maybe be simply expressed by a few damping terms. So that's the idea. So yes, you know already one example of a master equation. And these are Einstein's rate equations, which we have discussed in the first part of the course. If you have a two-level system which is coupled to the radiation field, then we obtained equations that the rate of change of the ground state population is related to the excited state population through spontaneous emission, described by the Einstein A coefficient. And if you have a spectral density of the electromagnetic field, it causes stimulated emission. And it causes absorption, described by the Einstein B coefficient. And you have a similar equation for the excited state. So this is clearly the semi-classical limit of what we want to accomplish. But we want to know more. We really want to know not just, what is the rate equation for the atom? We want to know, you can see, the wave function. Well, wave function slash statistical operator-- the statistical operator contains all the information of the wave function. And if the wave function is a pure state, it has a certain statistical operator. And we can find the pure state. So that's what we want to do. But sort of just generically, we really want to find the full quantum time evolution. And now I just want to express that. We have to be careful. The time evolution has a Hamiltonian, but if you now bring in the environment, it cannot be simply included by adding an imaginary term. This here violates unitary time evolution. In other words, when we find an equation for the quantum system, how it evolves, it's not that everything you think which phenomenologically works will work. It has to be consistent with the laws of quantum physics. In other words, when we find an equation for the statistical-- if you find an equation which describes the atomic system, it will be a requirement that a density matrix turns into a density matrix. So a certain structure has to be obeyed. And that is actually extremely restrictive. And our derivation of the master equation will actually show what kind of operators applied to the atomic density matrix are consistent, are quantum mechanically consistent. This is actually something-- well, we're not doing a lot of equations or work today, so let me rather be a little bit chatty. This is actually something which is at the frontier of our field, both in theory, in ion traps, and with neutral atoms: that we have some evolution of an atomic system by coupling it to the environment. Well, the usual environment you can think of is just taking photons away. It gives us a damping term for the excited state.
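The remark that decay cannot simply be put into the Hamiltonian as an imaginary term can be checked directly. A minimal sketch (added illustration; basis ordering ground then excited, jump operator sigma minus as in the spontaneous-emission case, and the test density matrix is an arbitrary choice):

```python
import numpy as np

gamma = 1.0
sm = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma_minus = |g><e|, basis (|g>, |e>)
sp = sm.conj().T                                  # sigma_plus

rho = np.array([[0.3, 0.2], [0.2, 0.7]], dtype=complex)   # some atomic density matrix

# Lindblad dissipator for spontaneous emission: trace-preserving by construction.
lindblad_term = gamma * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))
print(np.trace(lindblad_term).real)    # 0.0: total probability is conserved

# Naive alternative: an imaginary term -i*gamma/2 on the excited-state energy.
H_eff = np.diag([0.0, -0.5j * gamma])
naive_term = -1j * (H_eff @ rho - rho @ H_eff.conj().T)
print(np.trace(naive_term).real)       # negative: probability leaks out of the description
```

The Lindblad form adds exactly the "refilling" term sm @ rho @ sp that the non-Hermitian Hamiltonian is missing, which is why a density matrix stays a density matrix.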
But now you can ask the question: can you construct an environment which has some degrees of freedom, with laser fields, RF fields, and such? You call it the environment, and this environment does something really fancy. And the system comes into equilibrium with the environment. But could you engineer the environment such that the system comes into equilibrium with the environment, and it is in a fancy superfluid or fancy entangled state? So can you engineer the environment in such a way that it does something really fancy to your system? Well, you can dream of it. But your dreams are restricted by the mathematical structure of all possible master equations in the world. Because the environment cannot do everything for you. The environment can only do for you what can come out of all possible Hamiltonians. And the fact that the total system evolves in a unitary way with the total Hamiltonian of the system, this is really restricting. This operator sort of stands for all possible master equations. It's restricting the master equation for your atomic density matrix. There was just-- I think it was this year or last year-- a nice Science or Nature paper by [INAUDIBLE] where they engineered the environment around ions in an ion trap. And that stabilized the ions in a state which was an unusual state, not what the normal environment would have done. So anyway, what I will be telling you is also sort of relevant to understanding this frontier in our field which is called environment engineering. OK, density matrix-- good, five more minutes. So what we have is we have a system. And we have this environment. And what we are exchanging with the environment is both energy and entropy. And so when we transfer energy or heat, there is a corresponding change in entropy. And it's a general property of all quantum systems-- it's a consequence of the fluctuation-dissipation principle, the fluctuation-dissipation theorem-- that you cannot have any form of relaxation without noise. So for instance, when we discuss optical molasses-- we do that in a few weeks-- where the atomic motion is damped, well, you have a damping term. It seems it brings some motion through friction to a standstill. But we know by general principles that damping is not possible without noise. So when somebody sells you a wonderful damping scheme which damps the motion, you should always ask: but there must be noise. What is the ultimate noise? And it's fundamental that it is there. So therefore, our derivation of the master equation will also display that-- that we do not get any form of damping without at least the fundamental quantum noise. So what we need is a description of the quantum noise which comes from coupling to the environment. The tool which we use for that is the density matrix. I assume everybody here is familiar with the density matrix. The atomic physics lecture notes on the wiki have a small tutorial on the density matrix; if you want to freshen up your knowledge about density matrices, maybe read about it. I don't want to cover it in class. The one thing we need, and I just want to remind you of it, is that for any density matrix, you can always unravel it. The density matrix can be written as a probabilistic sum over states. This will actually play a major role. We will make certain models for damping. And it's really beautiful. On Monday, I will give you the beam splitter model for the optical Bloch equations. I really like it. Because it's a microscopic model. And it shows you a lot of fundamental principles.
But the important part is: whenever we have a way to construct a density matrix by saying we have certain quantum states k, and we just add them up probabilistically, this kind of microscopic interpretation of the density matrix is called an unravelling. It's sort of writing it as a specific diagonal sum over states. But those unravellings are not unique. They describe one possibility. But there are other possibilities. And the one example which I can give you is that if you have a density matrix like this, you can write the density matrix, as this form suggests, as a probabilistic sum of being in the state 0 or in the state 1 with probabilities 1/4 and 3/4. But you can also write it as being with equal probability in two states a and b, where the states a and b are superpositions of the states 0 and 1. And you can see by inspection that this will do the trick. So we'll talk a lot about unravellings of the density matrix. That's why I want to say up front that the same density matrix can be thought of as being created by different processes. But this actually makes it even more powerful. Because we have a unified description, or even an identical description, for different microscopic processes. OK, any last questions? Well then, let's enjoy the open house with the incoming graduate students, and I'll see you on Monday.
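To put numbers to the unravelling example above (an added check; the particular superposition states below are one choice that works, and they are not unique either):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Unravelling 1: in state 0 with probability 1/4, in state 1 with probability 3/4.
rho1 = 0.25 * np.outer(ket0, ket0) + 0.75 * np.outer(ket1, ket1)

# Unravelling 2: equal probability of two superposition states a and b,
# whose relative sign makes the off-diagonal terms cancel.
a = np.sqrt(0.25) * ket0 + np.sqrt(0.75) * ket1
b = np.sqrt(0.25) * ket0 - np.sqrt(0.75) * ket1
rho2 = 0.5 * np.outer(a, a) + 0.5 * np.outer(b, b)

print(np.allclose(rho1, rho2))   # True: same density matrix, two different "histories"
```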
MIT_8422_Atomic_and_Optical_Physics_II_Spring_2013
20_Fermi_gases_BECBCS_crossover.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Most likely this will be a two-hour lecture with a short break, so I am teaching until 3 o'clock and we have the classroom until 3 o'clock. Let's jump right into cold fermions. Almost everything I say in this talk is somewhere in a review paper which Professor [? Swiller ?] and myself wrote a few years ago. The paper will be posted to the group website later this afternoon. OK, when we cool down fermions, lithium-6, and bosons, sodium, we immediately notice a difference. The sodium cloud shrinks, shrinks, shrinks and forms a small Bose-Einstein condensate, whereas the lithium cloud stops shrinking when we reach the degeneracy temperature. And, of course, you know, this is at low temperature. At high temperature the gases have very similar behavior, but at low temperature they look very, very different. And that can be directly observed. We know that bosons do something very special at low temperature, and we talked a lot about it in this course. Namely, they become superfluid and form a Bose-Einstein condensate, whereas, for single-component fermions, there is some interesting physics in the form of the Fermi sea, but nothing really special. So, in order to do something more interesting with fermions you need two kinds of fermions. And two kinds of fermions can form pairs. Think, for a moment: two atoms can form a molecule, and those bosonic pairs can condense into a Bose-Einstein condensate and become superfluid again. If that were all there is to fermionic superfluidity, I would be almost done right now and would say, "OK, a bosonic atom is a composite particle made of a nucleus and electrons. Well, a bosonic molecule is also a composite particle, but in the end they both do the same." But the special thing about fermions is that there is another form of pairing, which is much, much weaker. And this is the form which leads to superconductivity of electrons. It's Cooper pairing. And this is really much more subtle, because the pairs are much larger than the inter-particle separation. Now, in atomic physics, we have a wonderful tool to study pairing, because atoms can form molecules, weakly bound molecules or tightly bound molecules. And we can control that in experiments with the Feshbach resonance. Assume you have two atoms, which collide, and they can form a molecule. And if you apply a magnetic field, the two free atoms and the molecule may have a different Zeeman shift, and therefore, there may be a level crossing. And this, as a function of magnetic field, is called the Feshbach resonance. If the crossing is just a crossing, nothing happens, because the two configurations would not interact. So, physics happens when there are interactions-- some hyperfine coupling, for instance-- which turn the crossing into an anti-crossing. And now you see the following physics around this Feshbach resonance: we have two atoms here. Here, we have a stable molecule. And there are many, many more molecular states down there, but for this restricted Hilbert space this is the lowest state. It's a stable molecule. The atoms, if you compare the solid line with the dashed line, attract each other because the solid line has lower energy, whereas, on the other side of the Feshbach resonance, they repel each other.
So therefore, if you look at the scattering length, the normalized force between atoms, we find this dispersive feature around the Feshbach resonance, and atomic physics wouldn't be the same without Feshbach resonances. We can now tune attractive and repulsive interactions. In a richer situation there may also be zero-crossings of the scattering length, which allow us to realize non-interacting systems and such. And Feshbach resonances were first observed at MIT some 15 years ago. So, coming back to fermions. This Feshbach resonance gives us something very interesting regarding pairing. On that side, where we have stable molecules, those molecules are bosons, and those bosons form a Bose-Einstein condensate of bound pairs. Over here, we have attraction. But the attraction is not strong enough to form a bound state; at least, two atoms in vacuum would not form a molecular state. The attraction is too weak. But we know from condensed matter physics textbooks that if you have many fermions, if you have a Fermi sea, then the Fermi sea plus attraction equals BCS Cooper pairs. So now we have an opportunity, by simply changing the magnetic field, to go from real, strongly-bound pairs to weakly-bound pairs, and eventually to Cooper pairs, which would not form pairs in vacuum. And this is a kind of physics which was for the first time explored with ultracold atoms, in the BEC-BCS crossover. Jenny. AUDIENCE: Just to clarify. So in the repulsive regime, you said the atoms form molecules, but the atoms repel each other. So you're saying atoms that don't form molecules repel each other, or [INAUDIBLE]-- PROFESSOR: Yeah. Actually, if I go back to that: here they form molecules, here they repel each other. And actually a lot of people think the repulsive side of the Feshbach resonance means there is repulsion. But look, the atoms attract each other so much that they form a molecule. You can actually see that the repulsion of the atoms comes because there is a very attractive molecular state below, and those two states repel each other. So the repulsion up here is actually caused by the molecular state down there. So in other words, if you go from the attractive to the repulsive side-- many people use it in terms of pairs-- you actually go from very weak attraction, which is not sufficient to form the bound state, to stronger and stronger attraction, which now forms a more and more strongly bound molecule. Mark. AUDIENCE: Are there ways you can tell whether the molecules are repulsive or attractive just by physics, or [INAUDIBLE]. PROFESSOR: You mean what the force is between the molecules now? It turns out that the molecules are repulsive. In some picture here, when the molecule forms, you can see it's a loosely bound molecule. And the repulsion between atoms turns into repulsion of molecules. If you use an [INAUDIBLE] which has approximations, you would actually find that the scattering length for molecules is exactly two times the scattering length for atoms. But if you work it out correctly, the factor is different from 2. Colin. AUDIENCE: You said that the molecules are bosons. I guess they're only bosons if the size of the molecule-- PROFESSOR: I come to that later. We come to that later, yes.
AUDIENCE: My question was, I guess, the size of the molecule scales pretty strongly with how far away you are from resonance, the C6 over R6 potential-- PROFESSOR: In the universal regime, which is open-channel dominated-- sorry for talking shop-- in that regime, the size of the molecule is equal to one-half of the scattering length. So as the scattering length diverges, the molecule is very, very big. And as we move away, the molecule gets smaller and smaller. But once we get out of this quadratic regime, out of the universal halo regime, then, of course, the molecular size is completely determined by the closed channel. And it is whatever it is in the closed channel. And there's no relation to the scattering length. OK. So in other words, it is now possible using Feshbach resonances-- Yes. I'm just missing colors; those dots should be red. Do we have a color problem? The red has turned into black. At least on the monitor. That is much nicer. On the internal monitor it's red; it's just that the computer does not show red. Let me just try that. No, it's not cooperative. Just one more second. I have many more slides with color, so I just want to make sure that you recognize the features. I don't see any easy fix. So maybe we should live with just imagining some of the black is red. So in other words, by using a magnetic field, we can go from strongly-bound pairs, which form a Bose-Einstein condensate, to weakly-bound pairs, which resemble Cooper pairs of electrons in superconductors. So this is the physics of the BEC-BCS crossover, and in the middle of it we find a new form of superfluidity, which has not really been realized before. Namely, you can see, these are molecules which are too big to be called molecules. Or these are Cooper pairs which are too small to be called Cooper pairs. It's really pairs which sort of elbow each other. They just fill all the space. The pairs are exactly as large as the spacing between atoms. OK. In class I always try to teach you something conceptual. And the conceptual part I want to address now is: let's assume we are on the right-hand side, on the attractive side of the Feshbach resonance. Two atoms weakly attract each other with a negative scattering length, but there is no bound molecular state. So how should you now understand that if you have many fermions, those fermions form Cooper pairs? And the question I would have for you is-- you've probably heard about it-- is this pairing mechanism, which requires the Fermi sea, many-body pairing? That somehow correlations between many particles are necessary, and it's a genuine many-body effect? Or can you still understand Cooper pairing as mainly a two-body effect? So in other words, how much many-body physics is really involved in creating pairs in the presence of a Fermi sea? Does anybody know the answer? What you hear from most people is that it's a real many-body effect. Two particles cannot form a pair in vacuum. So it's really many-body physics, and you say, "Wow, I really have to understand complicated physics." AUDIENCE: [INAUDIBLE] PROFESSOR: Pardon? AUDIENCE: [INAUDIBLE] PROFESSOR: It has something to do with a Fermi sea. What-- [? Timor ?] first. AUDIENCE: I may know the answer. Maybe you would like to-- PROFESSOR: Take it away. AUDIENCE: There's always a bound state in two dimensions, I think, between both [INAUDIBLE] the Fermi sea helps you get the two dimensions. PROFESSOR: That's correct. So what I want to show you is exactly what [? Timor ?] said.
You can really understand Cooper pairs. You can understand the exponential dependence on the scattering length by simply looking at one particle, at two particles, with a very weak potential. But you have to look at it in two dimensions. And it is eventually the Fermi sea which provides a two-dimensional density of states. So in other words, the binding of Cooper pairs, at least qualitatively, can be understood with a single-particle Schrodinger equation, with undergraduate physics knowledge. Colin. AUDIENCE: I guess the picture, sort of, when you think of BCS in a real solid, like a three-dimensional solid, is that one electron traveling through the lattice repels all the electrons of similar spins [INAUDIBLE] exclusion. And then the electron of another spin moving with opposite momentum sort of sees that as a deficit of that spin and is sort of attracted to that region. So in that sense, isn't sort of this effective clearing out of other particles to make room for [INAUDIBLE] many bodies associated with the Fermi sea [INAUDIBLE] PROFESSOR: Very good. So we are talking about two different things here. One is, we need an attraction. In the atomic system we get an attraction because of van der Waals molecular forces; that's it. For electrons, which have Coulomb charge, where is the attraction? And you wonderfully described that there is attraction between electrons. It's like the couch effect. If you sit on the couch, you indent the couch, and the person next to you will move towards you. Similarly, when an electron goes through a lattice, it leaves a polarization of the lattice behind, and the next electron is attracted to this trajectory. So what you are explaining is the phonon-mediated process which leads to attraction. But if I have two fermions which attract each other, I don't need this complication. AUDIENCE: But couldn't you even get it without a lattice, [INAUDIBLE] the Fermi gas? If I have one fermion, and I move it, the location that it was at has been-- all the other fermions have been pushed away from that, so there's sort of a density depletion there. So couldn't the other spin component see that as an effective attraction? PROFESSOR: Density depletions are density waves, and these are sound waves-- maybe there is a possibility to map it onto phonons in a lattice. I haven't heard about it. So I've never heard about such an attraction which would happen in a Fermi gas, which would give the Coulomb repulsion some correlated attractive component. What is well described, of course, is the couch model or the phonon model you mentioned. But I'm saying, this is just giving you the attraction. But the question is now, and that's the next question I raised: once we have an attraction between two particles, do we need many-body physics or single-particle physics to have pairing? So let me address that. Now, this is out of the review paper; you can read much more about it there. But it simply reminds you-- an undergraduate problem-- what happens when you have two particles with a very weak attractive potential? And, of course, since you can eliminate the center of mass motion, when I say two particles, it's the same as one particle attracted to a central potential. What you learn, or what you may have already learned in your undergraduate days, is that in 1D, if you have an infinitesimal depth of the square well, you find a bound state. The bound state will decay very slowly, but it's a bound state. In two dimensions you also find a bound state. In two dimensions a logarithm appears.
Often when a logarithm appears, it's sort of the limit. The logarithm is the weakest form of binding. And if you go one step further, for a very small attractive potential, you do not have a bound state. So in three dimensions you need a minimum. You need a critical depth of the square well-- a critical depth before you get a bound state-- whereas infinitesimal attraction is enough to bind particles in one or two dimensions. So therefore, when we talk about fermions with very weak attraction-- in 1D and 2D, no problem. They can form pairs by that, pairs even in vacuum. But in 3D we need something else, because an infinitesimal attraction is not enough to form a pair. Now let me ask you the following question. Let me just cover the slide, because it gives the solution. You all know that in three dimensions, when you have radial symmetry, you can transform the three-dimensional Schrodinger equation into an angular equation, but then also into a one-dimensional equation for the radial wave function. And of course, this one-dimensional radial equation has the same potential as the original three-dimensional equation. So I don't know, I want to hear from one of you: why does an infinitesimal square well potential with attraction now give bound states in 1D, but not in 3D? Because in 3D, the radial wave equation uses the same infinitesimal potential as in 1D. Jenny. AUDIENCE: Because you have to write down an effective potential that takes into account the centrifugal-- PROFESSOR: Perfect, but let's use L equals zero. Then the centrifugal correction is zero. Otherwise you are right. It's an effective potential. But let's talk about L equals zero, S states. Then there is no correction due to the centrifugal forces. Colin. AUDIENCE: The radial derivative is no longer just a simple second derivative [INAUDIBLE]. PROFESSOR: No, it is. You have a more complicated radial derivative, but then you go to an equation which just says d squared by dr squared of the function u, and it's exactly the one-dimensional Schrodinger equation which you write down. You can transform the radial equation with the more complicated derivative into something which is exactly a 1D Schrodinger equation. Yes. AUDIENCE: You can [INAUDIBLE] relationship between [INAUDIBLE] PROFESSOR: Sure. The uncertainty relation can always immediately help you, if you have an uncertainty relation for kinetic energy and you realize that you don't have enough potential. But maybe mathematically, what is different between the identically-looking 1D Schrodinger equation and the radial Schrodinger equation for the wave function u? AUDIENCE: I don't know how this is going to lead to it, but I noticed one difference is that in the three-dimensional case you can't go less than zero. PROFESSOR: Exactly. In the three-dimensional case you have a wave function psi, but then you do a substitution, and the 1D Schrodinger equation is not for psi, it is for u. And u is the radial part multiplied by r. So you transform to a different function, which has one more power of r. And the requirement because of this is that your radial wave function of this one-dimensional radial equation has to have a node at the center. And now you see what happens. If you start at zero, your wave function shoots up. If you don't have enough attraction, it will never curve down and become a normalizable state. Whereas in 1D you do not have the requirement that the wave function of your 1D Schrodinger equation has a node.
So you have the same equation, the same infinitesimal potential, but the radial equation has a different boundary condition. So that's the difference. And that's important. It changes the physics completely. OK. Now it looks very different, and it seems you have to-- Niki. AUDIENCE: What about 2D? PROFESSOR: The limiting case; we come to that in a second. In 2D an infinitesimal potential is still enough. Now I want to show you how you can take care of all dimensions with one equation, and not solve three different equations with three different kinetic energy terms. That goes as follows. Recognize that this is just Schrodinger's equation: kinetic energy minus energy equals potential. But now we can Fourier transform it. And the nice thing about the Fourier transformation is that the second derivative, the spatial derivative, is simply q squared. So therefore, if you now describe everything in Fourier space, you have q squared plus k squared times psi, but you divide by q squared plus k squared. Here you have a product. The product turns into a convolution. Anyway, it may not be the most familiar way you've seen it, but this is just Schrodinger's equation in Fourier components. And now we want to simplify it given that we have a short-range potential. So we assume we have a short-range potential. If you had a delta function, the momentum components of the potential would be constant for all q. But we get into something unphysical if we don't put in a cutoff, so we put in a cutoff. But with that cutoff, you can now rewrite the equation as follows. And now we want to do one thing. We want to integrate over q. Then on the right-hand side we also have an integral over q. And then we divide by the common factor. And what you get is this. I know I'm going fast, but this is nothing else than rewriting Schrodinger's equation for you. But now look at it. It has an integral over all energies with the density of states, rho of E, of energy. E is k squared. This is the energy of the bound state, and we want the bound state. And on the left-hand side we have our infinitesimal potential. And one thing is clear: if we have an infinitesimal potential, we can only get an infinitesimally bound state. So now we do the following. If V0 goes to 0, if you go to an infinitesimal attraction, the left-hand side diverges. And the question is now, if the energy goes to 0, does the right-hand side diverge or not? In other words, if this integral, where you set E equals 0, does not diverge, you cannot fulfill this equation with an infinitesimal attraction in the square well potential. So therefore, the condition for a bound state is now that this expression here, with E equals 0, diverges. And this combines now all the cases. And you have now an integral equation, and you can solve it for infinitesimal V0. You can solve it: what is the energy as a function of V0? And in two dimensions, where the density of states versus energy is constant, you have the weakest divergence on the right-hand side. It's a logarithmic divergence. In three dimensions, this doesn't diverge. In one dimension it diverges big time. Two dimensions is the limit. There's a logarithmic divergence. And if you do the math, which is elementary, but I'm not doing it, you find that the energy of the bound state depends exponentially on minus 1 over the attraction. So for infinitesimal attraction, this turns into e to the minus infinity. It's the weakest bound state you can imagine, and this happens in 2D. OK, let's go from one-particle physics back to fermions.
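Before doing so, the divergence criterion just discussed can be written schematically (a paraphrase of the slide's equation, added here; the free-particle density of states in d dimensions is used):

```latex
\frac{1}{V_0} \;\propto\; \int_0^{E_c} \frac{\rho(E)\, dE}{E + |E_B|},
\qquad \rho(E) \propto E^{\,d/2 - 1} .
```

As |E_B| goes to 0, the integrand behaves like E^(d/2 - 2): in 1D the integral diverges strongly, in 2D it diverges logarithmically (giving |E_B| proportional to e^(-c/V_0) for some constant c), and in 3D it stays finite, which is why a critical well depth is needed there.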
And let me now present to you how Leon Cooper really found the key ingredient for the long-standing problem, "How do electrons pair in a superconductor?" And he did the following. He made an artificial simplification, which, you can see, is ingenious. He assumed the following. He just wanted to understand: how do two electrons pair? But he simply assumed that those two electrons are on top of a Fermi sea. So he pretty much said, "I want to solve the problem where two electrons just scatter through all available states, and then figure out if an infinitesimal attraction between those electrons leads to binding." Of course, in the real world, the electrons in the Fermi sea are identical to those electrons, to those two special electrons, and they're constantly exchanged. But he simply made this artificial model. And then, of course, we can immediately write down the equivalent equation. It's exactly the same formalism, except for when we integrate over all energies here. We integrate, yes, we integrate over the three-dimensional energy. But now, we integrate only over a small region on top of the Fermi sea. And everything looks the same. We just have to sometimes subtract the Fermi energy. All energies are now measured not relative to 0, but relative to the Fermi energy. If you compare it to the equation I just showed you for the single-particle bound state, you pretty much see that everything is the same, except that energies are measured relative to the Fermi energy. And secondly, we're not integrating from zero, as before. We start integrating from the Fermi energy. And that means now that in this area of integration, the three-dimensional density of states is constant-- namely, it's the density of states where the energy is the Fermi energy-- and it can be pulled out of the integral. And then we have the same logarithmic divergence as we had in two dimensions. So therefore, the problem of two electrons scattering on the surface of the Fermi sea means that we have essentially the same condition for the electrons as if they lived in two dimensions. Question. If the world were four-dimensional, and we had the same physics-- we have two electrons on the surface of the Fermi sea-- would the physics of the two electrons, in that regard, be 4 minus 1 dimensional, three-dimensional, or would it be two-dimensional? AUDIENCE: [INAUDIBLE] PROFESSOR: Well, that's exactly the question. Let's assume you had four spatial dimensions. For this kind of analysis, it's the effective density of states on the Fermi surface that matters. Is it now two-dimensional? Or is it three-dimensional? AUDIENCE: It should be three, right? You remove one dimension by restricting yourself to the area in the Fermi sea? PROFESSOR: It is, if you say the particles have momentum on the Fermi surface: the Fermi surface fixes one momentum, and now they have three other momenta. So in that sense, the motion on the surface in four dimensions is three-dimensional. But it was sort of a trick question, to some extent, because here we have reformulated everything with energy. And the answer is now that the density of states is constant. It is the four-dimensional density of states, but evaluated at the Fermi energy. So therefore, no matter in what dimension you are, because you are restricted to the Fermi sea, you integrate over energy with a constant density of states. And if you pull the constant density of states out of the integral, you have a logarithmic divergence. And you pretty much get the same answer.
So therefore, the physics in four dimensions would be the same. The fermions which live on this Fermi surface would just marginally pair. AUDIENCE: Can you explain the reduction of dimensions? Where does [INAUDIBLE] PROFESSOR: Well, the motion, you can see, on the Fermi surface is n minus 1 dimensional. But the energetic analysis always uses a constant density of states. And the constant density of states is characteristic of two dimensions. So you have both. If you analyze the motion, yes, you have n minus 1 dimensions. But if you look at the question of bound states, you encounter the same situation as in two dimensions, because the effective density of states is constant. OK, so with that, the message I have for you is: Cooper pairing is a single-particle effect. A single particle bound to an arbitrarily weak potential in two dimensions. And the only many-body physics which allows those Cooper pairs to form is that you have Pauli blocking, that you have a Fermi sea, which prevents particles from going into the deep Fermi sea. They stay on the surface, and therefore, they form weak pairs. So the only many-body physics which we need for that kind of pairing is Pauli blocking. Mark. AUDIENCE: Sorry. Can you explain what ER is in the previous slide? PROFESSOR: ER was the cutoff. We were cutting off the potential at a short-- we didn't use a delta potential, we used a square well potential, which extends out to R. And ER is the recoil energy associated with R. It's 1 over R squared, with a few h-bars and m's to give it the units of energy. Colin. AUDIENCE: If you go back to that, I can imagine Taylor expanding my density of states in any dimension around the Fermi energy. I'll always have my constant term, and the first term in 3D is E to the 1/2, or some linear term. Doesn't that imply that even if I give the system any infinitesimal amount of energy, that integral no longer diverges and-- PROFESSOR: No. The question of divergence is robust. It depends only on the leading term and not on the expansion. If you have a divergence, you are not changing it by a small correction term. Because a small correction term is on top of unity. If you start with a density of states at 0 which goes with epsilon to some power, you have a different situation. You really catch the physics by saying you take a constant density of states here. Anyway, let me just go on and say we should now figure out-- and this is also important when it comes to pairing-- which pairs are the most important ones. And the message I want to give you here is: the pairs which are the most important ones are the ones which have zero momentum. Because those pairs can scatter into all states on the Fermi surface without violating momentum conservation. However, if the total momentum of the two particles is non-zero, they can only scatter into a segment of the sphere. And you already got the message that the higher the density of states, the better it is for the binding. So therefore, the pairs which have the largest binding energies will be the ones with total momentum zero. OK, now we go from Cooper to BCS. Those three guys figured out how they could take the physics described by Cooper and really fold it into a correct many-body formalism. In other words, not to artificially have two fermions which scatter while there is just the Fermi sea of the other ones. They are now treating the two electrons which scatter on an equal footing with the whole Fermi sea. So now we have democracy: all electrons are created equal.
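In formulas, the Cooper result just described reads, schematically (an added summary; numerical factors depend on conventions, and E_c stands for the energy cutoff of the attraction above the Fermi surface):

```latex
\frac{1}{V_0} \;=\; \rho(E_F) \int_0^{E_c} \frac{d\xi}{2\xi + E_B}
\quad\Longrightarrow\quad
E_B \;\approx\; 2E_c\, e^{-2/\rho(E_F)V_0} .
```

The binding energy is exponentially small in the weak attraction, but it is nonzero for arbitrarily small V_0-- the same logarithm as in the two-dimensional problem.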
So I want to derive for you the BCS wavefunction in a way which was not the way done by Bardeen, Cooper, and Schrieffer, but which makes an immediate analogy with Bose-Einstein condensation. So what happens is, you could now have the idea-- of course I'm very suggestive in leading you to this idea-- that two fermions form a pair. And if the pairs Bose condense, then each pair would be identical. So maybe we should take our n particles, our n fermions-- n half spin up, n half spin down-- and each of them should form a pair. So one and two form a pair, three and four form a pair, and such. And then we see, our wavefunction of identical fermion pairs slash bosons is just the product of those pairs. But the pairing wavefunction, the molecular wavefunction if you want, so [INAUDIBLE], it's the same for all particles. So this is just like in the Bose-Einstein condensate. There we have a product state of n times the same wavefunction. Here we have a product state of n over two times the same pairing wavefunction. We should anti-symmetrize; that's important. Now let us see what we got. Well, we describe everything in second quantization. We use field operators which create particles at a certain position, and then we have the wavefunction of the pairs to make sure they are created in the right position. We transform all our operators and do a Fourier transform. We go from operators in position space to operators in momentum space. But now, it's just formalism. You can read up about it. But the important thing is that now I define a pair creation operator. This pair creation operator creates one of those pairs. And what it does is the following. We want to focus on the pairs with zero momentum. It creates a plane-wave fermion in the momentum state minus k, and one with opposite spin with plus k. So these are sort of two opposite plane waves with zero total momentum. But now the prefactor is the Fourier transform of this pairing wavefunction. So if you want, assume it's a molecular wavefunction; you Fourier transform it, and this is the prefactor. And with that definition, this is now the creation operator of a molecule consisting of two fermions. OK. So now we can say, well, maybe a good idea for what describes our system is now, so to speak, a Bose-Einstein condensate, where we create n over two pairs out of the vacuum. Well, first I want to show you that this is actually the BCS wavefunction. Most people do not use this form for the BCS wavefunction, but I want to show you it is mathematically identical to the BCS wavefunction. But first I have to clarify: is that really a Bose condensate? If I had shown you this equation without telling you anything about it-- yeah, this looks like n over 2 identical bosonic particles b. But now we have to be more careful. We have our definition for the pairing operator. And now we can calculate: are those bosons? Let's just look at the commutator between b dagger, b dagger, and b, b. Yes, fine, fine. But now, the commutator between b and b dagger is not 1. It would be 1 if the population of one of those plane waves k were much, much smaller than 1; then the commutation relation is fulfilled. But the population of momentum states depends on the size of the pair. If the size is huge, you have low-lying momentum states which have a huge population. If the size of the pair is small, you can say, in a tightly-bound system, particles have so much zero-point energy that every given momentum state has pretty much zero occupation.
So in other words, this criterion here tells us that our pairs are bosons if nk is much smaller than 1. But if nk is not smaller than 1, then those pairs act in a fermionic way. And the crossover happens exactly when the size of the pairs is comparable to the particle spacing. So in other words, what we have described with this wavefunction is a wavefunction which is a true Bose-Einstein condensate for tightly bound pairs. That's what we expect. It still has this form, but those pairs now feel the Fermi pressure. And therefore, this is automatically taken into account in the formalism. Colin. AUDIENCE: [INAUDIBLE] PROFESSOR: This one. AUDIENCE: [INAUDIBLE] PROFESSOR: It's a commutator, not the anti-commutator. It's b b-dagger minus b-dagger b. I just want to be careful here because, for fermions, you often use the anti-commutator. But I want to check whether these are bosons, so I use the bosonic commutator, not the anti-commutator. Let me just show you one thing real quick. Namely, that this wavefunction is identical to the famous BCS wavefunction. Well, almost, not quite. You know that this is a state which has n particles, n over two pairs. Remember, when we talked about photons, we had Fock states with n particles, with n photons. But we also had coherent states, where alpha squared was n. But this was sort of the classic coherent state. And this coherent state could be expanded as a sum over Fock states. And one can now say, to some extent, when it comes to certain properties of light with n photons, sometimes it's important to distinguish whether it is a Fock state or a coherent state. But for many things, if you have n photons in the same state-- you can have a laser, then you have a coherent state, or you have n photons in a cavity-- that may be similar. So let me show you what happens when I take this Fock state but construct what is now the coherent state version. In the coherent state version-- this here is a Fock state with j particles-- I now take a coherent superposition, and I've chosen my factors such that I will get a coherent state. And indeed, I get the coherent state. This is now, sort of: what the laser is to the Fock state, this coherent state is to a state with a specific fermion number. And with this state now-- you can also say, in many-body physics language, I just went from the canonical ensemble, which has a fixed number of atoms, to the grand canonical ensemble, where the number of atoms fluctuates. But now I can simply use the definitions. If I Taylor expand this exponential, because all higher powers of the fermionic creation [INAUDIBLE] operators give 0, the Taylor expansion ends after the first term. And with one more step, I have the famous BCS wavefunction. So therefore the BCS wavefunction, which has been widely used to describe superconductors, is really identical to creating n over 2 pairs b, identical pairs b, in the vacuum. And I'm just telling you that because it means that, mathematically, the BCS wavefunction, which many people say describes very, very different physics, is formally equivalent to the wavefunction for a Bose-Einstein condensate. OK. So what have we done so far? We talked about one-particle physics, how very weak attraction leads to bound states. We realized that if fermions live on top of a Fermi sea, it gives at least the binding, the energetic consideration, a two-dimensional character. And in two dimensions we have bound states for infinitesimal attraction. And now we have used our intuition.
If those pairs exist, if they form kind of a Bose-Einstein condensate, we can use that to construct a trial wavefunction. And now I want to take this trial wavefunction and show you how we can use it to solve the many-body Hamiltonian. So far: single-particle physics, two-particle physics, intuition to get a trial wavefunction. But now we go to the many-body problem. Colin. AUDIENCE: Can you go to the last slide? So the BCS wavefunction has the normal component and the paired component. Where does the normal component come into the Bose-- PROFESSOR: This one? Oh, we are talking here about T equals 0. We are talking about a wavefunction. We are not talking about finite temperature, where you have a distribution over excited states. This is really zero-temperature physics. Was that your question? 100% superfluid fraction. OK, we are back now. [? Timor. ?] AUDIENCE: Just before you get into this. This may be a silly question, but why is it that up has to pair with down? It seems OK for up to pair with up, right? PROFESSOR: No. What happens is, we have assumed here that we have a short-range interaction. And up and up can never come close together because of the Pauli exclusion principle. AUDIENCE: But then in principle, if you have p-wave attraction, it could work. PROFESSOR: You're talking now about different pairing mechanisms. I'm talking here about the plain vanilla BCS s-wave pairing. And that means spin up and spin down pair. Also, this is where I can work out the analogy with the BEC-BCS crossover and such. So I'm just giving you the simplest example. But yes, in helium-3 at high magnetic field, you have one of the phases where up and up pair with p-waves. It's a triplet superfluid. OK, so we have the many-body Hamiltonian, and what I want you to sort of enjoy now is-- I took you last class through the bosons. And you should maybe see how the fermions are different, but we use similar concepts. And so you may ask yourself at the end of the day, "Are the fermions really so different from the bosons when it comes to superfluidity?" But we're getting there. Let's make the same approximation: we use the delta function here. And we Fourier transform. We go from position space to momentum space. But now comes one important approximation. Fermions are fairly complicated. Look, I mean, we have products of four operators, and we integrate over three momenta: the initial momentum, the final momentum, and the momentum transfer. So we have an integral over three momenta. We have to do something; this is too complicated. And the BCS approximation is to focus only, in this sum of pair operators, on the terms where k prime, minus k prime scatters into k, minus k. So we pretty much say: whenever we have, in the interaction term, two fermions, we are only looking at pairs where the center of mass momentum is 0. You can say it's an uncontrolled approximation, but it's based on intuition. So the hope is that we still capture the essence of superfluidity by doing this major simplification of the Hamiltonian. So now I want to show you that this Hamiltonian can be solved in two ways. And this really connects with things you have learned previously. One is, we can use as a variational ansatz this pair wavefunction which I just constructed. That's actually what Bardeen, Cooper, and Schrieffer did, but they used as a trial wavefunction the BCS wavefunction with the u's and v's, which is identical.
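Written out, that equivalence is (an added summary, schematic, with normalization omitted; phi_k is the Fourier transform of the pair wavefunction and u_k, v_k are the BCS coefficients):

```latex
b^\dagger = \sum_k \varphi_k\, c^\dagger_{k\uparrow} c^\dagger_{-k\downarrow},
\qquad
e^{b^\dagger}\lvert 0\rangle
= \prod_k \bigl(1 + \varphi_k\, c^\dagger_{k\uparrow} c^\dagger_{-k\downarrow}\bigr)\lvert 0\rangle
\;\propto\; \prod_k \bigl(u_k + v_k\, c^\dagger_{k\uparrow} c^\dagger_{-k\downarrow}\bigr)\lvert 0\rangle ,
```

where the exponential truncates because the square of each fermionic creation operator vanishes; this coherent state is the usual BCS form, with v_k over u_k equal to phi_k.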
But then I want to show you, partially because I'm feeling nostalgic about the bosons, that you can also use the Bogoliubov transformation, the same method we learned for the bosons. So the variational ansatz just means: use this BCS wavefunction, which I just showed you, plug it into the Hamiltonian, and all you do is vary the coefficients u and v until you find the minimum of the energy. And this gives you the famous BCS solution. And this is the plain vanilla formalism for superconductors and fermionic superfluidity. OK, I don't have time to go into details, but I think you've got the big picture. But let me now just emphasize the similarities we had with the bosons. We have a Hamiltonian, even with the BCS approximation, where we got rid of one sum over momenta by focusing on the pairs with zero center of mass momentum. You still have products of four operators. We want to get rid of them. But now we use a mean field idea. We always get rid of products of four operators by grabbing one or two of them and saying, "we don't look at the operator, we look at the expectation value, which is the mean field." So we want to make a mean field approximation. But where for the bosons it was simpler-- you just took the operator c0, for zero momentum, and replaced it with the square root of N0, the number of atoms in the condensate-- now we have to be a little bit more subtle. So what we should now define is a pairing mean field. Not an expectation value of c0, but an expectation value of these creation and annihilation operators for pairs. A little bit more abstract, but it follows the same logic. And then we do the decoupling approximation, which I lectured to you about with the Mott insulator: whenever we have products of operators-- products of pair operators now-- we take the mean value, but we neglect the product of the fluctuations. Exactly what I've shown you before when we described the Mott insulator to superfluid transition. OK. This is the pairing field with an index k, and eventually it's important to define a delta, the famous gap of BCS theory, which is a sum over all pairing fields with k. And it is this delta which plays the role of the condensate wavefunction, in exactly the same way as psi, the macroscopic wavefunction in the Gross-Pitaevskii equation [INAUDIBLE]. So we do the same thing: we just write it down, we drop quadratic terms, which are fluctuation terms. And the moment we have now simply bilinear products of operators, we do exactly the same as we did with bosons: we transform to a new set of operators, which is the Bogoliubov transformation. Of course, this time we want to make sure that after the transformation those fermionic operators fulfill the fermionic commutation relations. And then we determine the coefficients in such a way that the cross-terms cancel out, and what we obtain now is a diagonalized Hamiltonian. It's diagonalized; it has the ground state energy. And this, you now realize, is nothing like a harmonic oscillator. It's a quasiparticle gas of independent excitations. Gamma k is an excitation with energy E k, and this means we've diagonalized the problem. I can't do justice to the formalism. I think the best I can do here for you is to show how the same ideas come back and solve the same problem, only everything is now for fermions. Mark.
AUDIENCE: [INAUDIBLE] Separate to [INAUDIBLE] At the very top of the line, you have [INAUDIBLE] then the next slide, you have the delta, which is the sum over k expectation [INAUDIBLE] sum. PROFESSOR: I derived it myself because it confused me. But I can just reassure you: if you take the original Hamiltonian, plug this in here, and neglect the product of the small fluctuations, you get exactly this equation. It's really just accounting and regrouping terms. There is no assumption, no concept involved; it's really just plugging it in. The physics is that we neglect the correlation of the fluctuations. OK. Yeah, I think I want to wrap up the fermions. We still want to talk a little bit about ions, but we have now solved it, either with the variational ansatz or with the Bogoliubov transformation. Two very different ways lead to the same solution. We have quasiparticles. Here, we determined what the energies are. These are somewhat complicated equations for what the quasiparticle energies are. But everything is known, and we can now throw in temperature. Colin wanted finite temperature. Well, what happens with this Hamiltonian at finite temperature? Well, we just have occupation numbers for those quasiparticles, the thermal occupation, and we have to ensure self-consistency. Now look what happens in the presence of quasiparticles. This eventually leads to an equation for the critical temperature. So what we see here, as a function of the inverse scattering length: here we have a BEC transition temperature which is constant. And when we go to the BCS side, where we no longer form bound molecules, what we have is an exponentially decaying transition temperature for very small attraction. The transition temperature to the superfluid state is exponentially small. All this is theory, but it has been experimentally observed. If you want to pair fermions, you just need two states. At MIT we mainly use lithium-6. Lithium-6 has many hyperfine states, but we just use the two lowest ones. At a high magnetic field, where we have a Feshbach resonance, we pair the particles-- well, we needed the lasers and equipment-- and then we observe pair condensates. They look exactly like atomic condensates, but there are pairs inside. That there are really pairs inside, we found out with RF spectroscopy. And eventually, when we rotate the gases, we find vortices. And you know already, vortices are the smoking gun of superfluidity. So with that, I've rushed you through an important and very elegant chapter of physics, the BEC-BCS crossover. Let me just sort of end with an outlook. I've talked about basic phenomena in bosons and fermions, both centered around superfluidity, because I think it was pedagogical to show you the differences and the similarities. But we can now regard those ultracold bosons and fermions as building blocks of quantum simulators, building blocks to understand, in its simplest manifestation, interesting physics. So you can see, the bosonic gases are a quantum simulator for superfluid helium. The Fermi gases are a quantum simulator for superconductors. We briefly talked about optical lattices, which can be regarded as a quantum simulator for crystalline materials. But there is much more you can do with those building blocks.
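For reference, the central results of the BCS solution sketched in this lecture, in formulas (an added summary; signs, normalizations, and prefactors depend on the level of the treatment):

```latex
E_k = \sqrt{\xi_k^2 + |\Delta|^2},
\qquad
\Delta \;\propto\; \sum_k \langle c_{-k\downarrow}\, c_{k\uparrow} \rangle,
\qquad
k_B T_c \;\sim\; E_F\, e^{-\pi / 2 k_F |a|} ,
```

where xi_k is the single-particle energy measured from the Fermi energy; the gap delta plays the role of the order parameter, and the transition temperature on the BCS side is exponentially small in the weak attraction, as in the plot described above.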
MIT_8422_Atomic_and_Optical_Physics_II_Spring_2013
14_Solutions_of_optical_Bloch_equations_Part_2.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. First, welcome everybody and good afternoon. So the question is about the master equation. What we know is that the most general master equation has to have the Lindblad form. And the Lindblad form has a jump operator which is providing dissipation. In its simplest case, the jump operator is just the sigma minus operator for spontaneous emission, which takes us from the excited state to the ground state. And when we have the Lindblad form, the jump operator and its Hermitian conjugate, like sigma plus, sigma minus, appear on the right-hand side of the master equation with the statistical operator here, here, and in the middle. There are plus-minus signs and factors of two. I don't have an intuitive explanation where the factors come from. They come from the derivation. But you have a question about it, Nancy? AUDIENCE: So after writing the Lindblad form, we get gamma [INAUDIBLE], which we are saying for the r dot vector, we were writing [INAUDIBLE]. And we said that they are directly coming from the first one? PROFESSOR: Yes. So let's go right here. That's actually what we were going to discuss today. We want to discuss, today, solutions of the optical Bloch equations. And just to remind you, we derived a master equation for the density matrix. And this master equation, through the interaction with the reservoir, is adding a term to the Hamiltonian evolution. And this is shown right here. These are the interactions with the reservoir. And in the case of the optical Bloch equations and spontaneous emission, we have only one Lindblad operator, which is the Lindblad operator sigma minus, for spontaneous emission. So this here is the master equation for the two-level system interacting with the vacuum through spontaneous emission. And by substituting: the two-level density matrix has three non-trivial components-- sure, it's a 2 by 2 matrix, but the trace is one-- and so we can transform from the matrix elements of the density matrix to three other coefficients, which come in very handy, because they allow a simple geometric interpretation. And this is the Bloch vector. And so the master equation, which is shown here, turns into a differential equation for the Bloch vector. And those two equations are identical, we've just done the substitution. But your question is now about-- AUDIENCE: Plus gamma, because we have negative gamma over 2, which is from the first and last term. But the gamma term should be positive, right? PROFESSOR: This one here? AUDIENCE: No. PROFESSOR: This one? AUDIENCE: Yes. Because that's negative gamma over 2 times negative 2. PROFESSOR: No. I mean, first of all, this is a differential equation for r dot. OK. This is a differential equation for the Bloch vector. And we have here three different relaxation rates, which are gamma, gamma over 2, and gamma over 2. And they are all negative, because they are relaxation rates. This here is not a prefactor of r. It is sort of a constant. And it means, in the long time limit-- I mean, what happens if you have a system which spontaneously decays? After a while, the r vector is not 0. The r vector is hanging down. The system is in its ground state. And this is exactly that. Wait. Am I at r dot? Yes.
The effect is that the system, the differential equation, has to relax towards the equilibrium. You can actually re-write this equation: r dot equals a matrix times r minus r equilibrium. So here is the matrix of the optical Bloch equations. So if you re-write that, that r dot is the matrix times r minus r equilibrium, then r equilibrium times the matrix gives something constant. And this is exactly this here. But it's a mathematical identity. Just look at it. There is no assumption. It's an exact re-writing of the equation. So it is this equation which will be in the focus of our attention, not only today, but also when we talk about light forces. The first thing we did is we wanted to discuss the Mollow triplet. And just as a reminder, we talked about, at the beginning of last class, the fluorescence. An atom is excited by monochromatic laser light. And at low intensity, the spectrum is a delta function: the atom emits exactly the same frequency of light which is provided by the laser. So the incoming and outgoing photons have exactly the same frequency. And this is simply a consequence of energy conservation. But at higher intensity, we have sidebands. And we understand intuitively that we have sidebands because the system is Rabi oscillating. The same happens classically: we have an emitter which has some modulation, and that creates sidebands. What is rather easy to understand is, by diagonalizing the 2 by 2 matrix, we find that we have a splitting of the dressed states: excited state with n photons, and ground state with n plus 1 photons. So we have sort of a plus and minus state. And if we look at the possible transitions, we immediately find the explanation for the Mollow triplet. There are three different frequencies which can be emitted in transitions between dressed states. What is much more subtle, what cannot be obtained from a perturbative treatment, is the width-- how wide are those peaks in the Mollow triplet. And there are tens and tens of pages on that in Atom-Photon Interactions. But at least for two limiting cases, I could show you what the width of the peaks is, by first saying: if you have delta equals 0 and g equals 0, the matrix which determines the dynamics has three eigenvalues, gamma over 2, gamma over 2, and gamma. And now, if we detune, we add a rotation around z. If you drive the system strongly, it corresponds to a rotation around x. You probably remember, from 8.421, the spin in a magnetic field. It rotates. In a frame rotating with omega, it rotates at the detuning delta. But if you drive it, we make it rotate around the x-axis. So we find exactly that here. And this is an intuitive way to, if you want, solve the matrix mathematically. Intuitively, what happens is: if you have a z-rotation, we don't change this eigenvalue. And the rotation is just adding plus-minus i omega, where omega is the rotation frequency, to the eigenvalues for x and y, because x and y are now rotating. So therefore, we obtain that, in the case of detuning, the z-rotation, we have minus gamma, and minus gamma over 2, minus gamma over 2, with, in addition, plus-minus i times the rotation frequency. And then, if you rotate around the x-axis, then, of course, nothing happens to what used to be the x-axis. So we get one eigenvalue with minus gamma over 2. But the y and z eigenvectors are rotated. And hand-wavingly, I can tell you-- but you can also show it mathematically-- that this means the two new eigenvectors have the average of those two damping terms, gamma and gamma over 2. And this is 3/4 gamma.
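(In one common sign convention, the equation and the two limits just discussed look as follows; here g is the Rabi frequency and delta the detuning, and conventions differ between textbooks.)

```latex
\dot{\mathbf r} = A\,\bigl(\mathbf r - \mathbf r_{\rm eq}\bigr) , \qquad
A = \begin{pmatrix} -\Gamma/2 & \delta & 0 \\ -\delta & -\Gamma/2 & g \\ 0 & -g & -\Gamma \end{pmatrix} .
% Pure detuning (g = 0): eigenvalues  -\Gamma ,\; -\Gamma/2 \pm i\delta .
% Strong resonant drive (\delta = 0,\ g \gg \Gamma): eigenvalues  -\Gamma/2 ,\; -\tfrac{3}{4}\Gamma \pm i g .
```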
So at least for the two cases, for large detuning and strong drive, I was, with a little bit of intuition, showing you and deriving for you where the non-trivial bits of the Mollow triplet come from. So I hope you're at least impressed that these optical Bloch equations are powerful and allow us to make predictions which would be difficult to obtain otherwise. Any questions about that? About the Bloch vector, the differential equation, and the spectrum of the emitted light? OK, we want to now discuss one other aspect, which is actually fairly simple. We want to discuss the steady state solution. For the steady state solution, we just go to the differential equation and say, the left-hand side is 0. That immediately gives us a solution for the Bloch vector. It gives us a Lorentzian. But for strong drive, it has a term here, and we will immediately realize that this gives us power broadening. And the steady state Bloch vector has components proportional to g delta, g gamma over 2, and delta squared plus gamma squared over 4, all divided by delta squared plus gamma squared over 4 plus g squared over 2. So we can now use this steady state solution and discuss everything we want that happens in steady state. So one question is: if you have an atom in a laser beam in steady state, how much light is absorbed? Well, the basic equation is, if you have a charge q which moves in an electric field, that means work is done. So therefore, the absorbed power is related to that. And q dr/dt is nothing else than the derivative of the dipole moment. And then we calculate the average. So with that, we find the result that the absorbed power is given by-- just one second-- E0? Yeah. We now express E0 by the Rabi frequency, and an h bar omega appears. But the important thing is that we now need the dipole moment, or the derivative of the dipole moment. And the dipole moment is given by the coherences of the density matrix. And these are, if you look at the substitution, the x and y components of the optical Bloch vector. But now, of course, we need only the component of the dipole moment which gives a non-vanishing term with cosine omega t. I mean, it's like a harmonic oscillator: the component of the motion which is responsible for absorption is the one which is in quadrature with the drive. And that, you have to go back and check by inspection. But this is exactly what the y-component of the Bloch vector does for us. It will become important later on for optical forces. So we have the optical Bloch vector. We are in the rotating frame. And the one component, which is the x-component, is in phase with the light. And the y-component is 90 degrees out of phase, in quadrature. And for absorption, it's of course the in-quadrature component which is relevant and which determines light scattering. OK, this is the absorbed light. And we don't measure power in watts, we measure power in the number of absorbed photons. This is the natural unit here. And for that, we have to divide the expression by h bar omega. And that means now that the number of photons absorbed per unit time is given by the Rabi frequency and the y-component of the Bloch vector. Well, we can ask another question: what is the number of emitted photons? Well, the number of emitted photons is related to the population in the excited state. We take the population in the excited state and multiply with gamma. This is spontaneous emission out of the excited state.
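(Written out, the steady-state Bloch vector and the photon rates being quoted are, in the same convention as above:)

```latex
\mathbf r_{\rm st} = \frac{1}{\delta^2 + \Gamma^2/4 + g^2/2}
\begin{pmatrix} g\,\delta \\[2pt] g\,\Gamma/2 \\[2pt] -\bigl(\delta^2 + \Gamma^2/4\bigr) \end{pmatrix} ,
\qquad
\frac{dN_{\rm abs}}{dt} = \frac{g}{2}\, r_{y,\rm st}
 = \frac{\Gamma\, g^2/4}{\delta^2 + \Gamma^2/4 + g^2/2}
 = \Gamma\,\rho_{ee} = \frac{dN_{\rm em}}{dt} .
```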
Now, the excited state population-- you have to look up the substitution-- is related to the z-component of the optical Bloch vector. To remind you, the z-component is the difference of population between excited and ground state. But the sum of the two is one. So therefore, this is now the excited state population. And here is our steady state solution. Here is our steady state solution for the z-component. And by using that now, we find that we have the result with the Lorentzian denominator. And what we have here is gamma g squared over 4. And now, of course, if you compare that with the solution for our y, for the number of absorbed photons, you find-- well, it shouldn't come as a big surprise-- that the number of absorbed photons in steady state is equal to the number of emitted photons. So this is one important part of the solution. And I should have shown that to you earlier. When we look at the two components, the x and y-components: the y-component is absorptive, it is a Lorentzian, whereas the x-component is dispersive. Just by inspection of these two expressions: here is a delta. And the delta makes the x-component anti-symmetric, and the y-component is symmetric with respect to the origin. So what we see here is we have a Lorentzian. The Lorentzian has a natural line broadening by gamma. So the full width at half maximum is gamma, but only in the limit of 0 drive. When we drive it, we have an additional broadening, which is called power broadening. So for absorption, the full width at half maximum is 2 times the square root of gamma squared over 4 plus g squared over 2. And this last term goes by the name power broadening, or saturation broadening. Just to avoid common confusion: if you go to very high power, the gamma squared term no longer plays a role. How does the line width scale with power in power broadening? Linear in power? Quadratic in power? Square root of power? AUDIENCE: Square root? PROFESSOR: Square root of power. Yeah. So in that sense, yes, power broadening only goes as the square root of power. It's, in essence, the Rabi frequency. If you drive the system with the Rabi frequency, it gets power broadened by the Rabi frequency. And I think there's a prefactor which is 2 or square root 2. But in essence, power broadening scales with the Rabi frequency. So let us quickly discuss the case when we go to very high power, very high Rabi frequency. At that moment-- you can just see it in the solution on the page above-- the z-component of the optical Bloch vector becomes 0. That means there is no population difference between excited and ground state anymore. We often describe the limit of high power by introducing a saturation parameter, which is defined here. Saturation is discussed in more detail in Part 1 of the course. But I want to relate it here to the solution with the optical Bloch vector. The z-component, which is the difference between ground and excited state population, now has a very simple form, 1 over 1 plus s. The population to be in the excited state is 1/2-- that's what we get at infinite power-- and then, if the power is finite, it is multiplied by s over s plus 1. And the power broadened line width is the natural line width, which is the line width in the limit of low power, multiplied by the square root of 1 plus s. So what does it mean to have a saturation parameter of 1?
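(Collecting the saturation formulas just stated, again in standard notation:)

```latex
s \equiv \frac{g^2/2}{\delta^2 + \Gamma^2/4} , \qquad
\rho_{ee} = \frac{1}{2}\,\frac{s}{1+s} , \qquad
\Gamma_{\rm sc} = \Gamma\,\rho_{ee} = \frac{\Gamma}{2}\,\frac{s}{1+s} ,
\qquad
\Delta\omega_{\rm FWHM} = \Gamma\,\sqrt{1+s} \quad (\text{on resonance}) .
```

At s = 1 this gives a scattering rate of gamma over 4, and at infinite s it saturates at gamma over 2, as discussed next.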
Well, if you just look how it was defined: if you have the ground and the excited state, at a saturation parameter of infinity, you have 50-50 population. And at a saturation parameter of 1, you are half way to 50-50. That means you have 3/4 population here and 1/4 population there. So in this case, the number of photons emitted per unit time, which is always given by this formula, becomes now gamma over 4. So at a saturation parameter of 1, the emission rate is gamma over 4. At a saturation parameter of infinity, the spontaneous emission rate is gamma over 2. Any questions about the steady state solution? [INAUDIBLE]? AUDIENCE: So what we derived right now is a spontaneous emission rate. Does the optical Bloch equation say anything about the stimulated emission? Or is that the fact that it just doesn't have [INAUDIBLE]? PROFESSOR: Stimulated emission is built into the optical Bloch equations: for negligible damping, they contain Rabi oscillations. And Rabi oscillation is the stimulated emission. So this is built in. However, we are describing the drive field as a classical field, by a Rabi frequency. We are not really accounting for the number of photons exchanged with it. So in a way, the driving field is always treated in the undepleted approximation. It's a c-number. But of course, you can immediately read off from the Rabi oscillation the exchange of excitations between the atomic system and the drive field. So yes, the optical Bloch equations contain that, but not in the photon picture. AUDIENCE: If you have Rabi oscillations, you can kind of guess that, every time you go down, you have stimulated emission events. In steady state, say, you don't have Rabi oscillations. Can you still somehow extract the rate? PROFESSOR: I hope the answer to that question will become crystal clear when we discuss quantum Monte Carlo simulations. AUDIENCE: OK. PROFESSOR: In a quantum Monte Carlo simulation, we have an ensemble of atoms. And the atoms all do Rabi oscillations. But in steady state, they do it out of phase. So you have atoms in your ensemble which, at a given time, absorb, and others emit in a stimulated way. And the effect cancels out in steady state. Other questions? Yes? AUDIENCE: So you're saying, in steady state, they kind of de-cohere. And you don't have any net change in those populations. Do you still get the triplet lines if you don't have a modulation of your intensity beam? PROFESSOR: Yes. Absolutely. The Mollow triplet is a feature in steady state. And maybe let me explain that, because it's an interesting discussion. What will happen is: in steady state in your ensemble, you don't have Rabi oscillations. But if you would look at one atom, you see Rabi oscillations. It's just, when you average over all atoms in your ensemble, the phase of the Rabi oscillation has averaged out. So in other words, what would happen is the following. If you take the steady state solution and you ask, what is the dipole moment as a function of time in steady state, you will find that this is 0. So you would then ask, hey, where is the light emitted? Because isn't the light emitted by a time-dependent dipole moment? What you really have to do is, when you want to study light emission in steady state, you can't look at expectation values, because they're not changing as a function of time. What you have to do is look at the correlation function. And the correlation function sort of tells you-- let me just use it for Rabi oscillation.
If one atom is in the excited state, a Rabi period later it would be again in the excited state. So although you are in steady state, you have correlations: if you look at your ensemble and see there is a fluctuation where the atom is excited, you will actually find a correlated fluctuation a Rabi period later. So in steady state, when you want to study dynamics like the Mollow triplet, you should not look at the solution for the dipole. You should look for the solution for the correlation function. And I made this cryptic remark on Wednesday that this correlation function follows the same differential equation as the expectation value. Therefore, when I tried to discuss for you the line width of the Mollow triplet, I simply used the matrix for the Bloch vector, knowing that it is the same matrix, the same differential equation, which will describe this correlation function. But it is important here that, in steady state, the average values are 0, but we have fluctuations. And it is the fluctuations which emit the light, the fluctuations which emit the Mollow triplet. And fluctuations are described by quantities like this. So it's a little bit beyond what I want to discuss here, but Atom-Photon Interactions has a wonderful chapter on that. Other questions? There is another important case of the optical Bloch equations, which is the weak excitation limit, which is simple. But I've already asked you to look at it in your homework assignment. OK, let's now come to the third aspect of solutions of the optical Bloch equation, or master equation. And these are damped vacuum Rabi oscillations. Yes. I really like this example in this chapter, so I hope-- well, I think we have many highlights, but this is a really nice example. I learned it from Professor [? Schwan ?] when he introduced it. And what I like about it is I can use it to show you what other Lindblad operators may be important. So you suddenly understand, in a bigger context, what the master equation is. And then we continue with an atom in a cavity. And the damping is no longer by spontaneous emission. The damping is by photons sneaking out of the cavity. So we really learn something which is similar to the optical Bloch equations, but in another context. And often, if you see two different realizations of similar physics, you maybe realize more what is generic and what is special to the instance of spontaneous emission, so I really like that. But also, I found that this example allows me to introduce to you the concept of the quantum Zeno effect and the concept of adiabatic elimination of coherences. So it's a wonderful example which connects us with a number of really neat concepts. So with that promise, let me remind you that, in the master equation, when we derived it, we got an exact expression when we did second order perturbation theory. And the structure which came out was this double commutator between the interaction operator and the statistical operator. Just to remind you, in the interaction picture-- first, in the non-interacting, in the normal picture, the derivative of the density matrix is a commutator with H. In the interaction picture, it's a commutator with V, the interaction. But when we iterate the equation to second order, we plug the first result into the equation. Then we get, in second order, this structure. And by expanding the commutator, we obtain the following structure. And that gave rise to the Lindblad form of the master equation, which is rho dot equals a Hamiltonian part.
But then we now have a sum: V, the interaction with the environment, can be a sum of terms-- mu dot B, an external B field times a magnetic moment, or a dipole moment times an electric field. It can have a number of terms. And therefore, in general, we have more than one Lindblad operator. But the structure is always given by this commutator in the following form. OK. So we have this general derivation. But until now, we have only looked at a system which is a simple atom. And the environment was the vacuum. So this is our environment. And also, the environment was always in the vacuum state. So these were our assumptions. And that means that the only Lindblad operator, the only jump operator in this sum, is sigma minus for spontaneous emission. And by inserting this jump operator into the Lindblad form of the master equation, we find the optical Bloch equations. But now let's do cavity QED. And there are lots of experiments going on in the research group of Professor Vuletic. And in this situation, our system is actually not just the atom, it's the atom and a single mode of the cavity. This is our system. And the environment is all the other modes. So in other words, the atom interacting with a single mode of the cavity, this is our system. And the environment is now accessed by emission-- the atom is emitting-- or there are photons leaking out of the cavity, because the cavity mirrors do not have 100.0% reflectivity. I will immediately reduce it to one Lindblad operator. But we could now say that this system, in its full glory, has four different Lindblad operators which provide dissipation, which provide coupling to the environment. The first one is simply what we had before, spontaneous emission. However, if we allow that the environment is thermal, we have to multiply with an extra factor. Whenever you have emission and you already have population in a mode-- and n-bar is the number of thermal photons in every mode, on average-- then you have an extra bosonic, or photonic, stimulation term. That's stimulated emission, but not by the laser beam. It's now stimulated emission by the thermal occupation of all the other modes. So just assume, for that purpose, that you have a cavity here, but the cavity is now inside a black body cavity. And all the modes of the vacuum are no longer vacuum modes. They are occupied with n-bar thermal photons. Of course, if this is the case, we also have the opposite effect-- you can see that by unitarity-- namely, that we have an absorption process. We absorb a photon from the thermal background. But now, we have two more processes. And this is photons leaking out of the cavity. When a photon leaks out of the cavity, this leakage is not coupled to the atom, it's coupled to the mode. We describe the mode by a mode operator, a. So now, the leakage out with the rate kappa is represented by that. But in the case of thermal photons, we have stimulation by the thermal photons. And also, if we have thermal photons, photons cannot only leak out of the cavity, they can also leak into the cavity. So we have assumed here that we have n-bar thermal photons in each mode of the environment. So in other words-- for instance, Serge Haroche does wonderful experiments in the microwave domain. He has cavities at, I think, gigahertz frequencies. And even at cryogenic temperature, there are occasional thermal photons in this mode.
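(Putting the pieces together, the Lindblad form and the four jump operators just listed read as follows, with n-bar the mean thermal photon number per mode:)

```latex
\dot\rho = -\frac{i}{\hbar}\,[H,\rho]
 + \sum_k \Bigl( L_k\,\rho\, L_k^{\dagger}
 - \tfrac{1}{2}\, L_k^{\dagger} L_k\, \rho - \tfrac{1}{2}\, \rho\, L_k^{\dagger} L_k \Bigr) ,
% jump operators for the atom-cavity system in a thermal environment:
L \in \Bigl\{ \sqrt{\Gamma(\bar n + 1)}\;\sigma_- ,\;\;
\sqrt{\Gamma \bar n}\;\sigma_+ ,\;\;
\sqrt{\kappa(\bar n + 1)}\;a ,\;\;
\sqrt{\kappa \bar n}\;a^{\dagger} \Bigr\} .
% For \bar n = 0, only \sqrt{\Gamma}\,\sigma_- and \sqrt{\kappa}\,a survive.
```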
But you are now expert enough that you would know how to set up a master equation for that situation. But I want to discuss simple limiting cases. So I think you are glad to hear that we are choosing 0 thermal photons. So therefore, we assume we have a cavity in vacuum. And therefore, we only need the Lindblad operators L1 and L2. And we want to describe a system which has, maximally, one excitation. We try to keep the density matrix small, so we only want to consider three states. So when I said there is one excitation: state one is, of course, the state which has no photon and the atom in the excited state. State two is: we have the ground state and one photon. And those two states are coupled by the cavity. And since I really want to discuss the simplest case which already illustrates all the concepts, they are coupled by the vacuum Rabi oscillation. And we have discussed vacuum Rabi oscillations in Part 1 of the course. But it's very simple. It's just two levels coupled. It's a 2 by 2 matrix, so this is something you are familiar with. But now, the master equation, the Lindblad operators, bring in the third level, which is the ground state without photons. And this can happen in two ways. One is spontaneous emission: the excited state emits, but not into the cavity. It emits to the side. So spontaneous emission can take us down here. Or, when the excitation is in the cavity-- we have one photon in the cavity-- this photon can leak out. And that rate is kappa. And since we have talked about spontaneous emission so much with the optical Bloch equations, we now want to discuss the case where gamma can be neglected. And we want to understand what happens when the only dissipation of the system comes from photons leaking out of the cavity. Any questions about the system and the motivation? So our system is now an atom with a cavity. The environment is the vacuum. But the process of dissipation, of relaxation, is no longer spontaneous emission; it is the photon leaking out of the cavity. And this realizes different physics. And I hope, by experiencing these different physics, you get sort of a nicer picture of what dissipation in an open system is, and what is similar to, but also different from, spontaneous emission. [? Koren? ?] AUDIENCE: If you're considering only kappa, isn't there an equivalent process of order kappa that takes you to the ground state with two photons in the cavity? Don't you have to consider that state as well? That term would be of order kappa as well. PROFESSOR: So, great. The question is-- I restrict the Hilbert space to maximally one excitation. So this is how we start out with our differential equation. AUDIENCE: Oh, this isn't in a thermal bath? I thought we were assuming we had a-- PROFESSOR: Oh, sorry. I first made everything complicated. But then I said, now, let's make it simple. We assumed the thermal bath is 0. So we reduced-- in a way, yes, I told you here are four Lindblad operators, and you can describe everything you want. But then I said, hey, two of them become 0, because we eliminate the thermal bath. And now I make the approximation that spontaneous emission is negligible. And now we are back to the situation which is nice for classroom discussion. We have only one term left, and this is the leaking of the photon out of the cavity. So I will show you what happens. But ultimately, when we make the cavity better and better and kappa smaller and smaller, I will say, wait a moment.
We now have to check that our solution is still consistent with our assumption that spontaneous emission can be neglected. So that's sort of what we want to do. Yes. Let's just get two more pages. OK. So what we want to do is look at the dynamics of the system. We start out in state one. So we inject one excited atom into the cavity. And what we want to learn now is-- and it has a lot of interesting physics in it-- what will happen as a function of this new dissipation, kappa. Well, qualitatively, it should be pretty clear. If kappa is 0, we have vacuum Rabi oscillations between state one and state two. If you then put a little bit of kappa into the system, you have Rabi oscillations, but they are getting damped. And like in any oscillator, if you crank up the damping, you go into an overdamped regime. And this is exactly what the equations give us: Rabi oscillations, damped Rabi oscillations, and then the overdamped regime. And actually, the most interesting regime for us will be the overdamped regime. That's something we haven't encountered. But let me just, before we go there, write down the master equation for you. Without the photon leaking out of the cavity, of course, state three is not involved. We simply have Rabi oscillations between state one and state two. And this is actually also like the optical Bloch equations. Without damping, it's simply the Jaynes-Cummings model. Minus, that's this. And for the coherences, we get omega 0 times rho 1,1 minus rho 2,2. But now, we put in the terms with kappa. State two, which is the ground state with the photon, now has a damping term, because the photon can leak out. We get, of course, an equation for the population of state three: the photon leaks out, and we populate state three. And the damping also affects the coherences. And the form of the master equation, the Lindblad form, gives us the following damping term. So there are two new concepts which I promised to introduce. One is the adiabatic elimination of coherences. And the second one is the quantum Zeno effect. So first of all, if you look at the differential equation: when kappa is small-- in particular, kappa smaller than 2 times the vacuum Rabi frequency-- then the population rho 2,2 undergoes Rabi oscillations. And those Rabi oscillations, this population, are damped by e to the minus kappa t. So what I want to discuss now is the overdamped case. And actually, before I derive it for you, I wanted to ask you some clicker questions. I'm afraid I put the box out, but somebody put it away. So nobody took clickers? So why don't we just pass them around? And I can already formulate the question. I think you know by now that I have a preference to ask you questions about really simple quantum physics. Quantum physics is, sadly enough, something that nobody fully understands. And we want to improve on that. So I want to test your intuition by looking at this situation. We have an excited state without a photon. We have a ground state. So I want to test your intuition in the following way. We have Rabi oscillations between two levels. The case in the cavity is Rabi oscillation between state two and state one, but it doesn't matter. You can also assume that it is Rabi oscillation between hyperfine levels. And the Rabi oscillation is driven by an RF field. And the one thing we introduce now is damping kappa to this level.
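(For reference before the clicker question: the master equation just written down, for the three states-- 1, atom excited with no photon; 2, atom in the ground state with one photon; 3, ground state with no photon-- with Omega 0 the vacuum Rabi frequency. Signs follow one common convention and may differ from the blackboard's.)

```latex
\dot\rho_{11} = +\,\frac{i\,\Omega_0}{2}\,(\rho_{12} - \rho_{21}) , \qquad
\dot\rho_{22} = -\,\frac{i\,\Omega_0}{2}\,(\rho_{12} - \rho_{21}) - \kappa\,\rho_{22} , \qquad
\dot\rho_{33} = \kappa\,\rho_{22} ,
\qquad
\dot\rho_{12} = \frac{i\,\Omega_0}{2}\,(\rho_{11} - \rho_{22}) - \frac{\kappa}{2}\,\rho_{12} .
```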
But we start out the system with all of the population in this state. And in the overdamped case, when kappa is sufficiently strong, then we know an oscillator will be overdamped and will no longer oscillate. And in this situation, the probability to be in the initial state will decay: e to the minus kap-- e to the minus-- how should we call it?-- gamma t. So it decays with a decay rate gamma. And my question for you is, A, B and C: is this decay rate gamma proportional to the damping rate kappa? Proportional to the inverse of kappa? Or independent of kappa? So we have exactly this system. The system has the possibility of Rabi oscillation. I showed you the damped Rabi oscillations in the limit of small kappa. But now we go to the overdamped regime. And my question is, if the system is overdamped, will this decay rate increase with kappa? Decrease with kappa? Or will it be independent of kappa? All right. OK. There's a clear plurality. Let me not yet discuss the answer. Let me maybe try to give you another problem. And you should maybe figure out if those two problems are related. I want to ask you now something simpler. There are not three levels involved, there are two levels involved. But one of the levels is broadened by kappa. Call it an unstable state, because it can decay with the rate kappa, and we often draw it as something which is broadened. And now we have a matrix element between the discrete level and the broadened state, which I call omega. It's a matrix element, or Rabi frequency. And now you should figure out what is the physical picture which describes the coupling of a discrete initial state to something which is broadened. What is the transition rate from the original state? Does the original state decay with a rate which is omega? Or is it omega squared over kappa? Or is it none of the above? So: coupling between two levels, very, very simple. But one level is broadened. OK. Yes, it's Fermi's Golden Rule, what I'm asking you. This is nothing else than Fermi's Golden Rule. We couple from one level into some continuum of states. If I take one state and smear it out over a width kappa, the density of states, the density of modes which appears in Fermi's Golden Rule, is 1 over kappa. And Fermi's Golden Rule tells us that the coupling is the matrix element squared times the density of states, which is 1 over kappa. So this is nothing else than Fermi's Golden Rule. OK. Why don't we come back to the first question now, where we have two discrete levels, maybe two hyperfine states, which are coupled by a Rabi frequency? But then one of the hyperfine states decays, because there is some leakage, or we-- I don't know-- we interrogate this state with laser beams or such. We make it unstable. We allow this state to decay to another state. And the question is, if we make the decay of that level stronger and stronger, does the decay of the initial state increase? Decrease? Or is it independent of that? Good. So what you have learned here is that, actually, the more we damp the system on this side, the longer-lived is the initial state. But it's trivial. It's just Fermi's Golden Rule. When you couple into a level which is broadened, the coupling becomes weaker and weaker because, well, that's what Fermi's Golden Rule tells us. I now want to derive this situation for you for our atom in the cavity. And I will say that this is one example of the quantum Zeno effect. But let me already explain to you what the quantum Zeno effect is. Zeno is a Greek philosopher.
And the Greek philosophers had some deep thoughts about the nature of time, the nature of motion. And I think, in the Greek philosophy-- sorry, I don't have a strong background in philosophy, but I think the idea is that motion is when something moves around. But it takes some time to move. So if you observe it, it can't move, because every time you observe it, you localize something. That's the idea. If you see an arrow, the moment you observe it, it's localized. And if you localize something too often, it cannot move. And this is sort of what you can say here. The system wants to evolve to a second state. But kappa can actually be a measurement process. We can just shine a strong laser on this state and figure out whether something has arrived here. And the more often we look, the stronger our measurement is, the less the system can evolve. So a measurement of what arrives in this state slows down the dynamics to that state. And this is called the quantum Zeno effect. It also goes by the name of the quantum Zeno paradox. But it's just quantum mechanics, it's not a paradox. In the popular world, it is sometimes paraphrased as: if you observe the tea kettle, the water never boils. But you see now where it comes from. OK, so this is the quantum Zeno effect. And by looking at the master equation for an atom in a cavity, I want to show you in this limit that it really comes out of the master equation. Questions about that? OK, so this is the quantum Zeno effect. And well, one of the nicest papers on the quantum Zeno effect-- well, I'm biased here-- was written by my group, because we used a Bose-Einstein condensate. We had an RF coupling to another hyperfine state. And then we used a laser beam and observed the population in the final state. And we saw that the effect of the RF drive became weaker and weaker and weaker, the stronger we made the measurement in the final state. And this is the reference. And our work was the first quantitative comparison. You can do a measurement by just sending a strong laser in, maybe, every millisecond or so. Or you can use a weaker laser beam continuously. So we showed in this work that the quantum Zeno effect is the same, whether you do a pulsed observation, which is a strong measurement, or a continuous weak measurement. OK. But let's now, after this short interlude, go back to the master equation. So our job is now to solve this master equation in the limit of strong damping. And what we will find, actually, is both the quantum Zeno effect and another tidbit. We will also find the Purcell effect, which is enhanced spontaneous emission into a cavity. Now, the way we solve this master equation in this limiting case is by the method of adiabatic elimination of coherences. And let me explain it to you with that equation. What we have here is the derivative of the coherences. And when you look at the right-hand side, the derivative of the coherences is a damping term, kappa, times the coherence. Well, if you have an equation which says a dot equals minus strong damping times a, you have a very rapid exponential damping. The system immediately gets into an equilibrium. And the equilibrium is given by the first term. So if you have a situation where the populations do not rapidly change, then this equation tells us that we have a very rapid damping. And ultimately, the coherences settle to a quasi-equilibrium value in a time 1 over kappa, which is given by the first term.
In other words, you set the time derivative on the left-hand side to 0. And then the coherences are expressed in terms of the slowly changing quantities, which are the populations. So let me repeat. If coherences are rapidly damped, we can neglect the time derivative on the left-hand side. And at any given moment, the coherences will be given by an expression which involves the populations. And if the populations slowly change, the coherences will slowly change. Or in other words, the coherences follow the populations. And the lag time is, at most, 1 over kappa. And we are in the limit of strong kappa. This principle, where we eliminate the coherences and simply get a master equation for the populations, is called the adiabatic elimination of coherences. It is also described, by Hermann Haken in his famous work on synergetics, by the principle that, if you have rapidly damped modes, they are slaved by the slow modes. In other words, the rapidly damped degrees of freedom follow the slow degrees of freedom. And this is what I just said, that the instantaneous value of the coherences is always given by the populations. So let me just write that down and see where it takes us. So we have the situation that we now want to look at the case of strong damping. This is the overdamped case. And we now use the method of adiabatic elimination of coherences, because they are rapidly damped. And there is a wonderful discussion of this method in Atom-Photon Interactions. It's Exercise 18 on page 601. So mathematically, what we assume is that the populations vary slowly. For instance, they vary at the Rabi frequency. So therefore, we can say that, over the rapid time evolution of the coherences-- rho 1,2 plus rho 2,1, so this is the rapid time evolution of the coherences-- for times which are on the order of 1 over kappa, the populations are not changing. The populations change over the much longer time scale, which is the period of the Rabi oscillation. So therefore, for the short time scale, I can sort of pretend that this term is constant. And for obvious reasons, let me now call this constant, or slowly varying, term the equilibrium value, at least for short time scales. So the way we should read it is that the coherences will very, very rapidly damp to this equilibrium value. And this equilibrium value is given by the populations, which may slowly change. But that means now that, by eliminating the coherences, we obtain a rate equation for the populations only. So therefore, if you neglect short transients over the short time 1 over kappa, the coherences are always given by this quasi-equilibrium value, which is expressed by the populations. And therefore, we can go to our master equation, which was a master equation including the off-diagonal matrix elements of the density matrix, and we can now replace the coherences by an expression which involves only populations. So we get closed equations for only the populations. I'm sort of emphasizing that because you may have asked yourself: when you have the rate equations a la Einstein, with the A and B coefficients, these are just rate equations for populations. But here, in our class, we've always talked about the density matrix, about how important the coherences are. The coherences are important. Without coherences, you would never absorb or emit light. But if the coherences can be expressed by populations, you don't need a differential equation for them. OK. So with that now, we have our master equation for the populations. And by just inserting that, we find this.
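(In equations, the adiabatic elimination step just described is, in the notation of the sketch above:)

```latex
\dot\rho_{12} \approx 0
\;\;\Rightarrow\;\;
\rho_{12} = \frac{i\,\Omega_0}{\kappa}\,(\rho_{11} - \rho_{22}) ,
% inserted back into the population equation:
\dot\rho_{11} = -\,\frac{\Omega_0^2}{\kappa}\,(\rho_{11} - \rho_{22})
\;\approx\; -\,\gamma_{\rm cav}\,\rho_{11} , \qquad
\gamma_{\rm cav} = \frac{\Omega_0^2}{\kappa} \quad (\rho_{22} \text{ small}) .
```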
And we were interested in the decay of rho 1,1, which is also the population of the atom in the excited state. If we initially start where rho 1,1 is large-- we're interested in the initial decay-- then the population in the ground state is small. And we find, indeed, that there is exponential decay of the initial population. And the decay rate, gamma cavity, is given by this expression. So therefore, what we find is-- but you know it by now-- the effect that the stronger we damp the cavity, the smaller, the weaker, is the decay of the initial state. And I've already explained to you that one picture to explain that is: we have the excited state, we have the ground state. The ground state gets damped by kappa. And therefore, the time evolution, which is given by the vacuum Rabi oscillation, slows down. OK. Yes. Any questions? So we now have our result for gamma cavity. We have a result for how the atom decays in the presence of the cavity. And I want to rewrite this for you in a very nice way. Namely, I want to show you the famous result that, when an atom emits in a cavity, the atom can decay faster than by natural spontaneous decay. Because if the cavity has a resonance, the cavity changes the vacuum around the atom. It increases the density of modes. And therefore, there is a stimulation of the emission of the atom into the cavity. So I don't have to do any other math than just rewriting our result for gamma cavity in other units, in other ways. And then we can compare it to the natural decay of an atom without the cavity. So what we want to do now is take the cavity-induced decay of the excited state, and compare it with the natural decay rate, gamma. So I just have to rewrite things in a few units. The cavity-induced decay is the vacuum Rabi frequency squared over kappa. So let me remind you that the Rabi frequency is always given by a dipole moment times an electric field. It's a matrix element. Well, there's one of those factors of 2. And h bar keeps the units correct. And in the case of the vacuum Rabi oscillation, the electric field is the electric field, so to speak, of one photon in the cavity volume. And there is a factor of 2, and this here is the volume of the cavity. So therefore, the vacuum Rabi frequency is the dipole moment times the square root of 2 times the atomic resonance frequency over epsilon 0 h bar V. And that means now-- and this is a nice result-- that the cavity-induced decay is omega 0 squared over kappa. And I just use the above result for omega 0. And for kappa, I introduce the Q factor of the cavity, which is the number of oscillations before the light is damped: Q is omega over kappa. And with that, I can rewrite the cavity-induced decay as 2 d squared over epsilon 0 h bar V, times Q. And if you remember, from Part 1 of the course, or as a standard textbook result, that spontaneous decay is the dipole moment squared times the frequency to the power 3, then we obtain the famous result, which is called the Purcell factor: namely, that the decay in the cavity, compared to the decay in free space, is proportional to the Q of the cavity. There is a numerical prefactor. But then, what enters is the wavelength of the photon cubed over the volume. So consider the case of eta larger than 1-- and it's actually interesting that this result is completely independent of atomic properties. So in other words, every atom you put in a cavity, no matter what its dipole moment is, will decay with a rate which is Q times larger. There are many ways to look at it.
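(The chain of substitutions just made, written out; V is the cavity mode volume, and the factor-of-2 conventions are the ones mentioned in the lecture:)

```latex
\Omega_0 = d\,\sqrt{\frac{2\omega}{\epsilon_0 \hbar V}} , \qquad
\kappa = \frac{\omega}{Q}
\;\;\Rightarrow\;\;
\gamma_{\rm cav} = \frac{\Omega_0^2}{\kappa} = \frac{2 d^2 Q}{\epsilon_0 \hbar V} ,
\qquad
\Gamma = \frac{d^2 \omega^3}{3\pi\,\epsilon_0 \hbar c^3} ,
% dividing the two rates, the dipole moment d drops out:
\eta \equiv \frac{\gamma_{\rm cav}}{\Gamma}
 = \frac{6\pi\, c^3\, Q}{\omega^3\, V}
 = \frac{3}{4\pi^2}\; Q\; \frac{\lambda^3}{V} .
```

The dipole moment cancels, which is exactly the statement that the Purcell enhancement is independent of atomic properties.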
You can say, if the cavity consists of two mirrors, you create Q mirror images of your atom. And you have sort of an atom which has a Q times stronger dipole moment. The mirror images also radiate, and therefore you get a Q times faster decay. So this limit makes sense in many, many different pictures. And it's a general result that, if an atom emits in a cavity with quality factor Q, you get a Q times enhancement of spontaneous decay. So this is called cavity-enhanced spontaneous emission. It goes back to Purcell in 1946. And you can say the whole field of cavity QED depends on this effect, that you can enhance spontaneous decay. Or it's the opposite: if an atom has a resonance frequency between two modes of the cavity, the spontaneous decay is inhibited. And this is sort of what is exploited in cavity QED. Let me just conclude here by saying that our assumptions have to be consistent. In our derivation, we neglected gamma. When I wrote down the two Lindblad operators, we only looked at the leakage of the photon out of the cavity. We neglected spontaneous decay. So of course, this requires that eta is larger than 1. So this is the consistent range of validity of our result. Secondly, we discussed the limit of overdamping, which requires that kappa has to be larger than the vacuum Rabi frequency. And therefore, the above result is valid for large Q. Q has to be sufficiently large, but cannot be larger than this value. Any questions? Nancy? AUDIENCE: For kappa, is that the loss in the cavity-- is there any difference between the loss out of the cavity, because it's a cavity, or the loss into other modes of the cavity? PROFESSOR: Kappa is the loss. You can say kappa is the rate at which a photon in the cavity is lost. AUDIENCE: Is lost from the cavity totally? Or is lost into another mode inside the cavity? Is there any difference? Is V here just one mode inside the cavity? PROFESSOR: Well, we have not an element-- OK, if the mirror has imperfections, we usually think of light leaking out of the mirror. If the mirror is absorbing, because the coating has a tiny little bit of absorption-- you know, from the beam splitter analogy, a small absorption is equivalent here to a small transmission factor. Usually, we don't consider that the light transforms from one mode of the cavity to the other. It's either absorbed or leaking out. But if you assume that the mirrors are not super polished and they scatter light a little bit, then you would have a mechanism by which the light in the cavity scatters into other modes. And all those losses are summarized in one coupling constant, kappa. Let me finish the class today, and this week, by showing you a recent Nature paper by the Innsbruck group. We've talked so much about Lindblad operators. And the Lindblad operators, the dissipation, tell us what the effect of the environment is. And usually, you would say, well, if you have an atom, and it interacts with the environment, what is the equilibrium state? Well, if you just have the vacuum, the atom goes to the ground state. And the ground state is the attractor, the stable state of the system. If you drive the system with one laser beam, the attractor state the system goes into is the steady state solution of the optical Bloch equations. That all sounds a little bit boring.
But in this paper, based on a theoretical suggestion of Peter Zoller, and an experimental realization with trapped ions at Innsbruck, they actually engineered an environment, engineered Lindblad operators, in such a way that the system of two ions was relaxing into a Bell state. So you now engineer the environment such that therm-- I wanted to say thermal equilibrium, no-- the equilibrium state with the environment, the dissipation, leads to a Bell state. Well, it's a little bit complicated how it is done, but I just want to look at a few highlighted key messages. So the idea here is engineering of dissipation, creating dissipative operators experimentally. We usually manipulate quantum systems with Hamilton operators. But here, the system is manipulated not by the H operator, but by the L operator. And so the idea here is to have an evolution of the density matrix, a linear mapping such that the density matrix evolves in a certain way. I think you'll recognize the partial trace. You'll recognize a few equations, a few results which we have discussed. And what you see here is exactly the Lindblad form. And what they did is, by using a number of laser beams and quantum operations, they designed jump operators, Lindblad operators, in such a way that the system was damping out into a Bell state. So that seems a new frontier for research. It goes by the title that we can do quantum simulations of Hamiltonians, using trapped ions and cold atoms. But we can also quantum simulate very special environments-- maybe environments which would be hard to find in nature, but which are quantum-mechanically allowed possible environments. And they may have new features which have not been explored so far. I will add this paper to the website, if you want to read more about it. Any questions? Then a reminder. No class next week. Monday is Patriots' Day. I'm out of town on Wednesday, but you have a homework to solve. I rather feel you should continue to have a homework assignment every week, because then you have more time at the end of the term for the term paper. I always have 10 homework assignments. So if we spread them out more, there is less time left at the end of the semester for the term paper. So I decided, since we have covered enough material, we have a wonderful homework assignment for next week.
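(As a toy illustration of what engineering the L operator means: the following sketch, using the QuTiP library, invents jump operators whose unique dark state is a Bell state, so that relaxation alone prepares it. The operators and the rate are made up for the demonstration; this is not the Innsbruck implementation.)

```python
import numpy as np
import qutip as qt

# Two qubits; target state: the singlet Bell state (|01> - |10>)/sqrt(2).
up, dn = qt.basis(2, 0), qt.basis(2, 1)
bell = (qt.tensor(up, dn) - qt.tensor(dn, up)).unit()

# The three states orthogonal to |bell>; each one is pumped into |bell>
# by its own engineered jump operator  sqrt(kappa) |bell><s|.
others = [qt.tensor(up, up),
          qt.tensor(dn, dn),
          (qt.tensor(up, dn) + qt.tensor(dn, up)).unit()]
kappa = 1.0                                   # illustrative rate
c_ops = [np.sqrt(kappa) * bell * s.dag() for s in others]

H = 0 * qt.tensor(qt.qeye(2), qt.qeye(2))     # no Hamiltonian dynamics needed
rho0 = qt.tensor(qt.fock_dm(2, 0), qt.fock_dm(2, 0))   # start in |00>
tlist = np.linspace(0.0, 10.0 / kappa, 101)
P_bell = bell * bell.dag()                    # projector = fidelity operator

result = qt.mesolve(H, rho0, tlist, c_ops, e_ops=[P_bell])
print("Bell-state fidelity at final time:", result.expect[0][-1])  # -> ~1
```

Since |bell> is annihilated by every jump operator, it is the dark, attractor state of this engineered environment, regardless of the initial state.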
MIT_8422_Atomic_and_Optical_Physics_II_Spring_2013
4_Nonclassical_light_squeezing_Part_2.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Well, we are in the middle of something really interesting. We are talking about squeezing. We are talking about non-classical light. And today, I sort of want to wrap it up. And I think it will really be an exciting class. But before I continue with the material, I want to address a question which actually came up in discussions with several of the students. And this is: I realized that some people said, OK, everything makes sense, but what are we plotting? What is really squeezed? Are we squeezing in the spatial domain? Are we squeezing in the temporal domain? So the plots look wonderful, with these ellipses and the circles. But what is it really we are doing here? So let me address that. First of all, we are talking about a single harmonic oscillator. We showed that the Maxwell equations can be reduced to a bunch of harmonic oscillator equations, one for each mode. And now we are talking, today and in the previous few classes, about one single mode, about one harmonic oscillator. And the harmonic oscillator has canonical variables of momentum and position. But this is just to make a connection for you with something you have learned. What we are talking about is a single harmonic oscillator, which is one single mode of the electromagnetic field. So maybe let me draw a cartoon for that. So let's assume we have a cavity. We have an electromagnetic wave. There is propagation, where it's e to the ikz. There is transverse confinement; maybe there is a Gaussian, e to the minus x squared plus y squared over some beam waist parameter. All of that is simply the spatial mode. And we just take that as given, because we're not solving the spatial differential equation. And the two degrees of freedom are: this mode can have a certain number of photons, that's the amplitude; and the second one is the temporal phase. It can be a cosine omega t, it can be a sine omega t, it can be a superposition. But whatever we are talking about is in this mode. There is nothing happening in the spatial domain. We're just asking, what is the oscillation in this mode? The whole mode does what it should. It has a prefactor, which is the amplitude. And it has a temporal factor, which we factor out. And this is what we are talking about. Let me be a little bit more specific and say that, when we are plotting things, we are plotting the Q representation, the phase space representation of the statistical operator which is describing this single mode of the harmonic oscillator. And by taking the diagonal matrix elements in coherent states, we obtain the Q distribution. In that case, we have the vacuum. We have a displaced vacuum, which is a coherent state. And our axes are, from the very definition, in the complex plane: the real part and the imaginary part of alpha. However, we can also define the Wigner distribution, which is another phase space distribution. It's almost the same as the Q distribution. It's just a little bit smeared out, by h bar, because of some commutators. But nothing you have to worry about. In that case, the projection of the W function on the vertical axis, on the y-axis, is the momentum wave function squared.
On the x-axis, it is the spatial wave function squared of the harmonic oscillator. So therefore, we may sometimes think, when we have a distribution here, we project it, and we see what the momentum distribution is, or what the spatial distribution of the mechanical harmonic oscillator is-- which is analogous, which is equivalent, to the one mode of the electromagnetic field we are using. I know it may help you to some extent to think about the P and Q. But it may also be misleading, because it gives you the impression something is moving with a momentum P, in real space. Let me therefore emphasize what the normalized forms of P and Q are. If I take the symmetric and the anti-symmetric combinations of the annihilation and creation operator, the anti-symmetric one divided by i, both over the square root of 2, I call those a1 and a2. And they are nothing else than Q and P, normalized by the characteristic spatial and momentum scales of the harmonic oscillator. So what is important here is that a1 and a2-- forget about P and Q now; they are equivalent. But for the electromagnetic field, a1 and a2 have a very direct interpretation. They are called the two quadrature operators. And what I mean by that becomes clear when I use the Heisenberg representation for the electric field. And I'm here using the formula which is given in the book of Weissbluth, on page 175. Some pages copied from this book have been posted on the website. So we have our normalization factor, which is related to the electric field of a single photon. We have the polarization factor. But now we have an expression which involves the quadrature operators, a1 and a2. Just to be specific, we are not in a cavity here. Therefore, we have propagating waves, cosine kr, sine kr. But you can also immediately use a similar expression for the case of a cavity. Let us specify that r equals 0. And then we realize what the two quadrature operators are. a2 is the operator which creates and annihilates, so to speak, photons whose electric field oscillates as cosine omega t. And a1 is the quadrature operator for the sine omega t component. So therefore, if you simply analyze the electric field, the cosine omega t part is related to the a2 quadrature operator, and the sine omega t oscillation is related to the a1 quadrature operator. So life would be easier, but more boring, if you could create a pure cosine, or pure sine, oscillation of the electromagnetic field. But you can't, because a1 and a2 do not commute. And there is an uncertainty relation that delta a1 times delta a2 is larger than or equal to 1/2. And we have the equal sign for coherent states alpha. So therefore, if we look at the electric field-- you know, everything moves around periodically in r, because it's a traveling wave, and in t, because it's an oscillating wave. So let's not confuse things: we've already picked r equals 0. But now let's pick t equals 0. At t equals 0, the sine omega t is 0. And therefore, the expectation value and the variance for the quadrature operator a2 can be simply read off by looking at the electric field. So, in other words, at t equals 0, the electric field, which is obtained by projecting our quasi-probabilities onto the y-axis, gives the expectation value for a1-- somebody says 2; yes, for a2-- and the variance, delta a2 squared, at t equals 0. Yes. And now, if you want to see what the other quadrature component is-- well, we just wait a quarter period, until the sine, which was 0, is maximum, and the cosine omega t is 0.
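(A quick numerical check of these statements, with illustrative parameters; note that the convention of QuTiP's squeezing operator determines which of the two quadratures ends up with the reduced noise.)

```python
import numpy as np
import qutip as qt

N = 40                                   # Fock-space cutoff for one mode
a = qt.destroy(N)
a1 = (a + a.dag()) / np.sqrt(2)          # symmetric combination
a2 = -1j * (a - a.dag()) / np.sqrt(2)    # anti-symmetric combination over i

def variances(psi):
    return qt.variance(a1, psi), qt.variance(a2, psi)

vac = qt.basis(N, 0)                     # vacuum
coh = qt.coherent(N, 2.0)                # displaced vacuum: coherent state
sq = qt.squeeze(N, 0.8) * vac            # squeezed vacuum, r = 0.8

print("vacuum:  ", variances(vac))   # (0.5, 0.5): the symmetric circle
print("coherent:", variances(coh))   # also (0.5, 0.5), just displaced
print("squeezed:", variances(sq))    # one variance below 0.5, the other
                                     # above, so delta a1 * delta a2 >= 1/2
```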
So therefore, at t equals pi over 2 omega, the projection on the y-axis gives us a1 and delta a1 squared. Or, alternatively, we don't need to wait: we can stay at t equals 0 and project onto the x-axis.

Let me just throw a few things into this diagram. If you had a classical motion which was simply cosine omega t, it would be a point on the y-axis. However, quantum mechanically we can never have something which is just cosine omega t. The point has to be blurred into a circle. This is the coherent state. So this coherent state-- let me just call its amplitude 1, for the sake of the argument-- the point would be the classical oscillator: just cosine omega t, everything deterministic, no uncertainty, no nothing. Of course, in the time evolution it goes in a circle, but this is what everything does in a harmonic oscillator when time evolves, through the e to the i omega t factor. So let's not get confused by that; let's just look at t equals 0 and r equals 0, where the classical oscillator is one point. But now we have a spread here. This says that, quantum mechanically, the amplitude of the cosine omega t term is not sharp. There are fluctuations. And in addition, we have the extent of the ellipse in this direction, which we project onto the x-axis, and this tells us what the distribution in our quantum mechanical ensemble is for the amplitudes of the sine omega t motion. So the best we can do, quantum mechanically, if we want something which is really just cosine omega t, is to squeeze it, so that the cosine omega t amplitude is now extremely sharp, but the sine omega t amplitude in the ensemble is completely smeared out. So this is what we're talking about.

Now, what I think has confused some of you is what I thought was a wonderful example: the classical squeezing experiment. These are visuals which will be in your head forever-- you saw Professor Pritchard with a circular pendulum whose string he was pulling, and the circle squeezed into an ellipse. And it seems that something here is squeezed in real space. But this is actually wrong. How you should have looked at the experiment-- and I made a comment about it, but maybe not emphatically enough-- is to really think about a single pendulum. This single pendulum, if it has an arbitrary phase, is in a superposition of sine omega t and cosine omega t. And if you pull on the string-- if you shorten and lengthen the pendulum at sine 2 omega t-- you will exponentially amplify the prefactor in front of sine omega t, and you will exponentially de-amplify the prefactor in front of cosine omega t.

So what will happen if this pendulum oscillates with a phase, sine omega t plus delta? If delta is 90 degrees, it's cosine; if delta is 0, it's sine. Let's say this pendulum oscillates at 45 degrees, sine omega t plus 45 degrees. If you now parametrically drive it with the squeezing action-- let's just fix a sign convention-- you de-amplify the cosine and amplify the sine. And after a while, instead of oscillating with sine omega t plus 45 degrees, it will oscillate with an amplified amplitude at sine omega t. This is what you have done. And this is a mechanical analogy. There is, of course, no squeezing in any way, because in a classical pendulum we start with one definite value, if you prepare the system well.
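To see this amplify/de-amplify action in numbers, here is a small sketch-- not from the lecture, with an invented drive strength g-- applied first to a single 45-degree initial condition and then to a whole spread of initial conditions, anticipating the thermal-ensemble discussion that follows.

import numpy as np

rng = np.random.default_rng(0)
g = 1.0                                   # parametric drive strength (invented)

# single classical pendulum: x(t) = c cos(wt) + s sin(wt), started at 45 degrees
c, s = 1.0, 1.0
print(c * np.exp(-g), s * np.exp(+g))     # cosine part dies out, sine part grows

# ensemble with a spread of amplitudes (e.g. a thermal preparation)
c_ens = rng.normal(0.0, 1.0, 100_000)
s_ens = rng.normal(0.0, 1.0, 100_000)
print(c_ens.std(), s_ens.std())                                # ~1.0, ~1.0: a circle
print((c_ens * np.exp(-g)).std(), (s_ens * np.exp(+g)).std())  # ~0.37, ~2.72: an ellipse

De-amplifying the cosine coefficient also shrinks any spread in that coefficient, which is exactly the classical squeezing of a distribution described next.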
And then we just change the motion. We amplify, we pick out a phase, and that's what we are doing.

Now, the two ways classical squeezing can come in. One is if the motion of the pendulum has an uncertainty. Professor Pritchard did experiments with an ion trap, and actually, 20 years ago, he published a Physical Review Letter on classical squeezing. And you think, how can you publish a Physical Review Letter on classical squeezing? Well, he had developed the world's most accurate ion trap, measuring atomic masses to 10 and 11 digits of precision. And one limiting factor was, at 4 Kelvin, the thermal distribution over harmonic oscillator modes. So he didn't have just one clean amplitude: the sine omega t amplitudes had a spread, because of the thermal distribution he started from. And what he then did, by classical squeezing-- by doing classically with the ion trap exactly what he did with the pendulum, driving at sine 2 omega t-- was to take this classical distribution, where one axis is the classical distribution of amplitudes of the cosine motion and the other is the classical distribution of amplitudes of the sine motion, and squeeze it in one direction. So he had a narrower distribution of the coefficient of the cosine omega t motion. And, as I will tell you today, you can then do a homodyne measurement, which reduces the noise. So essentially, he prepared, quote unquote, "effectively a cold ensemble" by squeezing the uncertainty in the cosine prefactor, at the expense of increasing the uncertainty, the variance, in the prefactor of the sine omega t motion.

Finally, you all saw something visually. You saw how a circular motion became a linear motion. So what was going on here? Well, I mentioned to you that the circular pendulum actually has two modes-- two modes of the harmonic oscillator. But everything we're discussing here is about one mode of the harmonic oscillator. The circular motion of the pendulum was just a nice visualization trick: if the pendulum moves in a circle, you have a degenerate harmonic oscillator, one mode excited with sine omega t, the other excited with cosine omega t. If you would start with sine omega t and parametrically drive it, you amplify it. If you would start with cosine omega t, you could bring the pendulum to a stop. But instead of doing two experiments, Professor Pritchard just did one. And he showed that the sine omega t motion became larger, and the orthogonal cosine omega t motion shrank. Therefore, you saw that the circular motion, which was a superposition of sine and cosine, became a pure sine motion. The fact that there was something we could see in the spatial domain was simply due to the fact that we had two experiments in one-- two versions of the same harmonic oscillator, one in x and one in y. And when we did the experiment, we saw something visually in the spatial domain. So that's why we saw squeezing in the spatial domain. But you should really think about what the whole action is: it's an interplay of de-amplifying the prefactors of cosine and amplifying the prefactors of sine. And if the prefactor has a distribution, by de-amplifying it, you also shrink the width of the distribution. And this is what we call squeezing. Yes, [INAUDIBLE]?

AUDIENCE: Stupid question.
So the operators a1 and a2 here, right? You use those instead of a and a dagger because you use cosine and sine rather than e to the i omega t? Because they should contain [INAUDIBLE], right?

PROFESSOR: Let me go back to the definition. They correspond exactly to position and momentum of the mechanical harmonic oscillator.

AUDIENCE: Oh, that makes sense. Another thing is, technically speaking, could we call the classical electric field, E cosine kr minus omega t, maximally squeezed if we only had the cosine component?

PROFESSOR: Well, it depends how I define squeezing. You would now give a definition of squeezing which says that the variance in a1 is unequal to the variance in a2. The classical oscillator is a point: it has 0 variance in a1 and 0 variance in a2. But, as I said, you can actually apply all of this to a classical oscillator if you add technical noise or thermal noise. Then your system is prepared not with a sharp value, but with a distribution-- which may simply be a Boltzmann distribution, due to the preparation. So we call it squeezing when the noise in the amplitude of the sine motion is not equal to the noise in the amplitude of the cosine motion.

Some people apply the word squeezing only to the situation where we are uncertainty limited and then squeeze. Because, of course, you can always reduce the noise in your system just by preparing the system better-- by cooling the system, by selecting the system for measurements-- until you reach the quantum limit. So you can get a smaller delta a1 and a smaller delta a2 without squeezing, just by better preparation, or by selecting your ensemble. Squeezing in the narrower sense only makes sense when you hit the limit of what quantum mechanics allows, and you now want to distribute the variance unequally between a1 and a2, because then you can get something in delta a1 or delta a2 which is better than 1 over square root 2. And this is then really quantum mechanically squeezed. But both definitions exist. Classical squeezing exists; it's just not as common as quantum mechanical squeezing. Other questions? Yes?

AUDIENCE: I remember you said that in the classical squeezing, you were attenuating one amplitude and amplifying the other amplitude. So, in this picture, shouldn't we have the ellipse come down from the circle on the y-axis?

PROFESSOR: OK.

AUDIENCE: [INAUDIBLE] not just changing delta a1, but also changing [INAUDIBLE].

PROFESSOR: Let me just get a sketch up here. So this is a2, this is a1; a2 is for the cosine omega t, and a1 for the sine omega t, just to be specific. So you want to prepare a harmonic oscillator which is just sine omega t. This is a point here, with a value at t equals 0; call our a1 value s naught. If you are now squeezing your classical harmonic oscillator, you have a situation where a1 of t is s naught times e to the plus or minus t, depending on whether you do the parametric drive with sine 2 omega t or cosine 2 omega t. So what would happen is, this point would be amplified-- it would just move out on the x-axis; this is for the plus sign-- or, in the other case, you would damp the motion to 0, and this is the minus sign.

AUDIENCE: OK. And you were also saying the variance--

PROFESSOR: A point does not have variance.

AUDIENCE: [INAUDIBLE].
PROFESSOR: So if you want to build up a variance, you need, let's say, three points: one is the average value, one is the left outlier, one is the right outlier. And what happens now is, as you amplify the motion, you also amplify, magnify, the distance between the points. And if you de-amplify, with the minus sign, the distance between the points shrinks, because all three points converge to 0. So with what I just told you for the three-point ensemble, you can construct any initial condition you want and see what happens due to squeezing. Other questions? Good.

So the question now is, how to measure squeezing-- how to take advantage of squeezing. The situation we are facing is the following. Let's assume we have done a nice squeezing job, and that means we have created a narrow distribution of the cosine omega t coefficients. So the cosine omega t motion is rather sharp, and we also know that the electric field itself is sharp at this moment. But since this ellipse rotates, the electric field will have an enormous uncertainty a quarter period later. So if we want to take advantage of having squeezed the electromagnetic field, there are a couple of ideas we can use.

One is, we could just measure the electric field stroboscopically. We would make a setup where we look at the system in a stroboscopic measurement process: we only measure the electric field when the ellipse is like this, or like this. And therefore, we have a sharp value of the electric field.

But instead of doing a stroboscopic measurement, we can do something else. Remember, we have a distribution of cosine omega t and sine omega t-- a distribution of coefficients c and s-- and we know we are interested in the cosine omega t part. How to pick that out has actually been solved in early radios: you do a homodyne detection. In other words, you take a reference oscillator which is strong, B of t equals B naught times cosine of omega t plus delta. And if you multiply the two signals-- the signal you are interested in, or at least the one component of it you are interested in, times your local oscillator-- and then integrate over time, then, when you pick the phase delta to be 0, cosine omega t times cosine omega t gives cosine squared omega t, which averages to 1/2, whereas cosine omega t times sine omega t averages to 0. So for delta equals 0, you project out the cosine component, and for delta equal to 90 degrees, you pick out the sine component. So you can have a measurement-- a phase-sensitive measurement, by multiplying your signal with the local oscillator-- where you are only sensitive to the component you have squeezed. And therefore, your measurement uncertainty has now been reduced by the squeezing factor.

AUDIENCE: [INAUDIBLE] only that should be the [INAUDIBLE].

PROFESSOR: Yes. Actually, homodyne means we use the same frequency. Heterodyning would mean we use two different frequencies, but I'm not talking about that. We have to use exactly the same frequency here.

AUDIENCE: So this local oscillator also needs a laser to [INAUDIBLE]?

PROFESSOR: Yes. So, to address your question, Angie: what usually happens is that you start with one laser in those experiments, and you frequency-double the laser. If you want to do some squeezing, remember that we need a parametric oscillator, where one energetic photon gives us two photons.
So what you do is start with a laser-- say, at 1064 nanometers. You frequency-double it to a green laser. The green laser pumps your parametric oscillator, and then, by down conversion, you get squeezed light at 1064. But this is because you first doubled the laser and then broke each photon into two pieces: the squeezed light has exactly the same frequency as the laser you started with. And this laser is the local oscillator, or the reference clock, for your whole experiment. So everything in your experiment-- the doubled laser, the parametrically down converted beams-- is related to the single laser you started with, and everything is phase coherent. That's how the experiment is usually done.

Before I tell you what we're doing quantum mechanically, let me also get another question out of the system, which I've been asked several times. People ask me, well, the problem is that the ellipse rotates like this. Isn't there a way-- now I need my hand-- that we can have an ellipse rotating like this instead? That would be great, but it's sort of unnatural, because rotating is simply what the harmonic oscillator does. If you wanted to do that, you would need an operator which, at every cycle of the electromagnetic field, whenever the light wants to sort of do this-- no!-- always pushes it back. And this is impractical. You would need an oscillator which completely changes the quadrature components of your harmonic oscillator in every single cycle of the electromagnetic field. What homodyne detection does instead is, rather than forcing the light to stay aligned, which is very unnatural, we allow the light to evolve freely, but we now have an observer, our local oscillator, which is rotating synchronously with the ellipse. We have a local oscillator which is cosine omega t; it does, so to speak, exactly what the ellipse is doing. So in that sense, the local oscillator allows us to observe the ellipse always from its narrow side, because the local oscillator is co-rotating. The mathematics is pretty much a Fourier transform. The physics is the physics of a lock-in detector.

OK, now the only question remaining is, how do we mix? How do we get a product of our signal and the strong local oscillator? In an old radio, it's done by a nonlinear element, maybe a diode: if you drive a nonlinear element with two input sources, you get something which involves the product of the two. The principle of homodyne detection is that we mix light at a beam splitter. So the device which does all that for us is, after using so many words, I would say, disgustingly simple. It's really a half-silvered mirror. We have talked about beam splitters a lot here, because beam splitters perform wonderful unitary transformations, and we'll exploit them for many purposes. For the purpose of this lecture, I simply assume that we have a 50-50 beam splitter; what I'm telling you now about the beam splitter will be generalized, either later today or in the lecture on Wednesday.

So a beam splitter has two input ports and two output ports. We have light impinging on the beam splitter; we call those modes a and b. After the beam splitter, we have two output modes, and let me call them a naught and b naught. And what we measure is the output of the beam splitter: we have two photodetectors, and we measure the output. OK.
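Before the beam splitter algebra, here is the radio-style multiply-and-average idea from a moment ago as a minimal numerical sketch (the amplitudes and the averaging window are invented for illustration):

import numpy as np

# signal with unknown cosine and sine amplitudes (values invented)
c_amp, s_amp = 0.8, 0.3
omega = 1.0
t = np.linspace(0.0, 2 * np.pi * 1000, 2_000_001)   # many oscillation periods
signal = c_amp * np.cos(omega * t) + s_amp * np.sin(omega * t)

def homodyne(delta):
    # multiply by the local oscillator and average over time;
    # the factor 2 undoes the 1/2 from averaging cosine squared
    lo = np.cos(omega * t + delta)
    return 2 * np.mean(signal * lo)

print(homodyne(0.0))          # ~0.8: projects out the cosine amplitude
print(homodyne(-np.pi / 2))   # ~0.3: projects out the sine amplitude

With delta equal to 0, the average picks out the cosine amplitude; shifting the local oscillator phase by a quarter period picks out the sine amplitude-- the phase-sensitive detection just described. (The sign of the quarter-period shift is a convention.)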
The output modes, a naught and b naught, are simply obtained by taking the input modes and propagating them. We have two modes, a and b: say what goes vertically is a, and after the beam splitter we call it a naught; here it's b, and it becomes b naught. And what we have to do is transform the operators.

Let me just make a comment. With a beam splitter, you sometimes want to think that a quantum state comes in and is transformed. But instead of transforming the state of a photon, I can also transform the operator which creates a photon. One is the Heisenberg picture; one is the Schrodinger picture. Right now, I'll use the Heisenberg picture. Before the beam splitter, I have two operators, a and b; afterwards, I have two other operators, a naught and b naught. And if you do the time evolution, it's a unitary transformation: for operators, we have to multiply from the left and right hand sides with the unitary operator, u and u dagger. We will really talk about the beam splitter in its full beauty in a short while-- it will be a special case of what we discuss later. But I think it's pretty obvious that a 50-50 beam splitter simply creates two modes: one the symmetric combination, and one the antisymmetric combination.

OK. So in our homodyne detector, we measure the number of photons in the mode a naught, and, with the upper detector, the number of photons in the mode b naught. But what we then do is run the two photocurrents through a differential circuit, because we want to cancel certain noises, as you will see in a moment. This is why we form the difference signal, which we call I minus.

So let's calculate what I minus is. Well, the operator for the number of photons in the mode a naught is a naught dagger a naught, and we subtract b naught dagger b naught. And as you can immediately see, because of the beam splitter, this now involves a product of a and b: a b dagger plus a dagger b. In other words-- remember, I said we pick out the cosine omega t component by multiplying our signal with the local oscillator, and you would ask, how do you multiply two modes of the electromagnetic field? Well, just send them to a beam splitter. The beam splitter creates the sum of them. But then your photodetector is a square-law detector: you measure the electric field squared, or, as an operator, a dagger a. And now you get the cross terms, a b dagger and a dagger b. So this is how we multiply two operators-- how we get a signal proportional to a or a dagger times b or b dagger. By the way, if we didn't take the difference, we would also get the terms a dagger a and b dagger b-- just the mode a or the mode b by themselves-- and we want to get rid of those. By taking the difference between the two photodetectors, those parts of the signal are common mode and are subtracted out. So this is called balanced homodyne detection, because we have the balanced beam splitter, we measure the two signals, and we take the difference of the two signals.

OK. There's one more thing we have to add, and then we understand balanced homodyne detection. We want to exploit that our mode b-- remember, we want to measure mode a; a is squeezed, a has quantum properties, a has maybe only single photons-- b is just a trick to project out the cosine omega t.
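That cancellation is easy to verify at the operator level. Here is a sketch on a truncated two-mode Fock space; the truncation size is an arbitrary choice:

import numpy as np

N = 10                                    # per-mode Fock truncation (small, for speed)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # single-mode annihilation operator
I = np.eye(N)
A = np.kron(a, I)                         # mode a on the two-mode space
B = np.kron(I, a)                         # mode b
dag = lambda M: M.conj().T

# 50-50 beam splitter: symmetric and antisymmetric combinations
A0 = (A + B) / np.sqrt(2)
B0 = (A - B) / np.sqrt(2)

# difference of the two photocurrent operators
I_minus = dag(A0) @ A0 - dag(B0) @ B0

# the direct terms a†a and b†b cancel; only the cross terms survive
print(np.allclose(I_minus, dag(A) @ B + A @ dag(B)))   # True

The a dagger a and b dagger b pieces drop out of the difference, leaving exactly the cross terms that multiply the signal with the local oscillator.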
And we do that by choosing for b a strong coherent state, described by a coherent state parameter beta. Whenever we have a strong coherent state, we can replace the operator b by beta, and b dagger by beta star; this is the classical field limit of a strong coherent state. And now we can ask, what is our difference signal I minus? Well, the coherent state depends on the phase angle theta, so the signal will be phase sensitive: it will depend on the angle theta. But there is one more thing. If you use a stronger coherent state, of course, both of our outputs go up in proportion to beta. Therefore, we normalize by dividing by beta.

So this is now our normalized operator for the signal I minus: b has been replaced by beta, and b dagger by beta star. That gives us a times e to the minus i theta-- because of the complex conjugation-- plus a dagger times e to the i theta, divided by 2. Therefore, if we choose the phase theta to be 0, we measure the symmetric combination, a plus a dagger over 2, and this is-- just comparing notes-- simply the a1 quadrature operator divided by square root 2. And if you put a phase shifter, just a dispersive element, into the strong local oscillator and shift the phase by pi over 2-- by delaying the light by a quarter wavelength-- what we project out is the other quadrature component. So, using the beam splitter and the local oscillator, we can now measure the expectation values for a1 and for a2.

I've posted a few papers on the website which show some of the pioneering experiments where people did exactly that. When they observed the quadrature component which was squeezed-- they usually did it for a squeezed vacuum, so a1 and a2 had 0 expectation value-- the interesting part was, how much noise was there? If you don't squeeze, you find that your normalized noise simply corresponds to the classical photon shot noise. But if you are squeezed, you find that one quadrature component has a smaller noise than shot noise-- you have n photons in your difference signal, and your noise is less than square root n-- whereas in the other quadrature component, it is larger. So you can look up on the website papers where people slowly varied the phase of the local oscillator, and you see how the noise goes below shot noise, exceeds shot noise, goes below shot noise, and exceeds shot noise again. And this was the first evidence for the generation of squeezed light. Questions?

OK. So we have discussed the detection of squeezed light using a balanced homodyne detector-- and balanced means that the beam splitter was 50-50. Now we are ready to discuss the unbalanced case. So let's get our unbalanced beam splitter and, just for the sake of the argument, say it has a really good transmission t, where the transmission coefficient squared, t squared, tells you what fraction of the power is transmitted. Let's assume, for the sake of the argument, that 99% is transmitted. Then, if we start with our signal a-- which may be a squeezed quantum state, a number state, you name it-- a naught is pretty much the same as a; we haven't lost much. But now we use our local oscillator, which is very strong. Only a very small fraction of it will be reflected, but we can always compensate for the small reflection by making b even stronger. Let's see what we get.
So the output mode, a naught, is now the transmission coefficient times a. The operators are amplitudes, so for a 99% beam splitter we have to take the square root of 0.99-- this is the amplitude transmission coefficient. And now we have the contribution of the strong local oscillator, which we will again approximate by its coherent state eigenvalue, with the amplitude reflection coefficient, the square root of 1 minus t squared. If we make the approximation that t is approximately 1, we can take it out of the parentheses and, for t close to 1, neglect it. And what we see is that we have obtained the original quantum state a-- or, I should say, the operator; I'm in the Heisenberg picture, and a is the mode operator at the input of the unbalanced homodyne detector-- and what we have simply done is displace the mode operator a. The displacement operator has an argument of the square root of 1 minus t squared, times beta, together with its Hermitian conjugate.

So what I've shown you here is that the local oscillator plus the beam splitter is the realization, the implementation, of the displacement operator. In the limit that t goes to unity, we are not losing anything of our quantum state; but by reflecting in the amplitude of a strong coherent state, we simply take our quantum state and displace it in this two-dimensional plane. That's what this beam splitter does for us. Any questions?

In the next few classes, we will really take advantage of those elements. We know now that the displacement operator is just a beam splitter, and we know that when we have squeezed some light, we can use balanced homodyne detection to measure one quadrature component or the other. So what I hope-- for those of you who haven't heard about it-- you take away from this is that these are extremely simple elements, but by combining them, we can realize very sophisticated schemes of quantum optics. When I heard about it for the first time, the mathematics looked so fancy that I couldn't believe such simple elements could actually realize what those operators describe. So I gave you the example of the displacement operator.

When I learned about the beam splitter and its underlying physics, there was one thing which really fascinated me, and it concerns the most simple element you can think of. I mean, what is simpler than a beam splitter? A beam splitter has two inputs, two outputs. The simplest optical element is just an attenuator: put a window into your laser beam and you lose 4% of your power per surface, or put in just a little bit of dirty optics and you lose a few percent. So what I want to discuss with you now is, what really is an attenuator, quantum mechanically? Well, classically, the attenuator would do the following. An attenuator is a device which has a transmission-- the transmission coefficient squared-- smaller than unity. And in a classical system, if you have a coherent state, you would simply assume that the coherent state gets multiplied by the transmission coefficient. In other words, you have your original state, described by this phasor, alpha, and the action of the attenuator would simply be to scale everything down to t times alpha.
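Stepping back for a second to the displacement result just derived, here it is in numbers; the input amplitude and the local oscillator value are invented for the example:

# amplitude transmission for a 99% power beam splitter
t = 0.99 ** 0.5
r = (1 - t**2) ** 0.5       # amplitude reflection = 0.1

alpha = 1.0 + 0.5j          # input coherent amplitude (invented)
beta = 30.0 - 10.0j         # strong local oscillator amplitude (invented)

out = t * alpha + r * beta
print(out)                  # ~(3.99-0.50j): essentially alpha, displaced by r*beta = 3-1j

The signal itself is barely attenuated, but it is displaced by r times beta, which can be made as large as you like by turning up beta.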
But if there are fluctuations about the coherent state, then, in this naive picture, the fluctuations also get attenuated-- because everything gets attenuated by this attenuator. And if you look at that, you should immediately say, no, this is quantum mechanically forbidden. A coherent state is a minimum uncertainty state: this shaded area cannot be smaller than 1/2, but here it has become smaller. So what I've drawn here is a violation of quantum mechanics.

Let me just give you the example. The coherent state is quantum limited, and if you calculate the fluctuations in the photon number, it's shot noise. So, a simple, intuitive example: if you have 10,000 photons plus or minus 100, that's shot noise, square root n. If you could now attenuate by a factor of 100, you would go from 10,000 plus or minus 100 to 100 plus or minus 1. That's much better than shot noise. This is what a simple-minded attenuator would do, and you would immediately say, that's too good to be true-- I cannot get sub-shot-noise light just by attenuating. So what is wrong? Impossible. Not allowed. Well, we just tried to formulate something intuitively, and we have to be careful.

We know already one way to attenuate an input beam, so maybe we should go back to that situation and analyze it. We can attenuate with a beam splitter, with a transmission coefficient t, and then we get our transmitted coherent light, and something gets reflected. But now we realize that this beam splitter is not just a device with one input-- it has another input. And you may say, well, I don't care about the other input; I don't want to use it. Well, if you don't use that input, it contains the vacuum state. So if you realize the attenuator with the beam splitter, it means-- and this is what the math really shows-- that in addition to the attenuated coherent state, which mathematically also has attenuated fluctuations, you have to add something: the reflection coefficient times the vacuum state. And if you now do the math correctly and add the two together, you find you get a coherent state which has an amplitude of t alpha, but which has the correct fluctuations-- it is again a minimum uncertainty state. So the disk of your attenuated state has now exactly the same area as that of the unattenuated state. I've shown you the physical part and the graphical solution. The math is very simple, but I really want you to do it yourself-- this is a new homework problem we designed to illustrate the physics.

But what it tells you is the following. If you take a neutral density filter out of the lab and say, this is not a beam splitter, this is an attenuator-- sorry, you cannot simply attenuate a quantum mechanical mode. That is not a unitary time evolution. Your attenuator, without you knowing it, couples the electromagnetic wave to whatever-- to the heat bath which is in your attenuator; I don't even want to describe it. But you're not circumventing the limitation of the beam splitter. The same happens whenever you attenuate-- whenever you have a laser beam and it undergoes losses, for instance when you send a laser beam through the atmosphere and it undergoes some losses by, who knows, Rayleigh scattering in the air or something like this.
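Here is the photon-counting version of this argument as a small simulation sketch, with invented numbers: a physical attenuator transmits each photon independently, i.e. it binomially thins the Poissonian counts.

import numpy as np

rng = np.random.default_rng(1)
n = rng.poisson(10_000, size=100_000)   # coherent light: Poissonian photon counts
print(n.mean(), n.std())                # ~10000, ~100: shot noise, sqrt(n)

# the naive attenuator would give 100 +/- 1: forbidden sub-shot-noise light;
# a physical attenuator transmits each photon independently with probability 0.01
m = rng.binomial(n, 0.01)
print(m.mean(), m.std())                # ~100, ~10: still shot noise

Thinning a Poissonian distribution gives a Poissonian again, with standard deviation equal to the square root of the new mean-- the counting-statistics face of the vacuum the beam splitter couples in.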
That means you get an attenuated coherent state, but you also couple in the fluctuations of the vacuum, and this re-establishes shot noise at the lower level of intensity. So your attenuator is not a single-state device. Dissipation, attenuation, really means you connect with other parts of Hilbert space. You cannot attenuate within a small part of Hilbert space-- this is impossible; it is not a unitary time evolution.

What I've just told you has dramatic consequences for any form of non-classical, or squeezed, light. And that's the following. Most of you are experimentalists, and you know that when you run a laser at one or two watts and send it through optics, and shutters, and optical fibers-- how much do you get at the end of your experiment? Not even 50%. So whenever you create light and then want to do something with it, you lose some of the light. Now assume you have done what, within your lifetime, was really a breakthrough making scientific headlines: you have generated squeezed light. And now you want to use the squeezed light-- shine it on atoms and do a precision measurement which is better than the standard quantum limit, because you have squeezed the ellipse and you want to exploit the sharpness of the ellipse. What happens to the aspect ratio of the ellipse when your beam is attenuated?

Let me just discuss it with you graphically. Let's assume we have a squeezed state and we send it through an optical fiber. The result is that you would never actually want to send a squeezed state through a lossy fiber-- but I want you to realize why. So let's assume we have our squeezed state, symbolized by this ellipse; let's use some red color for the state. Now, in the time evolution, we have some scattering, some fiber absorption. But you know already that the absorption is, in reality, a beam splitter which couples in the vacuum. So we have some finite transmission coefficient. And that means-- this is the bad news-- that your ellipse shrinks. It shrinks by the transmission factor t. But that is not what is really bad; you just lose some of your power. The really bad thing is that, multiplied by the reflection coefficient, you couple in the vacuum. Oops-- I'm going to change to red. And as a result, since the noise of the vacuum is equal in both quadrature components, you've worked so hard to squeeze it, to make it asymmetric, but what you get now from your ellipse is something which is much more egg-shaped. In other words, you can write it all down with operators, but once you understand what is going on, you immediately realize that losses reduce the squeezing. And this is a challenge to all experiments using squeezed or non-classical light. You see how the scaling works: if you have squeezed your ellipse by a factor of 100, even 1% of vacuum will spoil your squeezing. So the more you have squeezed, the more non-classical the light is, the more valuable it is-- and the more sensitive it is to even very small losses. Questions? OK, good.

Now we have 10 more minutes. What I want to do now is show you that the language we have used, the methods we have introduced, can be used for something which is really cool-- teleportation. I want to show you how balanced homodyne detection, quadrature measurement, and the displacement operator can be put together to realize a scheme for teleporting quantum states. This is an application of squeezing, homodyning, and all that: teleportation of light.
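Before the teleportation story, here is the loss argument above in two lines of arithmetic, using the beam-splitter-with-vacuum mixing rule V_out = T V_in + (1 - T) V_vac for power transmission T; the variances are invented for illustration:

V_vac = 0.5            # vacuum quadrature variance in these units
V_sq = 0.005           # squeezed variance, a factor of 100 below vacuum (invented)

for T in (0.99, 0.90):                 # power transmission
    V_out = T * V_sq + (1 - T) * V_vac
    print(T, V_out, V_vac / V_out)     # remaining squeezing factor

One percent of loss already cuts a factor-of-100 squeezing down to roughly a factor of 50, and ten percent leaves barely a factor of 9.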
Let me just illustrate what the problem is. I know we have only 10 minutes, and I've decided not to write down all the math for you. I went, actually, to the Wikipedia page, corrected a few mistakes in the equations, and edited a few explanations; I think you can just sit down and read it by yourself. What I want to explain to you here are the physical concepts behind teleportation, and the crucial steps needed to realize it.

So first, what is teleportation? Well, teleportation has a sender and a receiver, which in quantum information science are called Alice and Bob. Teleportation means Alice has a quantum state, psi, and she wants to send this quantum state to Bob. In other words, you have, maybe, some squeezed light-- something, a quantum state psi, which is interesting-- and the question is, how can Bob create an identical copy of this quantum state, so that he ends up with the same quantum state?

Before you can appreciate teleportation, you have to realize what the problem is. The problem lies in fundamental properties of quantum systems-- fundamental limitations on what you can do with a quantum system. First, I have to tell you what is allowed and what is not. In this game, teleportation of course does not mean that you take your atom or your photons in the state psi and just propagate them to Bob, so that Bob has them. That's trivial. I can send any quantum state to you by transmitting an atom or a laser beam to you, and you have the same quantum state I had earlier. This is not teleportation; this is trivial propagation. What is meant is that we don't have a quantum channel-- I'm not allowed to send my quantum state to you. But in teleportation, I can do a measurement on the quantum state, call you up through a classical communication channel, and tell you, the result of my measurement is such and such.

And then you would say, well, if I have a spin system and I measure spin up, I call you and say, my measurement was spin up, and you create a spin-up system-- isn't that teleportation? The answer is no, because maybe the state I had was a superposition of spin up and down, and what I measured was only spin up. By telling you it's spin up, you would never create the superposition state; you would just create a spin-up state. So the fact is, if I do a measurement on my quantum state and report the result to you, you have insufficient information, because a projective measurement on a quantum state inevitably leads to loss of information. A measurement on a single quantum state does not create enough information to recreate the quantum state.

Of course, there is an obvious solution. When we want to obtain information about a quantum state, we often, in quantum mechanics, have to do many measurements. We can take a spin state and measure what is its x, what is its y, what is its z component. We can completely characterize the spin state, and then we know everything about it. But the problem is, we have only one quantum state, and we can't do repeated measurements on it. Then you would say, well, the next thing is, why don't you just take your quantum state and Xerox it-- make many, many copies. Then you have an ensemble, and you can take as many measurements as you want. You can do an x measurement, a p measurement; you can reconstruct the complete wave function. You can measure in the position basis, measure in the momentum basis, and collect all the information.
But the problem is-- otherwise the whole of teleportation would not be an issue at all-- there is the no-cloning theorem. You cannot duplicate a quantum state. If you have an atom in a certain quantum state and another atom, it is quantum-mechanically forbidden-- there is no unitary transformation, no way of creating a situation where you start with one unknown quantum state plus another atom, or another light beam, and after some interaction you have two copies of the same quantum state. You cannot clone.

So, based on all that: we cannot clone the quantum state. We are left with the one copy of the quantum state which Alice has. Alice is not allowed to send it to Bob-- maybe Bob is on the other side of the ocean, or on another planet. All Alice can do is perform one measurement and tell Bob, this is my result. So this is the problem of teleportation. How is it possible at all? The way I have put it, it seems impossible. But there is a way out.

Let me just write down what I said. The goal is: Alice performs a measurement and reports the result to Bob, and Bob will then recreate the quantum state. What I've just said is that a single measurement is not enough information. So this is not enough-- we need one more resource. And the resource which is now used for quantum teleportation is an entangled system, or-- and this is why I talk about it today-- squeezed light. That's the same thing; it's a form of entanglement. I told you that when we generate squeezed light with parametric down conversion, the down conversion takes a green photon and creates two identical infrared photons. Until now, we have discussed the case where those identical infrared photons go into the same mode, which is then squeezed. But a slight extension of this concept is that, in parametric down conversion, you squeeze something, and one photon goes to Alice while the other goes to Bob. So now Alice and Bob have an additional resource: each of them owns one of the two twin brothers, the photons created in the parametric down conversion process. And that will do the trick.

So the idea is the following. Creating those twin beams of photons is simply done with an optical parametric oscillator. There is one extension, which is described on the Wiki-- I won't have time to explain it to you. It's a two-mode OPO: it puts the two identical photons not into the same beam, as we did before with the a dagger squared and a squared operators, but into two different modes. You know the magic of beam splitters by now. So two beams come out; one goes to Alice, one goes to Bob.

If Alice would take her input state-- this unknown state, which is handed to her by Victor, somebody else who participates in the game-- and perform a measurement on it, we know already that the quantum state would be destroyed. The result of the measurement-- let's talk about spin-1/2-- would only be one spin projection; it's not enough to reproduce the state. But what she does instead is use, again, the magic of the beam splitter. One of those twin-brother photons is now mixed with the unknown quantum state at a beam splitter, and the two outputs of the beam splitter now enter balanced homodyne detection. And you see the ingredients: this, and this, will be measured.
The other input for each balanced homodyne detector is a strong local oscillator. The phase of the local oscillator is chosen such that, in one case, you measure the x, or a1, quadrature component, and in the other, the p, or a2, component. So what Alice has done, by using this balanced homodyning with the local oscillator, is actually to perform two measurements. The quantum state is destroyed, but here she gets an x value, and here she gets a p value.

So how does the magic work out? It's really just a few lines of mathematics now. These were, sort of, twin brothers, but it wasn't clear in which quantum state the twin brothers were. If you write it down, these are twin brothers-- this is Brother 1, this is Brother 2-- but they are in a continuum of states. And when Alice would measure that her twin brother is in state big X, that would be a projective measurement, and Bob's twin brother would now, with certainty, also be in the state X. So what happens is, the measurement of x and p is now, through the measurement process, putting the other photon-- the other beam, the other twin brother-- into a specific quantum state. And if you follow the few lines of math, the magic is that the quantum state which is now here, with Bob, turns out to be a displaced copy of the original state, where the displacement depends on x and p.

So Alice now tells Bob, hey, I measured x and p. And Bob sets up his displacement operator-- remember, a displacement operator is nothing else than an unbalanced beam splitter with a huge local oscillator at the other input-- and makes a displacement which depends on x and p. He can take the other twin brother, shift it back, and he will exactly regenerate the quantum state which Alice had. So this is how a quantum state can be transmitted without having any quantum channel for transmission. You're not propagating the quantum state; you use classical communication, but the resource you use is EPR pairs, or squeezed light.

It's a few lines of equations, but I don't have time to go through them. They're annotated in a way that I think will make enjoyable reading for you. Any questions? Time is over. OK. A reminder for those who came late-- this week we have three classes: Monday, Wednesday, Friday. Have a good afternoon.
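As a postscript for readers, here is a minimal classical phase-space sketch of the continuous-variable protocol described above-- not from the lecture. The EPR pair is modeled by Gaussian variables whose combinations x_A minus x_B and p_A plus p_B are squeezed by an invented factor e to the minus r, and unit-gain displacements are assumed.

import numpy as np

rng = np.random.default_rng(2)
M = 200_000
r = 2.0                                   # EPR squeezing parameter (invented)

# approximate EPR pair: xA - xB and pA + pB are squeezed toward zero
xA = rng.normal(0, np.cosh(r), M)
xB = xA - rng.normal(0, np.exp(-r), M)
pA = rng.normal(0, np.cosh(r), M)
pB = -pA + rng.normal(0, np.exp(-r), M)

x_in, p_in = 1.2, -0.7                    # "unknown" input quadratures (invented)

# Alice mixes the input with her EPR half on a 50-50 beam splitter
# and homodynes the two outputs: one port gives x_u, the other p_v
x_u = (x_in - xA) / np.sqrt(2)
p_v = (p_in + pA) / np.sqrt(2)

# Bob displaces his half by the classically communicated results
x_out = xB + np.sqrt(2) * x_u
p_out = pB + np.sqrt(2) * p_v

print(x_out.mean(), p_out.mean())   # ~1.2, ~-0.7: the input reappears at Bob
print(x_out.std(), p_out.std())     # residual noise ~ e^(-r)

Bob's output reproduces the input quadratures up to residual noise of order e to the minus r, which vanishes in the limit of perfect EPR correlations.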